WorldWideScience

Sample records for inaccurate geostatistical assumptions

  1. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell short of the nominal 80%), while others were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
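
    The retro-fitting step described in this abstract is easy to reproduce. Below is a minimal sketch (an illustration under assumed inputs, not the authors' simulation code) for the continuous-outcome case: it computes the per-group sample size for a two-sample comparison of means at nominal 80% power using the normal approximation, then the power actually achieved when the true standard deviation differs from the one assumed at the design stage.

      import math
      from scipy.stats import norm

      def required_n(delta, sd, alpha=0.05, power=0.80):
          """Per-group n for a two-sample z-test (normal approximation)."""
          z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
          return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

      def real_power(n, delta, true_sd, alpha=0.05):
          """Power actually achieved with n per group when the true SD differs."""
          z_a = norm.ppf(1 - alpha / 2)
          return norm.cdf(delta / (true_sd * math.sqrt(2.0 / n)) - z_a)

      n = required_n(delta=5.0, sd=10.0)             # design stage: assumed SD = 10
      print(real_power(n, delta=5.0, true_sd=12.0))  # true SD 20% larger -> ~0.65, underpowered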

  2. A geostatistical analysis of geostatistics

    NARCIS (Netherlands)

    Hengl, T.; Minasny, B.; Gould, M.

    2009-01-01

    The bibliometric indices of the scientific field of geostatistics were analyzed using statistical and spatial data analysis. The publications and their citation statistics were obtained from the Web of Science (4000 most relevant), Scopus (2000 most relevant) and Google Scholar (5389). The focus was

  3. Reducing reliance on inaccurate information.

    Science.gov (United States)

    Rapp, David N; Hinze, Scott R; Kohlhepp, Kristine; Ryskin, Rachel A

    2014-01-01

    People learn from the texts that they read, but sometimes what they read is wrong. Previous research has demonstrated that individuals encode even obvious inaccuracies, at times relying on the misinformation to complete postreading tasks. In the present study, we investigated whether the influence of inaccurate information might be reduced by encouraging the retrieval of accurate knowledge. Participants read an extended text that contained both accurate and inaccurate assertions, after which they evaluated the validity of statements associated with those assertions. In general, participants made more mistakes in their evaluations of statements after having read inaccurate as compared to accurate assertions, offering evidence of the influence of misinformation. However, when participants were tasked with correcting inaccuracies during reading, their mistakes were substantially reduced. Encouraging the retrieval of accurate knowledge during reading can reduce the influence of misinformation. These findings are discussed with respect to the contributions of episodic traces and prior knowledge on learning, as well as to the conditions that support successful comprehension.

  4. Bayesian Geostatistical Design

    DEFF Research Database (Denmark)

    Diggle, Peter; Lophaven, Søren Nymand

    2006-01-01

    This paper describes the use of model-based geostatistics for choosing the set of sampling locations, collectively called the design, to be used in a geostatistical analysis. Two types of design situation are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of interpoint distances should be included in the design, and the widely used regular design is often not the best choice.

  5. 10th International Geostatistics Congress

    CERN Document Server

    Rodrigo-Ilarri, Javier; Rodrigo-Clavero, María; Cassiraga, Eduardo; Vargas-Guzmán, José

    2017-01-01

    This book contains selected contributions presented at the 10th International Geostatistics Congress held in Valencia from 5 to 9 September, 2016. This is a quadrennial congress that serves as the meeting point for any engineer, professional, practitioner or scientist working in geostatistics. The book contains carefully reviewed papers on geostatistical theory and applications in fields such as mining engineering, petroleum engineering, environmental science, hydrology, ecology, and other fields.

  6. 4th International Geostatistics Congress

    CERN Document Server

    1993-01-01

    The contributions in this book were presented at the Fourth International Geostatistics Congress held in Tróia, Portugal, in September 1992. They provide a comprehensive account of the current state of the art of geostatistics, including recent theoretical developments and new applications. In particular, readers will find descriptions and applications of the more recent methods of stochastic simulation together with data integration techniques applied to the modelling of hydrocarbon reservoirs. In other fields there are stationary and non-stationary geostatistical applications to geology, climatology, pollution control, soil science, hydrology and human sciences. The papers also provide an insight into new trends in geostatistics, particularly the increasing interaction with many other scientific disciplines. This book is a significant reference work for practitioners of geostatistics both in academia and industry.

  7. 7th International Geostatistics Congress

    CERN Document Server

    Deutsch, Clayton

    2005-01-01

    The conference proceedings consist of approximately 120 technical papers presented at the Seventh International Geostatistics Congress held in Banff, Alberta, Canada in 2004. All the papers were reviewed by an international panel of leading geostatisticians. The five major sections are: theory, mining, petroleum, environmental and other applications. The first section showcases new and innovative ideas in the theoretical development of geostatistics as a whole; these ideas will have large impact on (1) the directions of future geostatistical research, and (2) the conventional approaches to heterogeneity modelling in a wide range of natural resource industries. The next four sections are focused on applications and innovations relating to the use of geostatistics in specific industries. Historically, mining, petroleum and environmental industries have embraced the use of geostatistics for uncertainty characterization, so these three industries are identified as major application areas. The last section is open...

  8. A practical primer on geostatistics

    Science.gov (United States)

    Olea, Ricardo A.

    2009-01-01

    The Challenge—Most geological phenomena are extraordinarily complex in their interrelationships and vast in their geographical extension. Ordinarily, engineers and geoscientists are faced with corporate or scientific requirements to properly prepare geological models with measurements involving a small fraction of the entire area or volume of interest. Exact description of a system such as an oil reservoir is neither feasible nor economically possible. The results are necessarily uncertain. Note that the uncertainty is not an intrinsic property of the systems; it is the result of incomplete knowledge by the observer. The Aim of Geostatistics—The main objective of geostatistics is the characterization of spatial systems that are incompletely known, systems that are common in geology. A key difference from classical statistics is that geostatistics uses the sampling location of every measurement. Unless the measurements show spatial correlation, the application of geostatistics is pointless. Ordinarily the need for additional knowledge goes beyond a few points, which explains the display of results graphically as fishnet plots, block diagrams, and maps. Geostatistical Methods—Geostatistics is a collection of numerical techniques for the characterization of spatial attributes using primarily two tools: probabilistic models, which are used for spatial data in a manner similar to the way in which time-series analysis characterizes temporal data, or pattern recognition techniques. The probabilistic models are used as a way to handle uncertainty in results away from sampling locations, making a radical departure from alternative approaches like inverse distance estimation methods. Differences with Time Series—When dealing with time-series analysis, users frequently concentrate their attention on extrapolations for making forecasts. Although users of geostatistics may be interested in extrapolation, the methods work at their best interpolating. This simple difference

  9. Gathering asynchronous mobile robots with inaccurate compasses

    OpenAIRE

    Souissi, Samia; Defago, Xavier; Yamashita, Masafumi

    2006-01-01

    This paper considers a system of asynchronous autonomous mobile robots that can move freely in a two-dimensional plane with no agreement on a common coordinate system. Starting from any initial configuration, the robots are required to eventually gather at a single point, not fixed in advance (gathering problem). Prior work has shown that gathering oblivious (i.e., stateless) robots cannot be achieved deterministically without additional assumptions. In particular, if robots can detect multipl...

  10. Memory conformity affects inaccurate memories more than accurate memories.

    Science.gov (United States)

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  11. Avoiding inaccurate interpretations when performing ultrasonic tests on welded joints

    International Nuclear Information System (INIS)

    Crostack, H.A.; Schuster, V.

    1992-01-01

    As reported in this article, conventional procedures for assessing errors when performing ultrasonic tests on welded seams often no longer meet modern-day requirements. However, the safety of welded seams can be assessed considerably more accurately by adopting suitable assessment techniques, such as pattern recognition. This will mean that inaccurate interpretations can be ruled out to a large extent. (RHM) [de]

  12. Book Review Geostatistical Analysis of Compositional Data

    Energy Technology Data Exchange (ETDEWEB)

    Carle, S F

    2007-03-26

    Compositional data are represented as vector variables with individual vector components ranging between zero and a positive maximum value representing a constant sum constraint, usually unity (or 100 percent). The earth sciences are flooded with spatial distributions of compositional data, such as concentrations of major ion constituents in natural waters (e.g. mole, mass, or volume fractions), mineral percentages, ore grades, or proportions of mutually exclusive categories (e.g. a water-oil-rock system). While geostatistical techniques have become popular in earth science applications since the 1970s, very little attention has been paid to the unique mathematical properties of geostatistical formulations involving compositional variables. The book 'Geostatistical Analysis of Compositional Data' by Vera Pawlowsky-Glahn and Ricardo Olea (Oxford University Press, 2004), unlike any previous book on geostatistics, directly confronts the mathematical difficulties inherent to applying geostatistics to compositional variables. The book righteously justifies itself with prodigious referencing to previous work addressing nonsensical ranges of estimated values and error, spurious correlation, and singular cross-covariance matrices.
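
    The "spurious correlation" problem that the review highlights is easy to demonstrate numerically. The sketch below is a hedged illustration (not material from the book): three independent positive components become negatively correlated once each row is closed to a constant sum.

      import numpy as np

      rng = np.random.default_rng(0)
      raw = rng.lognormal(size=(10000, 3))           # three independent components
      comp = raw / raw.sum(axis=1, keepdims=True)    # closure: each row sums to one
      print(np.corrcoef(raw, rowvar=False).round(2))    # near-zero off-diagonals
      print(np.corrcoef(comp, rowvar=False).round(2))   # spurious negative correlations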

  13. Geostatistics and spatial analysis in biological anthropology.

    Science.gov (United States)

    Relethford, John H

    2008-05-01

    A variety of methods have been used to make evolutionary inferences based on the spatial distribution of biological data, including reconstructing population history and detection of the geographic pattern of natural selection. This article provides an examination of geostatistical analysis, a method used widely in geology but which has not often been applied in biological anthropology. Geostatistical analysis begins with the examination of a variogram, a plot showing the relationship between a biological distance measure and the geographic distance between data points and which provides information on the extent and pattern of spatial correlation. The results of variogram analysis are used for interpolating values of unknown data points in order to construct a contour map, a process known as kriging. The methods of geostatistical analysis and discussion of potential problems are applied to a large data set of anthropometric measures for 197 populations in Ireland. The geostatistical analysis reveals two major sources of spatial variation. One pattern, seen for overall body and craniofacial size, shows an east-west cline most likely reflecting the combined effects of past population dispersal and settlement. The second pattern is seen for craniofacial height and shows an isolation by distance pattern reflecting rapid spatial changes in the midlands region of Ireland, perhaps attributable to the genetic impact of the Vikings. The correspondence of these results with other analyses of these data and the additional insights generated from variogram analysis and kriging illustrate the potential utility of geostatistical analysis in biological anthropology. (c) 2008 Wiley-Liss, Inc.
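
    The empirical variogram described above is straightforward to compute. A minimal sketch (an assumed illustration, not code from the article): for each lag bin, the semivariance is half the mean squared difference over all pairs of points whose separation falls in that bin.

      import numpy as np

      def empirical_variogram(coords, values, n_bins=10):
          """Isotropic empirical semivariogram: gamma(h) is half the average of
          (z_i - z_j)^2 over all pairs whose separation falls in lag bin h."""
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          sq = (values[:, None] - values[None, :]) ** 2
          iu = np.triu_indices(len(values), k=1)        # count each pair once
          dist, sqdiff = d[iu], sq[iu]
          edges = np.linspace(0, dist.max(), n_bins + 1)
          lags, gammas = [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              m = (dist >= lo) & (dist < hi)
              if m.any():
                  lags.append(dist[m].mean())
                  gammas.append(sqdiff[m].mean() / 2.0)
          return np.array(lags), np.array(gammas)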

  14. Multiverse Assumptions and Philosophy

    OpenAIRE

    James R. Johnson

    2018-01-01

    Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with fundamental nature of reality, ideas that cannot be proven right or wrong) topics such as: infinity, duplicate yous, hypothetical fields, mo...

  15. 3D Geostatistical Modeling and Uncertainty Analysis in a Carbonate Reservoir, SW Iran

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Kamali

    2013-01-01

    Full Text Available The aim of geostatistical reservoir characterization is to utilize a wide variety of data, at different scales and accuracies, to construct reservoir models which are able to represent geological heterogeneities and also to quantify uncertainty by producing a number of equiprobable models. Since all geostatistical methods used in the estimation of reservoir parameters are inaccurate, modeling the “estimation error” in the form of uncertainty analysis is very important. In this paper, the definition of Sequential Gaussian Simulation has been reviewed and the construction of stochastic models based on it has been discussed. Subsequently, ranking and uncertainty quantification of those stochastically populated equiprobable models and a sensitivity study of the modeled properties have been presented. Consequently, the application of sensitivity analysis to stochastic models of reservoir horizons, petrophysical properties and stochastic oil-water contacts, and their effect on reserves, clearly shows that any alteration in the reservoir geometry has a significant effect on the oil in place. The studied reservoir is located in carbonate sequences of the Sarvak Formation, Zagros, Iran; it comprises three layers. The first, located beneath the cap rock, contains the largest portion of the reserve, while the other layers hold little oil. Simulations show that the average porosity and water saturation of the reservoir are about 20% and 52%, respectively.
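
    Sequential Gaussian Simulation, named above as the engine for producing equiprobable models, can be sketched in a few lines. The 1-D toy version below is a hedged illustration (the exponential covariance and all parameter values are assumptions, not the paper's settings): nodes are visited in random order, each is simple-kriged from everything known so far, and a value is drawn from the resulting conditional normal distribution.

      import numpy as np

      def sgs_1d(grid_x, data_x, data_z, sill=1.0, corr_len=10.0, seed=0):
          """Sequential Gaussian Simulation on a 1-D grid.  Assumes the data are
          already normal-score transformed (mean 0, variance `sill`).  Each node
          is simple-kriged from all points known so far, then a value is drawn
          from the conditional normal distribution and added to the data set."""
          rng = np.random.default_rng(seed)
          cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)   # exponential model
          known_x, known_z = list(data_x), list(data_z)
          sim = np.empty(len(grid_x))
          for i in rng.permutation(len(grid_x)):
              kx, kz = np.array(known_x), np.array(known_z)
              c = cov(kx - grid_x[i])                            # data-target covariances
              lam = np.linalg.solve(cov(kx[:, None] - kx[None, :])
                                    + 1e-9 * np.eye(len(kx)), c) # kriging weights
              mean = lam @ kz                                    # simple-kriging mean
              var = max(sill - lam @ c, 0.0)                     # simple-kriging variance
              sim[i] = rng.normal(mean, np.sqrt(var))
              known_x.append(grid_x[i]); known_z.append(sim[i])
          return sim

      # Different seeds give different, equally probable realizations:
      real1 = sgs_1d(np.linspace(0.5, 49.5, 50), data_x=[5.0, 30.0], data_z=[1.2, -0.4], seed=1)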

  16. Multiverse Assumptions and Philosophy

    Directory of Open Access Journals (Sweden)

    James R. Johnson

    2018-02-01

    Full Text Available Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with fundamental nature of reality, ideas that cannot be proven right or wrong) topics such as: infinity, duplicate yous, hypothetical fields, more than three space dimensions, Hilbert space, advanced civilizations, and reality established by mathematical relationships. It is easy to confuse multiverse proposals because many divergent models exist. This overview defines the characteristics of eleven popular multiverse proposals. The characteristics compared are: initial conditions, values of constants, laws of nature, number of space dimensions, number of universes, and fine tuning explanations. Future scientific experiments may validate selected assumptions; but until they do, proposals by philosophers may be as valid as theoretical scientific theories.

  17. A Practical pedestrian approach to parsimonious regression with inaccurate inputs

    Directory of Open Access Journals (Sweden)

    Seppo Karrila

    2014-04-01

    Full Text Available A measurement result often dictates an interval containing the correct value. Interval data is also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored in model fitting. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use, even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only a few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
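
    One plausible reading of the linear programming formulation referred to above is sketched below (an assumed reconstruction, not the paper's exact formulation): find the coefficient vector with the smallest L1 norm whose predictions fall inside every observed interval, which is precisely the parsimony-from-interval-robustness connection the abstract describes.

      import numpy as np
      from scipy.optimize import linprog

      def interval_regression(A, lo, hi):
          """Sparse linear model consistent with interval data: minimize
          ||beta||_1 subject to lo <= A @ beta <= hi, using the standard split
          beta = bp - bn with bp, bn >= 0."""
          n, p = A.shape
          c = np.ones(2 * p)                        # objective: sum(bp) + sum(bn)
          A_ub = np.vstack([np.hstack([A, -A]),     #  A beta <= hi
                            np.hstack([-A, A])])    # -A beta <= -lo
          b_ub = np.concatenate([hi, -lo])
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p))
          if not res.success:
              return None                           # intervals admit no linear model
          return res.x[:p] - res.x[p:]

      # Usage, e.g. for half-unit measurement bins around recorded values y:
      # beta = interval_regression(A, y - 0.5, y + 0.5)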

  18. Endoscopic Localization of Colon Cancer Is Frequently Inaccurate.

    Science.gov (United States)

    Nayor, Jennifer; Rotman, Stephen R; Chan, Walter W; Goldberg, Joel E; Saltzman, John R

    2017-08-01

    Colonoscopic location of a tumor can influence both the surgical procedure choice and overall treatment strategy. To determine the accuracy of colonoscopy in determining the location of colon cancer compared to surgical localization and to elucidate factors that predict discordant colon cancer localization. We conducted a retrospective cross-sectional study of colon cancers diagnosed on colonoscopy at two academic tertiary-care hospitals and two affiliated community hospitals from 2012 to 2014. Colon cancer location was obtained from the endoscopic and surgical pathology reports and characterized by colon segment. We collected data on patient demographics, tumor characteristics, endoscopic procedure characteristics, surgery planned, and surgery performed. Univariate analyses using Chi-squared test and multivariate analysis using forward stepwise logistic regression were performed to determine factors that predict discordant colon cancer localization. There were 110 colon cancer cases identified during the study period. Inaccurate endoscopic colon cancer localization was found in 29% (32/110) of cases. These included 14 cases (12.7%) that were discordant by more than one colonic segment and three cases where the presurgical planned procedure was significantly changed at the time of surgery. On univariate analyses, right-sided colon lesions were associated with increased inaccuracy (43.8 vs 24.4%, p = 0.04). On multivariate analysis, right-sided colon lesions remained independently associated with inaccuracy (OR 1.74, 95% CI 1.03-2.93, p = 0.04). Colon cancer location as determined by colonoscopy is often inaccurate, which can result in intraoperative changes to surgical management, particularly in the right colon.

  19. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
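
    For reference, the bounding factor of Ding and VanderWeele has a closed form: for an unmeasured confounder with maximal exposure-confounder relative risk RR_EU and maximal confounder-outcome relative risk RR_UD, the observed relative risk can be altered by at most RR_EU * RR_UD / (RR_EU + RR_UD - 1). A small sketch (the example numbers are assumed):

      def bounding_factor(rr_eu, rr_ud):
          """Ding-VanderWeele bounding factor: the maximum factor by which an
          unmeasured confounder with exposure-confounder relative risk rr_eu and
          confounder-outcome relative risk rr_ud can distort an observed relative risk."""
          return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

      observed_rr = 2.0
      print(observed_rr / bounding_factor(2.0, 2.0))  # worst-case adjusted RR = 1.5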

  20. Technology demonstration: geostatistical and hydrologic analysis of salt areas. Assessment of effectiveness of geologic isolation systems

    International Nuclear Information System (INIS)

    Doctor, P.G.; Oberlander, P.L.; Rice, W.A.; Devary, J.L.; Nelson, R.W.; Tucker, P.E.

    1982-09-01

    The Office of Nuclear Waste Isolation (ONWI) requested Pacific Northwest Laboratory (PNL) to: (1) use geostatistical analyses to evaluate the adequacy of hydrologic data from three salt regions, each of which contains a potential nuclear waste repository site; and (2) demonstrate a methodology that allows quantification of the value of additional data collection. The three regions examined are the Paradox Basin in Utah, the Permian Basin in Texas, and the Mississippi Study Area. Additional and new data became available to ONWI during and following these analyses; therefore, this report must be considered a methodology demonstration, which would apply as illustrated had the complete data sets been available. A combination of geostatistical and hydrologic analyses was used for this demonstration. Geostatistical analyses provided an optimal estimate of the potentiometric surface from the available data, a measure of the uncertainty of that estimate, and a means for selecting and evaluating the location of future data. The hydrologic analyses included the calculation of transmissivities, flow paths, travel times, and ground-water flow rates from hypothetical repository sites. Simulation techniques were used to evaluate the effect of optimally located future data on the potentiometric surface, flow lines, travel times, and flow rates. Data availability, quality, quantity, and conformance with model assumptions differed in each of the salt areas. Report highlights for the three locations are given.

  2. Forecasting Interest Rates Using Geostatistical Techniques

    Directory of Open Access Journals (Sweden)

    Giuseppe Arbia

    2015-11-01

    Full Text Available Geostatistical spatial models are widely used in many applied fields to forecast data observed on continuous three-dimensional surfaces. We propose to extend their use to finance and, in particular, to forecasting yield curves. We present the results of an empirical application where we apply the proposed method to forecast Euro Zero Rates (2003–2014) using the Ordinary Kriging method based on the anisotropic variogram. Furthermore, a comparison with other recent methods for forecasting yield curves is proposed. The results show that the model is characterized by good levels of prediction accuracy and is competitive with the other forecasting models considered.
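
    The Ordinary Kriging predictor used in this application can be written compactly. The sketch below is a generic isotropic illustration (the paper itself fits an anisotropic variogram to the maturity-date surface, and the example variogram here is an assumption): it solves the standard OK system, with a Lagrange multiplier enforcing unit-sum weights to account for the unknown constant mean.

      import numpy as np

      def ordinary_kriging(coords, values, target, gamma):
          """Ordinary kriging prediction at one target point.  `gamma` is a
          fitted variogram function of distance; the last row/column of the
          system is the Lagrange multiplier forcing the weights to sum to one."""
          n = len(values)
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          G = np.empty((n + 1, n + 1))
          G[:n, :n] = gamma(d)
          G[n, :], G[:, n], G[n, n] = 1.0, 1.0, 0.0
          rhs = np.append(gamma(np.linalg.norm(coords - target, axis=-1)), 1.0)
          w = np.linalg.solve(G, rhs)[:n]           # kriging weights
          return w @ values

      # Example with an assumed exponential variogram (range parameter 3.0):
      gamma = lambda h: 1.0 - np.exp(-h / 3.0)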

  3. Contextuality under weak assumptions

    International Nuclear Information System (INIS)

    Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D

    2017-01-01

    The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove

  4. Entanglement-fidelity relations for inaccurate ancilla-driven quantum computation

    International Nuclear Information System (INIS)

    Morimae, Tomoyuki; Kahn, Jonas

    2010-01-01

    It was shown by T. Morimae [Phys. Rev. A 81, 060307(R) (2010)] that the gate fidelity of an inaccurate one-way quantum computation is upper bounded by a decreasing function of the amount of entanglement in the register. This means that a strong entanglement causes the low gate fidelity in the one-way quantum computation with inaccurate measurements. In this paper, we derive similar entanglement-fidelity relations for the inaccurate ancilla-driven quantum computation. These relations again imply that a strong entanglement in the register causes the low gate fidelity in the ancilla-driven quantum computation if the measurements on the ancilla are inaccurate.

  5. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that cannot be explained by a given model, the model residuals, is often considered an estimate of the data uncertainty (which consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances...

  6. Geostatistical evaluation of travel time uncertainties

    International Nuclear Information System (INIS)

    Devary, J.L.

    1983-08-01

    Data on potentiometric head and hydraulic conductivity, gathered from the Wolfcamp Formation of the Permian System, have exhibited tremendous spatial variability as a result of heterogeneities in the media and the presence of petroleum and natural gas deposits. Geostatistical data analysis and error propagation techniques (kriging and conditional simulation) were applied to determine the effect of potentiometric head uncertainties on radionuclide travel paths and travel times through the Wolfcamp Formation. Block-average kriging was utilized to remove measurement error from potentiometric head data. The travel time calculations have been enhanced by the use of an inverse technique to determine the relative hydraulic conductivity along travel paths. In this way, the spatial variability of the hydraulic conductivity corresponding to streamline convergence and divergence may be included in the analysis. 22 references, 11 figures, 1 table

  7. Geostatistical coal quality control in longwall mining

    Energy Technology Data Exchange (ETDEWEB)

    Hindistan, Mehmet Ali; Tercan, Abdullah Erhan; Uenver, Bahtiyar [Hacettepe University, Dept. of Mining Engineering, Beytepe, 06800 Ankara (Turkey)

    2010-03-01

    The coal quality is an important aspect of coal mine planning. This paper presents a case study in which an underground coal mine is faced with severe penalty costs because it does not consider in situ coal quality control at all. To support short-term planning of coal production, the mean calorific values of the blocks inside the production panels are estimated by kriging. The estimated calorific values are compared with those obtained from actual production. The ratio of the calorific values of actual production to estimated values is found to be 0.73 on average, due to the adverse effect of dilution on the quality of run-of-mine coal. This study reveals the importance of geostatistical block modelling in short term mine planning. (author)

  8. Linking assumptions in amblyopia

    Science.gov (United States)

    LEVI, DENNIS M.

    2017-01-01

    Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956

  9. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

    Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests! Explaining Different Arrival Times: [Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones] Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics. Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source. Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect. Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent. Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect. If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better), we can provide constraints on these

  10. Optimizing Groundwater Monitoring Networks Using Integrated Statistical and Geostatistical Approaches

    Directory of Open Access Journals (Sweden)

    Jay Krishna Thakur

    2015-08-01

    Full Text Available The aim of this work is to investigate new approaches using methods based on statistics and geostatistics for spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested with the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geostatistical methods. Temporal optimization of the monitoring network was carried out using Sen's method (1968). For geostatistical network optimization, a geostatistical spatio-temporal algorithm was used to identify redundant wells in 2- and 2.5-D Quaternary and Tertiary aquifers. Influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on statistical and geostatistical methods for monitoring network optimization were analysed. The integrated approach shows 37% and 28% redundancies in the monitoring network in the Quaternary and Tertiary aquifers, respectively. The geostatistical method also recommends 41 and 22 new monitoring wells in the Quaternary and Tertiary aquifers, respectively. In temporal optimization, an overall optimized sampling interval was recommended in terms of lower quartile (238 days), median quartile (317 days) and upper quartile (401 days) in the research area of Bitterfeld/Wolfen. The demonstrated methods for improving groundwater monitoring networks can be used in real monitoring network optimization with due consideration given to influencing factors.
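
    Sen's method (1968), used above for the temporal optimization, is the median-of-pairwise-slopes trend estimator. A minimal sketch (an assumed illustration; the paper applies it to groundwater quality time series):

      import numpy as np

      def sens_slope(t, y):
          """Sen's (1968) slope estimator: the median of the slopes over all
          pairs of observations; robust to outliers and non-normal data."""
          t, y = np.asarray(t, float), np.asarray(y, float)
          slopes = [(y[j] - y[i]) / (t[j] - t[i])
                    for i in range(len(t)) for j in range(i + 1, len(t))
                    if t[j] != t[i]]
          return np.median(slopes)

      print(sens_slope([0, 1, 2, 3], [2.0, 2.6, 3.1, 3.9]))  # ~0.62 per time unit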

  11. The role of geostatistics in medical geology

    Science.gov (United States)

    Goovaerts, Pierre

    2014-05-01

    Since its development in the mining industry, geostatistics has emerged as the primary tool for spatial data analysis in various fields, ranging from earth and atmospheric sciences, to agriculture, soil science, remote sensing, and more recently environmental exposure assessment. In the last few years, these tools have been tailored to the field of medical geography or spatial epidemiology, which is concerned with the study of spatial patterns of disease incidence and mortality and the identification of potential 'causes' of disease, such as environmental exposure, diet and unhealthy behaviors, economic or socio-demographic factors. On the other hand, medical geology is an emerging interdisciplinary scientific field studying the relationship between natural geological factors and their effects on human and animal health. This paper provides an introduction to the field of medical geology with an overview of geostatistical methods available for the analysis of geological and health data. Key concepts are illustrated using the mapping of groundwater arsenic concentrations across eleven Michigan counties and the exploration of its relationship to the incidence of prostate cancer at the township level. Arsenic in drinking-water is a major problem and has received much attention because of the large human population exposed and the extremely high concentrations (e.g. 600 to 700 μg/L) recorded in many instances. Few studies have however assessed the risks associated with exposure to low levels of arsenic (say < 50 μg/L) most commonly found in drinking water in the United States. In the Michigan thumb region, arsenopyrite (up to 7% As by weight) has been identified in the bedrock of the Marshall Sandstone aquifer, one of the region's most productive aquifers. Epidemiologic studies have suggested a possible association between exposure to inorganic arsenic and prostate cancer mortality, including a study of populations residing in Utah. The information available for the

  12. Efficient geostatistical inversion of transient groundwater flow using preconditioned nonlinear conjugate gradients

    Science.gov (United States)

    Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf

    2017-04-01

    In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10^4-10^7) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
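
    The core numerical idea, preconditioned conjugate gradients with the prior autocovariance as preconditioner, can be sketched generically. The block below is a textbook linear PCG loop under assumed function handles, not the authors' nonlinear implementation; the point it illustrates is that the preconditioner is applied by multiplying with Q, so the inverse of Q never has to be formed.

      import numpy as np

      def pcg(apply_A, b, apply_M, x0, tol=1e-8, maxiter=200):
          """Preconditioned conjugate gradients for A x = b.  `apply_M` applies
          the preconditioner directly; here that would be multiplication by the
          prior autocovariance matrix Q, avoiding any inversion of Q."""
          x = x0.copy()
          r = b - apply_A(x)
          z = apply_M(r)
          p = z.copy()
          rz = r @ z
          for _ in range(maxiter):
              Ap = apply_A(p)
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = apply_M(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # Tiny usage example with assumed stand-in matrices:
      n = 5
      A = np.diag(np.arange(1.0, n + 1))
      Q = np.eye(n)                      # stand-in for the prior autocovariance
      x = pcg(lambda v: A @ v, np.ones(n), lambda v: Q @ v, np.zeros(n))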

  13. Association between inaccurate estimation of body size and obesity in schoolchildren.

    Science.gov (United States)

    Costa, Larissa da Cunha Feio; Silva, Diego Augusto Santos; Almeida, Sebastião de Sousa; de Vasconcelos, Francisco de Assis Guedes

    2015-01-01

    To investigate the prevalence of inaccurate estimation of own body size among Brazilian schoolchildren of both sexes aged 7-10 years, and to test whether overweight/obesity; excess body fat and central obesity are associated with inaccuracy. Accuracy of body size estimation was assessed using the Figure Rating Scale for Brazilian Children. Multinomial logistic regression was used to analyze associations. The overall prevalence of inaccurate body size estimation was 76%, with 34% of the children underestimating their body size and 42% overestimating their body size. Obesity measured by body mass index was associated with underestimation of body size in both sexes, while central obesity was only associated with overestimation of body size among girls. The results of this study suggest there is a high prevalence of inaccurate body size estimation and that inaccurate estimation is associated with obesity. Accurate estimation of own body size is important among obese schoolchildren because it may be the first step towards adopting healthy lifestyle behaviors.

  14. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    Science.gov (United States)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.

    2018-05-01

    Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows the incorporation of information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
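
    One way to realize the proposed eigendecomposition approach is sketched below (a hedged illustration; the truncation level and jitter are my assumptions, not the authors' choices): build a square-root factor W of the inverse covariance, so that applying W to a model vector penalizes deviations according to the a priori geostatistical correlation model.

      import numpy as np

      def geostat_operator(C, n_modes=None, jitter=1e-10):
          """Regularization operator W from the eigendecomposition C = V L V^T,
          with W = L^{-1/2} V^T, so that ||W m||^2 = m^T C^{-1} m.  Truncating
          to the leading eigenpairs keeps the operator tractable on large meshes."""
          vals, vecs = np.linalg.eigh(C + jitter * np.eye(len(C)))
          idx = np.argsort(vals)[::-1]              # largest eigenvalues first
          vals, vecs = vals[idx], vecs[:, idx]
          if n_modes is not None:
              vals, vecs = vals[:n_modes], vecs[:, :n_modes]
          return (vecs / np.sqrt(vals)).T           # rows are v_i^T / sqrt(lambda_i)

      # Example: covariance from an assumed 1-D exponential correlation model
      xs = np.arange(30.0)
      C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 5.0)
      W = geostat_operator(C, n_modes=10)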

  15. Teaching the Pursuit of Assumptions

    Science.gov (United States)

    Gardner, Peter; Johnson, Stephen

    2015-01-01

    Within the school of thought known as Critical Thinking, identifying or finding missing assumptions is viewed as one of the principal thinking skills. Within the new subject in schools and colleges, usually called Critical Thinking, the skill of finding missing assumptions is similarly prominent, as it is in that subject's public examinations. In…

  16. Geostatistical inference using crosshole ground-penetrating radar

    DEFF Research Database (Denmark)

    Looms, Majken C; Hansen, Thomas Mejer; Cordua, Knud Skou

    2010-01-01

    …the moisture content will reflect the variation of the physical properties of the subsurface, which determine the flow patterns in the unsaturated zone. Deterministic least-squares inversion of crosshole ground-penetrating radar (GPR) traveltimes results in smooth, minimum-variance estimates of the subsurface radar wave velocity structure, which may diminish the utility of these images for geostatistical inference. We have used a linearized stochastic inversion technique to infer the geostatistical properties of the subsurface radar wave velocity distribution using crosshole GPR traveltimes directly. Expanding...

  17. GEOSTATISTICAL SOLUTIONS FOR DOWNSCALING REMOTELY SENSED LAND SURFACE TEMPERATURE

    Directory of Open Access Journals (Sweden)

    Q. Wang

    2017-09-01

    Full Text Available Remotely sensed land surface temperature (LST) downscaling is an important issue in remote sensing. Geostatistical methods have shown their applicability in downscaling multi/hyperspectral images. In this paper, four geostatistical solutions, including regression kriging (RK), downscaling cokriging (DSCK), kriging with external drift (KED) and area-to-point regression kriging (ATPRK), are applied for downscaling remotely sensed LST. Their differences are analyzed theoretically and the performances are compared experimentally using a Landsat 7 ETM+ dataset. They are also compared to the classical TsHARP method.
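
    The structure shared by these solutions is a regression on a fine-resolution covariate plus kriging of the coarse-scale residuals. The sketch below illustrates that shared idea in its plainest regression-kriging form (an assumed illustration, not the paper's implementation of RK, DSCK, KED or ATPRK; the variogram and covariate are placeholders).

      import numpy as np

      def rk_downscale(coarse_xy, coarse_lst, coarse_cov, fine_xy, fine_cov,
                       gamma, sill):
          """Regression-kriging downscaling: regress coarse LST on a covariate
          available at both resolutions, apply the trend at the fine scale, then
          simple-krige the coarse residuals (mean 0) onto the fine pixels."""
          # 1. Trend: coarse LST ~ covariate (e.g. NDVI, as in TsHARP-like methods)
          X = np.column_stack([np.ones(len(coarse_cov)), coarse_cov])
          beta, *_ = np.linalg.lstsq(X, coarse_lst, rcond=None)
          resid = coarse_lst - X @ beta
          # 2. Residual kriging, with covariance C(h) = sill - gamma(h)
          d = np.linalg.norm(coarse_xy[:, None, :] - coarse_xy[None, :, :], axis=-1)
          w = np.linalg.solve(sill - gamma(d), resid)          # dual-kriging weights
          d0 = np.linalg.norm(fine_xy[:, None, :] - coarse_xy[None, :, :], axis=-1)
          resid_fine = (sill - gamma(d0)) @ w
          # 3. Recombine trend and residuals at the fine scale
          return beta[0] + beta[1] * np.asarray(fine_cov) + resid_fine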

  18. Do unreal assumptions pervert behaviour?

    DEFF Research Database (Denmark)

    Petersen, Verner C.

    After conducting a series of experiments involving economics students, Miller concludes: "The experience of taking a course in microeconomics actually altered students' conceptions of the appropriateness of acting in a self-interested manner, not merely their definition of self-interest." Being… become taken for granted and tacitly included into theories and models of management, guiding business and management to behave in a fashion that apparently makes these assumptions become "true", thus in fact making theories and models become self-fulfilling prophecies. The paper elucidates some… of the basic assumptions underlying the theories found in economics: assumptions relating to the primacy of self-interest, to resourceful, evaluative, maximising models of man, to incentive systems and to agency theory. The major part of the paper then discusses how these assumptions and theories may pervert...

  19. geostatistical prediction of future volcanic eruption and risk ...

    African Journals Online (AJOL)

    …within the limits of any reasonable doubt that the interval for the next major eruption that will emit lava with a volume of … [Fig. 11: Log empirical survivor plot (percentage survivor) for the Mount Cameroon volcano.]

  20. Geostatistics for radiological characterization: overview and application cases

    International Nuclear Information System (INIS)

    Desnoyers, Yvon

    2016-01-01

    The objective of radiological characterization is to find a suitable balance between gathering data (constrained by cost, deadlines, accessibility or radiation) and managing the issues (waste volumes, levels of activity or exposure). It is necessary to have enough information to have confidence in the results without multiplying useless data. Geostatistical processing of data considers all available pieces of information: historical data, non-destructive measurements and laboratory analyses of samples. The spatial structure modelling is then used to produce maps and to estimate the extent of radioactive contamination (surface and depth). Quantifications of local and global uncertainties are powerful decision-making tools for better management of remediation projects at contaminated sites, and for decontamination and dismantling projects at nuclear facilities. They can be used to identify hot spots, estimate contamination of surfaces and volumes, classify radioactive waste according to thresholds, estimate source terms, and so on. The spatial structure of radioactive contamination makes the optimization of sampling (number and position of data points) particularly important. Geostatistical methodology can help determine the initial mesh size and reduce estimation uncertainties. Several cases are presented to illustrate why and how geostatistics can be applied to a range of radiological characterizations, where investigated units can represent very small areas (a few m² or a few m³) or very large sites (at a country scale). The focus is then put on experience gained over years in the use of geostatistics and sampling optimization. (author)

  1. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400-500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden
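
    The Bayesian updating mechanism described above (prior estimates revised to posterior probabilities as new information arrives) reduces to one line per update. A toy sketch (the class labels and probability values are invented for illustration, not values from the Aespoe study):

      import numpy as np

      def bayes_update(prior, likelihood):
          """Update class probabilities with one new piece of evidence:
          posterior_k is proportional to prior_k * P(evidence | class k)."""
          post = np.asarray(prior, float) * np.asarray(likelihood, float)
          return post / post.sum()

      prior = [0.5, 0.3, 0.2]            # e.g. three lithology classes (assumed)
      lik   = [0.1, 0.6, 0.3]            # P(observed log | class), assumed values
      print(bayes_update(prior, lik))    # posterior after the new observation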

  2. Reducing uncertainty in geostatistical description with well testing pressure data

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A.C.; He, Nanqun [Univ. of Tulsa, OK (United States); Oliver, D.S. [Chevron Petroleum Technology Company, La Habra, CA (United States)

    1997-08-01

    Geostatistics has proven to be an effective tool for generating realizations of reservoir properties conditioned to static data, e.g., core and log data and geologic knowledge. Due to the lack of closely spaced data in the lateral directions, there will be significant variability in reservoir descriptions generated by geostatistical simulation, i.e., significant uncertainty in the reservoir descriptions. In past work, we have presented procedures based on inverse problem theory for generating reservoir descriptions (rock property fields) conditioned to pressure data and geostatistical information represented as prior means for log-permeability and porosity and variograms. Although we have shown that the incorporation of pressure data reduces the uncertainty below the level contained in the geostatistical model based only on static information (the prior model), our previous results did not explicitly account for uncertainties in the prior means and the parameters defining the variogram model. In this work, we investigate how pressure data can help detect errors in the prior means. If errors in the prior means are large and are not taken into account, realizations conditioned to pressure data represent incorrect samples of the a posteriori probability density function for the rock property fields, whereas, if the uncertainty in the prior mean is incorporated properly into the model, one obtains realistic realizations of the rock property fields.

  3. Study of the plywood panels properties using geostatistic

    Directory of Open Access Journals (Sweden)

    Cleverson Pinheiro

    2016-12-01

    Full Text Available Plywood panels have multiple applications in construction, in the furniture industry and in packaging, and there is a need to improve techniques for assessing the quality of these products. This paper proposes the use of geostatistics to evaluate the behavior of plywood panel properties. The physical properties (moisture content, density and water absorption) were analyzed over the full extent of plywood panels of Eucalyptus sp., bonded with a single-component polyurethane adhesive. For the analysis, three panels of five layers, with dimensions of 350 x 350 x 15.5 mm each, were employed. The tests were based on the standards EN 323-2000, EN 322-2000 and ABNT NBR 9486-2011. Statistical modeling was performed with the R software, using the methodology of geostatistics. The average results were compared with values reported in the literature. The average water absorption was 7% higher than in other studies, which used urea formaldehyde; the product can therefore be applied only to interiors, and its use is not advisable for floors. The average moisture content and density were within the average values found in the literature. The behavior of the properties analyzed using the geostatistical model was not homogeneous, with large variations. Geostatistics was considered an appropriate tool for the study of the variability of plywood panel properties and can be applied for better quality control.

  4. Constrained optimisation of spatial sampling : a geostatistical approach

    NARCIS (Netherlands)

    Groenigen, van J.W.

    1999-01-01

    Aims

    This thesis aims at the development of optimal sampling strategies for geostatistical studies. Special emphasis is on the optimal use of ancillary data, such as co-related imagery, preliminary observations and historic knowledge. Although the object of all studies

  5. Estimating Rainfall in Rodrigues by Geostatistics: (A) Theory | Proag ...

    African Journals Online (AJOL)

    This paper introduces the geostatistical method. Originally devised to treat problems that arise when conventional statistical theory is used in estimating changes in ore grade within a mine, it is, however, an abstract theory of statistical behaviour that is applicable to many circumstances in different areas of geology and other ...
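
    Since this record introduces the method itself, a compact illustration may help: the ordinary kriging system below is written directly from the standard equations, with a spherical variogram; the gauge coordinates, rainfall values, and variogram parameters are all invented.

      # Ordinary kriging at one target location (illustrative sketch).
      import numpy as np

      def spherical(h, sill=1.0, rng=30.0):
          # spherical variogram model, gamma(h)
          h = np.minimum(h, rng)
          return sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

      pts = np.array([[0.0, 0.0], [20.0, 5.0], [10.0, 25.0]])  # gauges (km)
      z = np.array([12.0, 18.0, 15.0])                         # rainfall (mm)
      x0 = np.array([12.0, 10.0])                              # target point

      d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
      n = len(z)
      A = np.empty((n + 1, n + 1))
      A[:n, :n] = spherical(d)                   # gamma(x_i, x_j)
      A[n, :], A[:, n], A[n, n] = 1.0, 1.0, 0.0  # unbiasedness constraint
      b = np.append(spherical(np.linalg.norm(pts - x0, axis=1)), 1.0)

      w = np.linalg.solve(A, b)   # weights plus Lagrange multiplier
      print(w[:n] @ z, w @ b)     # kriging estimate and kriging variance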

  6. The Axioms and Special Assumptions

    Science.gov (United States)

    Borchers, Hans-Jürgen; Sen, Rathindra Nath

    For ease of reference, the axioms, the nontriviality assumptions (3.1.10), the definition of a D-set and the special assumptions of Chaps. 5 and 6 are collected together in the following. The verbal explanations that follow the formal definitions a)-f) of (4.2.1) have been omitted. The entries below are numbered as they are in the text. Recall that βC is the subset of the cone C which, in a D-set, is seen to coincide with the boundary of C after the topology is introduced (Sects. 3.2 and 3.2.1).

  7. Challenged assumptions and invisible effects

    DEFF Research Database (Denmark)

    Wimmelmann, Camilla Lawaetz; Vitus, Kathrine; Jervelund, Signe Smith

    2017-01-01

    of two complete intervention courses and an analysis of the official intervention documents. Findings – This case study exemplifies how the basic normative assumptions behind an immigrant-oriented intervention and the intrinsic power relations therein may be challenged and negotiated by the participants...

  8. Portfolios: Assumptions, Tensions, and Possibilities.

    Science.gov (United States)

    Tierney, Robert J.; Clark, Caroline; Fenner, Linda; Herter, Roberta J.; Simpson, Carolyn Staunton; Wiser, Bert

    1998-01-01

    Presents a discussion between two educators of the history, assumptions, tensions, and possibilities surrounding the use of portfolios in multiple classroom contexts. Includes illustrative commentaries that offer alternative perspectives from a range of other educators with differing backgrounds and interests in portfolios. (RS)

  9. Sampling Assumptions in Inductive Generalization

    Science.gov (United States)

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…

  10. Accurate and inaccurate HIV transmission beliefs, stigmatizing and HIV protection motivation in northern Thailand

    NARCIS (Netherlands)

    Boer, H.; Emons, P.A.A.

    2004-01-01

    We assessed the relation between accurate beliefs about HIV transmission and inaccurate beliefs about HIV transmission and emotional reactions to people with AIDS (PWA) and AIDS risk groups, stigmatizing attitudes and motivation to protect from HIV. In Chiang Rai, northern Thailand, 219 respondents

  11. A non-convex variational approach to photometric stereo under inaccurate lighting

    DEFF Research Database (Denmark)

    Quéau, Yvain; Wu, Tao; Lauze, Francois Bernard

    2017-01-01

    This paper tackles the photometric stereo problem in the presence of inaccurate lighting, obtained either by calibration or by an uncalibrated photometric stereo method. Based on a precise modeling of noise and outliers, a robust variational approach is introduced. It explicitly accounts for self...

  12. Leg mass characteristics of accurate and inaccurate kickers--an Australian football perspective.

    Science.gov (United States)

    Hart, Nicolas H; Nimphius, Sophia; Cochrane, Jodie L; Newton, Robert U

    2013-01-01

    Athletic profiling provides valuable information to sport scientists, assisting in the optimal design of strength and conditioning programmes. Understanding the influence these physical characteristics may have on the generation of kicking accuracy is advantageous. The aim of this study was to profile and compare the lower limb mass characteristics of accurate and inaccurate Australian footballers. Thirty-one players were recruited from the Western Australian Football League to perform ten drop punt kicks over 20 metres to a player target. Players were separated into accurate (n = 15) and inaccurate (n = 16) groups, with leg mass characteristics assessed using whole body dual energy x-ray absorptiometry (DXA) scans. Accurate kickers demonstrated significantly greater relative lean mass (P ≤ 0.004) and significantly lower relative fat mass (P ≤ 0.024) across all segments of the kicking and support limbs, while also exhibiting significantly higher intra-limb lean-to-fat mass ratios for all segments across both limbs (P ≤ 0.009). Inaccurate kickers also produced significantly larger asymmetries between limbs than accurate kickers (P ≤ 0.028), showing considerably lower lean mass in their support leg. These results illustrate a difference in leg mass characteristics between accurate and inaccurate kickers, highlighting the potential influence these may have on technical proficiency of the drop punt.

  13. Inaccurate Citations in Biomedical Journalism: Effect on the Impact Factor of the American Journal of Roentgenology.

    Science.gov (United States)

    Karabulut, Nevzat

    2017-03-01

    The aim of this study is to investigate the frequency of incorrect citations and its effects on the impact factor of a specific biomedical journal: the American Journal of Roentgenology. The Cited Reference Search function of Thomson Reuters' Web of Science database (formerly the Institute for Scientific Information's Web of Knowledge database) was used to identify erroneous citations. This was done by entering the journal name into the Cited Work field and entering "2011-2012" into the Cited Year(s) field. The errors in any part of the inaccurately cited references (e.g., author names, title, year, volume, issue, and page numbers) were recorded, and the types of errors (i.e., absent, deficient, or mistyped) were analyzed. Erroneous citations were corrected using the Suggest a Correction function of the Web of Science database. The effect of inaccurate citations on the impact factor of the AJR was calculated. Overall, 183 of 1055 citable articles published in 2011-2012 were inaccurately cited 423 times (mean [± SD], 2.31 ± 4.67 times; range, 1-44 times). Of these 183 articles, 110 (60.1%) were web-only articles and 44 (24.0%) were print articles. The most commonly identified errors were page number errors (44.8%) and misspelling of an author's name (20.2%). Incorrect citations adversely affected the impact factor of the AJR by 0.065 in 2012 and by 0.123 in 2013. Inaccurate citations are not infrequent in biomedical journals, yet they can be detected and corrected using the Web of Science database. Although the accuracy of references is primarily the responsibility of authors, the journal editorial office should also define a periodic inaccurate citation check task and correct erroneous citations to reclaim unnecessarily lost credit.

  14. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia

    2014-01-01

    We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear, multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from geostatistics. The geostatistical algorithm learns the multiple-point statistics from prototype models, then generates proposal models which are tested by a Metropolis sampler. The solution of the inverse problem is finally represented by a collection of reservoir models in terms of facies and porosity, which ...
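
    The Metropolis step at the heart of such a sampler can be sketched in a few lines; the stand-in below uses a single scalar porosity parameter with a Gaussian likelihood, whereas the study's proposals come from a multiple-point geostatistical algorithm and a full rock-physics/seismic forward model.

      # Minimal Metropolis sampler (illustrative stand-in, not the paper's code).
      import math, random

      def log_likelihood(porosity, observed=0.22, noise=0.02):
          return -0.5 * ((porosity - observed) / noise) ** 2

      current = 0.30
      samples = []
      for _ in range(10000):
          proposal = current + random.gauss(0.0, 0.01)  # assumed proposal step
          if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(current):
              current = proposal                        # accept
          samples.append(current)

      burned = samples[2000:]
      print(sum(burned) / len(burned))                  # posterior mean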

  15. 4th European Conference on Geostatistics for Environmental Applications

    CERN Document Server

    Carrera, Jesus; Gómez-Hernández, José

    2004-01-01

    The fourth edition of the European Conference on Geostatistics for Environmental Applications (geoENV IV) took place in Barcelona, November 27-29, 2002. As a proof that there is an increasing interest in environmental issues in the geostatistical community, the conference attracted over 100 participants, mostly Europeans (up to 10 European countries were represented), but also from other countries in the world. Only 46 contributions, selected out of around 100 submitted papers, were invited to be presented orally during the conference. Additionally 30 authors were invited to present their work in poster format during a special session. All oral and poster contributors were invited to submit their work to be considered for publication in this Kluwer series. All papers underwent a reviewing process, which consisted on two reviewers for oral presentations and one reviewer for posters. The book opens with one keynote paper by Philippe Naveau. It is followed by 40 papers that correspond to those presented orally d...

  16. 3D vadose zone modeling using geostatistical inferences

    International Nuclear Information System (INIS)

    Knutson, C.F.; Lee, C.B.

    1991-01-01

    In developing a 3D model of the 600 ft thick interbedded basalt and sediment complex that constitutes the vadose zone at the Radioactive Waste Management Complex (RWMC) at the Idaho National Engineering Laboratory (INEL), geostatistical data were captured for 12-15 parameters (e.g., permeability, porosity and saturation, as well as flow height, flow width, flow internal zonation, etc.). This two-scale data set was generated from studies of subsurface core and geophysical log suites at RWMC and from surface outcrop exposures located at the Box Canyon of the Big Lost River and at the Hell's Half Acre lava field, all located in the general RWMC area. Based on these currently available data, it is possible to build a 3D stochastic model that utilizes: cumulative distribution functions obtained from the geostatistical data; backstripping and rebuilding of stratigraphic units; and an "expert" system that incorporates rules based on expert geologic analysis and experimentally derived geostatistics, providing (a) a structural and isopach map of each layer, (b) a realization of the flow geometry of each basalt flow unit, and (c) a realization of the internal flow parameters (e.g., permeability, porosity, and saturation) for each flow. 10 refs., 4 figs., 1 tab
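
    One building block named above, drawing realizations from cumulative distribution functions fitted to the captured data, can be sketched by inverse-transform sampling; the porosity values below are invented, not INEL data.

      # Inverse-transform sampling from an empirical CDF (illustrative).
      import numpy as np

      rng = np.random.default_rng(5)
      porosity_obs = np.array([0.08, 0.11, 0.12, 0.15, 0.18, 0.22, 0.25])

      sorted_obs = np.sort(porosity_obs)
      probs = (np.arange(1, len(sorted_obs) + 1) - 0.5) / len(sorted_obs)

      u = rng.uniform(size=1000)                     # uniform deviates
      realizations = np.interp(u, probs, sorted_obs) # interpolated inverse CDF
      print(realizations.mean(), np.percentile(realizations, [10, 90]))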

  17. 2nd European Conference on Geostatistics for Environmental Applications

    CERN Document Server

    Soares, Amílcar; Froidevaux, Roland

    1999-01-01

    The Second European Conference on Geostatistics for Environmental Applications took place in Valencia, November 18-20, 1998. Two years have passed since the first meeting in Lisbon and the geostatistical community has kept active in the environmental field. In these days of congress inflation, we feel that continuity can only be achieved by ensuring quality in the papers. For this reason, all papers in the book have been reviewed by, at least, two referees, and care has been taken to ensure that the reviewer comments have been incorporated in the final version of the manuscript. We are thankful to the members of the scientific committee for their timely review of the scripts. All in all, there are three keynote papers from experts in soil science, climatology and ecology and 43 contributed papers providing a good indication of the status of geostatistics as applied in the environmental field all over the world. We feel now confident that the geoENV conference series, seeded around a coffee table almost six...

  18. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption.

  19. Geostatistical methodology for waste optimization of contaminated premises - 59344

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    The presented methodological study illustrates a geostatistical approach suitable for radiological evaluation in nuclear premises. The waste characterization is mainly focused on floor concrete surfaces. By modeling the spatial continuity of activities, geostatistics provides sound methods to estimate and map radiological activities, together with their uncertainty. The multivariate approach allows the integration of numerous surface radiation measurements in order to improve the estimation of activity levels from concrete samples. This way, a sequential and iterative investigation strategy proves to be relevant to fulfill the different evaluation objectives. Waste characterization is performed on risk maps rather than on direct interpolation maps (due to the bias introduced by selection on kriging results). The use of several estimation supports (point, 1 m2, room) allows a relevant radiological waste categorization thanks to cost-benefit analysis according to the risk of exceeding a given activity threshold. Global results, mainly total activity, are similarly quantified to guide waste management early in the dismantling and decommissioning project. This paper recalls the geostatistical principles and demonstrates how this methodology provides innovative tools for the radiological evaluation of contaminated premises. The relevance of this approach relies on the presence of a spatial continuity for radiological contamination. In this case, geostatistics provides reliable activity estimates, uncertainty quantification and risk analysis, which are essential decision-making tools for decommissioning and dismantling projects of nuclear installations. Waste characterization is then performed taking all relevant information into account: historical knowledge, surface measurements and samples. Thanks to the multivariate processing, the different investigation stages can be rationalized as regards quantity and positioning. Waste characterization is finally
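
    The risk-map logic, categorizing a support by the probability of exceeding an activity threshold computed over many realizations rather than by a single kriged estimate, can be sketched as follows (all values assumed).

      # Exceedance-probability waste categorization (illustrative sketch).
      import numpy as np

      rng = np.random.default_rng(0)
      # e.g., 500 simulated mean activities (Bq/g) over one estimation support
      realizations = rng.lognormal(mean=-0.5, sigma=0.6, size=500)

      threshold = 1.0                              # activity threshold (assumed)
      p_exceed = np.mean(realizations > threshold)

      risk_tolerance = 0.05                        # decision rule (assumed)
      category = "radioactive waste" if p_exceed > risk_tolerance else "conventional waste"
      print(p_exceed, category)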

  20. Geospatial Interpolation and Mapping of Tropospheric Ozone Pollution Using Geostatistics

    Directory of Open Access Journals (Sweden)

    Swatantra R. Kethireddy

    2014-01-01

    Full Text Available Tropospheric ozone (O3) pollution is a major problem worldwide, including in the United States of America (USA), particularly during the summer months. Ozone oxidative capacity and its impact on human health have attracted the attention of the scientific community. In the USA, sparse spatial observations for O3 may not provide a reliable source of data over a geo-environmental region. Geostatistical Analyst in ArcGIS has the capability to interpolate values in unmonitored geo-spaces of interest. In this study of eastern Texas O3 pollution, hourly episodes for spring and summer 2012 were selectively identified. To visualize the O3 distribution, geostatistical techniques were employed in ArcMap. Using ordinary Kriging, geostatistical layers of O3 for all the studied hours were predicted and mapped at a spatial resolution of 1 kilometer. A decent level of prediction accuracy was achieved and was confirmed from cross-validation results. The mean prediction error was close to 0, the root mean-standardized-prediction error was close to 1, and the root mean square and average standard errors were small. O3 pollution map data can be further used in analysis and modeling studies. Kriging results and O3 decadal trends indicate that the populace in Houston-Sugar Land-Baytown, Dallas-Fort Worth-Arlington, Beaumont-Port Arthur, San Antonio, and Longview are repeatedly exposed to high levels of O3-related pollution, and are prone to the corresponding respiratory and cardiovascular health effects. Optimization of the monitoring network proves to be an added advantage for the accurate prediction of exposure levels.
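
    The quoted diagnostics are standard leave-one-out cross-validation statistics; a sketch of their computation is given below, with invented arrays standing in for the study's observations, cross-validation estimates, and kriging standard errors.

      # Kriging cross-validation diagnostics (illustrative values).
      import numpy as np

      z_true = np.array([62.0, 71.0, 55.0, 80.0, 66.0])  # observed O3 (ppb)
      z_cv = np.array([60.5, 73.0, 57.0, 77.5, 67.0])    # leave-one-out estimates
      s_cv = np.array([3.0, 3.5, 2.8, 4.0, 3.2])         # kriging std. errors

      err = z_cv - z_true
      print(err.mean())                          # mean error, should be near 0
      print(np.sqrt((err ** 2).mean()))          # RMSE, should be small
      print(np.sqrt(((err / s_cv) ** 2).mean())) # standardized RMSE, near 1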

  1. Risk Assessment of Sediment Pollution Using Geostatistical Simulations

    Science.gov (United States)

    Golay, J.; Kanevski, M.

    2012-04-01

    Environmental monitoring networks (EMN) discretely measure the intensities of continuous phenomena (e.g. pollution, temperature, etc.). Spatial prediction models, like kriging, are then used for modeling. However, they give rise to smooth representations of phenomena, which leads to overestimations or underestimations of extreme values. Moreover, they do not reproduce the spatial variability of the original data and the corresponding uncertainties. When dealing with risk assessment, this is unacceptable, since extreme values must be retrieved and probabilities of exceeding given thresholds must be computed [Kanevski et al., 2009]. In order to overcome these obstacles, geostatistics provides another approach: conditional stochastic simulations. Here, the basic idea is to generate multiple estimates of variable values (e.g. pollution concentration) at every location of interest, which are calculated as stochastic realizations of an unknown random function (see, for example, [Kanevski, 2008], where both theoretical concepts and real data case studies are presented in detail). Many algorithms implement this approach. The most widely used in spatial modeling are sequential Gaussian simulations/cosimulations, sequential indicator simulations/cosimulations and direct simulations. In the present study, several algorithms of geostatistical conditional simulations were applied on real data collected from Lake Geneva. The main objectives were to compare their effectiveness in reproducing global statistics (histograms, variograms) and the way they characterize the variability and uncertainty of the contamination patterns. The dataset is composed of 200 measurements of the contamination of the lake sediments by heavy metals (i.e. Cadmium, Mercury, Zinc, Copper, Titanium and Chromium). The results obtained show some differences highlighting that risk assessment can be influenced by the algorithm it relies on. Moreover, hybrid models based on machine learning algorithms and
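
    The contrast between smooth predictions and simulations that reproduce variability can be seen in a few lines. Below, unconditional Gaussian realizations are generated by Cholesky factorization of an assumed exponential covariance; sequential Gaussian simulation achieves the same distributional goal more scalably, so this is only a sketch.

      # Unconditional Gaussian simulation on a 1D transect (illustrative).
      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 100.0, 200)                     # locations (m)
      C = np.exp(-np.abs(x[:, None] - x[None, :]) / 15.0)  # exponential covariance
      L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))   # jitter for stability

      realizations = L @ rng.standard_normal((len(x), 100))  # 100 realizations
      print(realizations.var(axis=1).mean())  # close to the model variance of 1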

  2. Geostatistical ore reserve estimation for a roll-front type uranium deposit (practitioner's guide)

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1977-01-01

    This report comprises two parts. Part I contains illustrative examples of each phase of a geostatistical study using a roll-front type uranium deposit. Part II contains five computer programs and comprehensive users' manuals for these programs which are necessary to make a practical geostatistical study

  3. Mercury emissions from coal combustion in Silesia, analysis using geostatistics

    Science.gov (United States)

    Zasina, Damian; Zawadzki, Jaroslaw

    2015-04-01

    Data provided by the UNEP report on mercury [1] show that solid fuel combustion is a significant source of mercury emission to air. Silesia, located in southwestern Poland, is notably affected by mercury emission, being one of the most industrialized Polish regions: a place of coal mining, metal production, stone mining, mineral quarrying and chemical industry. Moreover, Silesia is a region with high population density. People are exposed to severe risk from mercury emitted by both industrial and domestic sources (i.e., small household furnaces). Small sources contribute significantly to the total emission of mercury. Official statistical analyses, including those prepared for international purposes [2], did not provide data on the spatial distribution of the mercury emitted to air, although a number of analyses of the Polish public power and energy sector have been prepared so far [3; 4]. The distribution of locations exposed to mercury emission from small domestic sources is an interesting matter, merging information from various sources: statistical, economic and environmental. This paper presents a geostatistical approach to the distribution of mercury emission from coal combustion. The analysed data are organized at two independent levels: individual, a bottom-up approach derived from the national emission reporting system [5; 6], and top-down, regional data calculated from official statistics [7]. The analysis presented includes a comparison of the spatial distributions of mercury emission using data derived from the sources mentioned above. The investigation covers three voivodeships of Poland: Lower Silesian, Opole and Silesian, using selected geostatistical methodologies including ordinary kriging [8]. References [1] UNEP. Global Mercury Assessment 2013: Sources, Emissions, Releases and Environmental Transport. UNEP Chemicals Branch, Geneva, Switzerland, 2013. [2] NCEM. Poland's Informative Inventory Report 2014. NCEM at the IEP-NRI, 2014. http

  4. Data analysis for radiological characterisation: Geostatistical and statistical complementarity

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    Radiological characterisation may cover a large range of evaluation objectives during a decommissioning and dismantling (D and D) project: removal of doubt, delineation of contaminated materials, monitoring of the decontamination work and final survey. At each stage, collecting relevant data to be able to draw the conclusions needed is quite a big challenge. In particular two radiological characterisation stages require an advanced sampling process and data analysis, namely the initial categorization and optimisation of the materials to be removed and the final survey to demonstrate compliance with clearance levels. On the one hand the latter is widely used and well developed in national guides and norms, using random sampling designs and statistical data analysis. On the other hand a more complex evaluation methodology has to be implemented for the initial radiological characterisation, both for sampling design and for data analysis. The geostatistical framework is an efficient way to satisfy the radiological characterisation requirements providing a sound decision-making approach for the decommissioning and dismantling of nuclear premises. The relevance of the geostatistical methodology relies on the presence of a spatial continuity for radiological contamination. Thus geo-statistics provides reliable methods for activity estimation, uncertainty quantification and risk analysis, leading to a sound classification of radiological waste (surfaces and volumes). This way, the radiological characterization of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical and qualitative information. Then, a systematic (exhaustive or not) surface survey of the contamination is implemented on a regular grid. Finally, in order to assess activity levels and contamination depths, destructive samples are collected at several locations within the premises (based on the surface survey results) and analysed. Combined with

  5. Inaccurate preoperative imaging assessment on biliary anatomy not increases biliary complications after living donor liver transplantation

    International Nuclear Information System (INIS)

    Xu Xiao; Wei Xuyong; Ling Qi; Wang Kai; Bao Haiwei; Xie Haiyang; Zhou Lin; Zheng Shusen

    2012-01-01

    Background and aims: Accurate assessment of the graft bile duct is important to plan the surgical procedure. Magnetic resonance cholangiopancreatography (MRCP) has become an important diagnostic procedure in the evaluation of pancreaticobiliary ductal abnormalities and has been reported as highly accurate. We aim to estimate the efficacy of preoperative MRCP in depicting biliary anatomy in living donor liver transplantation (LDLT), and to determine whether inaccurate preoperative imaging assessment would increase the biliary complications after LDLT. Methods: The data of 118 LDLT cases were recorded. Information from preoperative MRCP was assessed using intraoperative cholangiography (IOC) as the gold standard. The possible risk factors for recipient biliary complications were analyzed. Results: Of 118 donors, 84 had normal anatomy (type A) and 34 had anatomic variants (19 cases of type B, 9 cases of type C, 1 case of type E, 2 cases of type F and 3 cases of type I) confirmed by IOC. MRCP correctly predicted all 84 normal cases and 17 of 34 variant cases, and showed an accuracy of 85.6% (101/118). The incidence of biliary complications was comparable between cases with accurate and inaccurate classification of the biliary tree from MRCP, and between cases with normal and variant anatomy of the bile duct. However, cases with a graft duct opening ≤5 mm showed a significantly higher incidence of total biliary complications (21.1% vs. 6.6%, P = 0.028) and biliary stricture (10.5% vs. 1.6%, P = 0.041) compared with cases with a large duct opening >5 mm. Conclusion: MRCP could correctly predict normal but not variant biliary anatomy. Inaccurate assessment of biliary anatomy from MRCP does not increase the rate of biliary complications, while a small-sized graft duct may cause an increase in biliary complications, particularly biliary stricture, after LDLT.

  6. The Impact of Inaccurate Internet Health Information in a Secondary School Learning Environment

    Science.gov (United States)

    Edwards, Christine; Richards-Kortum, Rebecca

    2008-01-01

    Background Patients in the United States commonly use the Internet to acquire health information. While a significant amount of health-related information is available on the Internet, the accuracy of this information is highly variable. Objectives The objective of the study was to determine how effectively students can assess the accuracy of Internet-based material when gathering information on a controversial medical topic using simple keyword searches. Methods A group of 34 students from the science magnet high school in Houston, Texas searched for the terms “vaccine safety” and “vaccine danger” using Google and then answered questions regarding the accuracy of the health information on the returned sites. The students were also asked to describe the lessons they learned in the exercise and to answer questions regarding the strength of evidence for seven statements regarding vaccinations. Because of the surprising revelation that the majority of students left the exercise with inaccurate information concerning the safety and efficacy of vaccines, these same students participated in a follow-up study in which a fact-based vaccine video was shown, after which the assessment of student knowledge was repeated. Results Of the 34 participants, 20 (59%) thought that the Internet sites were accurate on the whole, even though over half of the links (22 out of 40, 55%) that the students viewed were, in fact, inaccurate on the whole. A high percentage of the students left the first exercise with significant misconceptions about vaccines; 18 of the 34 participants (53%) reported inaccurate statements about vaccines in the lessons they learned. Of the 41 verifiable facts about vaccines that were reported by participants in their lessons-learned statement, 24 of those facts (59%) were incorrect. Following presentation of the film, the majority of students left the exercise with correct information about vaccines, based on their lessons-learned statement. In this case

  7. Near-Nash equilibrium strategies for LQ differential games with inaccurate state information

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available ε-Nash equilibrium or “near equilibrium” for a linear quadratic cost game is considered. Due to inaccurate state information, the standard solution for feedback Nash equilibrium cannot be applied. Instead, an estimation of the players' states is substituted into the optimal control strategies equation obtained for perfect state information. The magnitude of the ε in the ε-Nash equilibrium will depend on the quality of the estimation process. To illustrate this approach, a Luenberger-type observer is used in the numerical example to generate the players' state estimates in a two-player non-zero-sum LQ differential game.
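
    A minimal version of the observer-in-the-loop arrangement is sketched below: the control law uses the state estimate rather than the true state, and the estimation error decays at a rate set by the observer gain. All matrices and gains are invented for illustration; they are not the paper's example.

      # Luenberger observer feeding a state-feedback law (illustrative sketch).
      import numpy as np

      A = np.array([[0.9, 0.1], [0.0, 0.8]])  # discrete-time dynamics (assumed)
      B = np.array([[0.0], [1.0]])
      Cm = np.array([[1.0, 0.0]])             # only the first state is measured
      Lg = np.array([[0.5], [0.3]])           # observer gain (assumed stabilizing)
      K = np.array([[0.2, 0.4]])              # feedback gain (stand-in for LQ gain)

      x = np.array([[1.0], [-1.0]])           # true state
      xh = np.zeros((2, 1))                   # observer state
      for _ in range(50):
          u = -K @ xh                         # control uses the estimate
          y = Cm @ x                          # measurement
          xh = A @ xh + B @ u + Lg @ (y - Cm @ xh)  # Luenberger update
          x = A @ x + B @ u

      print(np.linalg.norm(x - xh))           # estimation error has decayed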

  8. Inverting reflections using full-waveform inversion with inaccurate starting models

    KAUST Repository

    AlTheyab, Abdullah

    2015-08-19

    We present a method for inverting seismic reflections using full-waveform inversion (FWI) with inaccurate starting models. For a layered medium, near-offset reflections (with zero angle of incidence) are unlikely to be cycle-skipped regardless of the low-wavenumber velocity error in the initial models. Therefore, we use them as a starting point for FWI, and the subsurface velocity model is then updated during the FWI iterations using reflection wavepaths from varying offsets that are not cycle-skipped. To enhance low-wavenumber updates and accelerate the convergence, we take several passes through the non-linear Gauss-Seidel iterations, where we invert traces from a narrow range of near offsets and finally end at the far offsets. Every pass is followed by applying smoothing to the cumulative slowness update. The smoothing is strong at the early stages and relaxed at later iterations to allow for a gradual reconstruction of the subsurface model in a multiscale manner. Applications to synthetic and field data, starting from inaccurate models, show significant low-wavenumber updates and flattening of common-image gathers after many iterations.

  9. The Test of Inaccurate Position of Renography Detector to Relative Uptake Figure and Individual Excretion

    International Nuclear Information System (INIS)

    Bambang-Supardiyono; Prayitno

    2000-01-01

    Accurate positioning of the detector toward the kidney location (the precise position) in renography yields the maximum count figure; a change of detector position away from the precise point (inaccurate positioning) decreases the count rate. Therefore, the influence of a change in the right-kidney count figure (with the left-kidney count fixed) of ±5 % to ±20 % on the relative uptake figure and individual excretion was simulated. Based on the calculation, it was found that a detector position ±0.5 cm to ±2 cm from the precise point affects the relative uptake figure by ±(1.25 % to 5.00 %), while the individual excretion figure remains fixed. This change can still be accepted because qualitative information with 10 % accuracy is still acceptable. (author)

  10. Inaccurate DNA synthesis in cell extracts of yeast producing active human DNA polymerase iota.

    Directory of Open Access Journals (Sweden)

    Alena V Makarova

    2011-01-01

    Full Text Available Mammalian Pol ι has an unusual combination of properties: it is stimulated by Mn(2+) ions, can bypass some DNA lesions and misincorporates "G" opposite template "T" more frequently than it incorporates the correct "A." We recently proposed a method of detection of Pol ι activity in animal cell extracts, based on primer extension opposite the template T with a high concentration of only two nucleotides, dGTP and dATP (incorporation of "G" versus "A" method of Gening, abbreviated as "misGvA"). We provide unambiguous proof of the "misGvA" approach concept and extend the applicability of the method for the studies of variants of Pol ι in the yeast model system with different cation cofactors. We produced human Pol ι in baker's yeast, which does not have a POLI ortholog. The "misGvA" activity is absent in cell extracts containing an empty vector, or producing catalytically dead Pol ι, or Pol ι lacking exon 2, but is robust in the strain producing wild-type Pol ι or its catalytic core, or the protein with the active center L62I mutant. The signature pattern of primer extension products resulting from inaccurate DNA synthesis by extracts of cells producing either Pol ι or human Pol η is different. The DNA sequence of the template is critical for the detection of the infidelity of DNA synthesis attributed to DNA Pol ι. The primer/template and composition of the exogenous DNA precursor pool can be adapted to monitor replication fidelity in cell extracts expressing various error-prone Pols or mutator variants of accurate Pols. Finally, we demonstrate that the mutation rates in yeast strains producing human DNA Pols ι and η are not elevated over the control strain, despite highly inaccurate DNA synthesis by their extracts.

  11. The use of sequential indicator simulation to characterize geostatistical uncertainty

    International Nuclear Information System (INIS)

    Hansen, K.M.

    1992-10-01

    Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds
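
    SIS rests on coding the data as indicators at chosen thresholds; the simulated indicator field then estimates local probabilities of not exceeding each threshold. The coding step, with invented values:

      # Indicator transform of continuous data at several thresholds.
      import numpy as np

      z = np.array([0.4, 1.2, 0.9, 2.5, 0.1])     # measured values (assumed)
      thresholds = [0.5, 1.0, 2.0]

      for t in thresholds:
          ind = (z <= t).astype(int)              # 0/1 indicator data
          print(t, ind, ind.mean())               # global CDF estimate F(t)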

  12. Bayesian geostatistics in health cartography: the perspective of malaria.

    Science.gov (United States)

    Patil, Anand P; Gething, Peter W; Piel, Frédéric B; Hay, Simon I

    2011-06-01

    Maps of parasite prevalences and other aspects of infectious diseases that vary in space are widely used in parasitology. However, spatial parasitological datasets rarely, if ever, have sufficient coverage to allow exact determination of such maps. Bayesian geostatistics (BG) is a method for finding a large sample of maps that can explain a dataset, in which maps that do a better job of explaining the data are more likely to be represented. This sample represents the knowledge that the analyst has gained from the data about the unknown true map. BG provides a conceptually simple way to convert these samples to predictions of features of the unknown map, for example regional averages. These predictions account for each map in the sample, yielding an appropriate level of predictive precision.
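
    The conceptually simple conversion described above amounts to evaluating the feature of interest on every sampled map; a sketch with synthetic prevalence maps (shapes and values invented):

      # Regional-average prediction from a sample of posterior maps.
      import numpy as np

      rng = np.random.default_rng(2)
      maps = rng.beta(2, 8, size=(1000, 50, 50))  # stand-in posterior map sample
      region = np.zeros((50, 50), dtype=bool)
      region[10:30, 10:30] = True                 # region of interest (assumed)

      regional_avg = maps[:, region].mean(axis=1) # one average per sampled map
      print(regional_avg.mean())                          # point prediction
      print(np.percentile(regional_avg, [2.5, 97.5]))     # predictive interval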

  13. Geostatistical modeling of groundwater properties and assessment of their uncertainties

    International Nuclear Information System (INIS)

    Honda, Makoto; Yamamoto, Shinya; Sakurai, Hideyuki; Suzuki, Makoto; Sanada, Hiroyuki; Matsui, Hiroya; Sugita, Yutaka

    2010-01-01

    The distribution of groundwater properties is important for understanding deep underground hydrogeological environments. This paper proposes a geostatistical system for modeling groundwater properties which have a correlation with the ground resistivity data obtained from a widespread and exhaustive survey. That is, a methodology for the integration of resistivity data measured by various methods and a methodology for modeling groundwater properties using the integrated resistivity data have been developed. The proposed system has also been validated using the data obtained in the Horonobe Underground Research Laboratory project. Additionally, the quantification of uncertainties in the estimated model has been attempted through numerical simulations based on the data. As a result, the uncertainties of the proposed model were estimated to be lower than those of other traditional models. (author)

  14. A Geostatistical Approach to Indoor Surface Sampling Strategies

    DEFF Research Database (Denmark)

    Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg

    1990-01-01

    Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized and particulate surface ... using the global information. Then methods for choosing a proper sampling area for a single sample of dust on a table are given. The global contamination of an object is determined by a maximum likelihood estimator. Finally, it is shown how specified experimental goals can be included to determine ...

  15. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this paper we investigate a method for testing the mar assumption in the presence of other distributional constraints. We present methods to (approximately) compute a test statistic consisting of the ratio of two profile likelihood functions. This requires the optimization of the likelihood under no assumptions on the missingness mechanism, for which we use our recently proposed AI & M algorithm. We present experimental results on synthetic data that show that our approximate test statistic is a good indicator for whether data is mar relative to the given distributional assumptions.

  16. Geostatistical Interpolation of Particle-Size Curves in Heterogeneous Aquifers

    Science.gov (United States)

    Guadagnini, A.; Menafoglio, A.; Secchi, P.

    2013-12-01

    We address the problem of predicting the spatial field of particle-size curves (PSCs) from measurements associated with soil samples collected at a discrete set of locations within an aquifer system. Proper estimates of the full PSC are relevant to applications related to groundwater hydrology, soil science and geochemistry and aimed at modeling physical and chemical processes occurring in heterogeneous earth systems. Hence, we focus on providing kriging estimates of the entire PSC at unsampled locations. To this end, we treat particle-size curves as cumulative distribution functions, model their densities as functional compositional data and analyze them by embedding these into the Hilbert space of compositional functions endowed with the Aitchison geometry. On this basis, we develop a new geostatistical methodology for the analysis of spatially dependent functional compositional data. Our functional compositional kriging (FCK) approach allows providing predictions at unsampled location of the entire particle-size curve, together with a quantification of the associated uncertainty, by fully exploiting both the functional form of the data and their compositional nature. This is a key advantage of our approach with respect to traditional methodologies, which treat only a set of selected features (e.g., quantiles) of PSCs. Embedding the full PSC into a geostatistical analysis enables one to provide a complete characterization of the spatial distribution of lithotypes in a reservoir, eventually leading to improved predictions of soil hydraulic attributes through pedotransfer functions as well as of soil geochemical parameters which are relevant in sorption/desorption and cation exchange processes. We test our new method on PSCs sampled along a borehole located within an alluvial aquifer near the city of Tuebingen, Germany. The quality of FCK predictions is assessed through leave-one-out cross-validation. A comparison between hydraulic conductivity estimates obtained
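
    The embedding that makes ordinary geostatistical operations legitimate for compositions can be sketched in discrete form with the centered log-ratio (clr) transform; the two particle-size densities below are invented.

      # Aitchison (clr-based) averaging of discretized PSC densities.
      import numpy as np

      def clr(p):
          g = np.exp(np.mean(np.log(p), axis=-1, keepdims=True))  # geometric mean
          return np.log(p / g)

      def clr_inv(y):
          e = np.exp(y)
          return e / e.sum(axis=-1, keepdims=True)

      psc = np.array([[0.10, 0.30, 0.40, 0.20],   # two sampled PSC densities
                      [0.05, 0.25, 0.50, 0.20]])  # over four size classes

      avg = clr_inv(clr(psc).mean(axis=0))        # Aitchison, not arithmetic, mean
      print(avg, avg.sum())                       # a valid composition, sums to 1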

  17. Comparison of smoothness-constrained and geostatistically based cross-borehole electrical resistivity tomography for characterization of solute tracer plumes

    Directory of Open Access Journals (Sweden)

    Andreas Englert

    2016-10-01

    Full Text Available Experiments using electrical resistivity tomography (ERT) have shown promising results in reducing the uncertainty of solute plume characteristics related to estimates based on the analysis of local point measurements only. To explore the similarities and differences between two cross-borehole ERT inversion approaches for characterizing salt tracer plumes, namely the classical smoothness-constrained inversion and a geostatistically based approach, we performed two-dimensional synthetic experiments. Simplifying assumptions about the solute transport model and the electrical forward and inverse model allowed us to study the sensitivity of the ERT inversion approaches towards a variety of basic conditions, including the number of boreholes, measurement schemes, contrast between the plume and background electrical conductivity, use of a priori knowledge, and point conditioning. The results show that geostatistically based and smoothness-constrained inversions of electrical resistance data yield plume characteristics of similar quality, which can be further improved when point measurements are incorporated and advantageous measurement schemes are chosen. As expected, an increased number of boreholes included in the ERT measurement layout can highly improve the quality of inferred plume characteristics, while in this case the benefits of point conditioning and advantageous measurement schemes diminish. Both ERT inversion approaches are similarly sensitive to the noise level of the data and the contrast between the solute plume and background electrical conductivity, and robust with regard to biased input parameters, such as mean concentration, variance, and correlation length of the plume. Although sophisticated inversion schemes have recently become available, in which flow and transport as well as electrical forward models are coupled, these schemes effectively rely on a relatively simple geometrical parameterization of the hydrogeological model

  18. How Symmetrical Assumptions Advance Strategic Management Research

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul; Hallberg, Hallberg

    2014-01-01

    We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other application domains of the theory. We argue that assumptional symmetry leads to theoretical advancement by promoting the development of theory with greater falsifiability and stronger ontological grounding. Thus, strategic management theory may be advanced by systematically searching for asymmetrical...

  19. Geostatistical methods for the integrated information; Metodos geoestadisticos para la integracion de informacion

    Energy Technology Data Exchange (ETDEWEB)

    Cassiraga, E.F.; Gomez-Hernandez, J.J. [Departamento de Ingenieria Hidraulica y Medio Ambiente, Universidad Politecnica de Valencia, Valencia (Spain)

    1996-10-01

    The main objective of this report is to describe the different geostatistical techniques for integrating geophysical and hydrological parameters. We analyze the characteristics of the estimation methods used in other studies.

  20. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA

    2017-05-01

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed parameters.
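
    The variance-decomposition measure underlying these sensitivity indices is the first-order index S_i = Var(E[Y|X_i]) / Var(Y); a binning estimator for a toy two-parameter model is sketched below (the model and parameters are invented, not the Hanford setup).

      # First-order variance-based sensitivity indices by binning (illustrative).
      import numpy as np

      rng = np.random.default_rng(3)
      x1 = rng.uniform(0, 1, 100000)         # e.g., a boundary-condition parameter
      x2 = rng.uniform(0, 1, 100000)         # e.g., a permeability parameter
      y = 4.0 * x1 + np.sin(2 * np.pi * x2)  # toy model output

      def first_order_index(x, y, bins=50):
          edges = np.quantile(x, np.linspace(0, 1, bins + 1))
          idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
          cond_means = np.array([y[idx == b].mean() for b in range(bins)])
          return cond_means.var() / y.var()  # Var(E[Y|X]) / Var(Y)

      print(first_order_index(x1, y), first_order_index(x2, y))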

  1. Estimation of geotechnical parameters on the basis of geophysical methods and geostatistics

    Science.gov (United States)

    Brom, Aleksander; Natonik, Adrianna

    2017-12-01

    The paper presents a possible implementation of ordinary cokriging and geophysical investigation of humidity data acquired in geotechnical studies. The authors describe the concept of geostatistics, the terminology of geostatistical modelling, spatial correlation functions, the principles of solving cokriging systems, the advantages of (co-)kriging in comparison with other interpolation methods, and the obstacles in this type of approach. Cross-validation and a discussion of results are presented, with an indication of the prospects of applying similar procedures in other research.

  2. Analysis of dengue fever risk using geostatistics model in bone regency

    Science.gov (United States)

    Amran, Stang, Mallongi, Anwar

    2017-03-01

    This research aims to analyze dengue fever risk based on a geostatistical model in Bone Regency. Risk levels of dengue fever are denoted by the parameter of a binomial distribution. The effects of temperature, rainfall, elevation, and larvae abundance are investigated through the geostatistical model. A Bayesian hierarchical method is used in the estimation process. Using dengue fever data from eleven locations, this research shows that temperature and rainfall have a significant effect on dengue fever risk in Bone Regency.

  3. Assumptions of Multiple Regression: Correcting Two Misconceptions

    Directory of Open Access Journals (Sweden)

    Matt N. Williams

    2013-09-01

    Full Text Available In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: Firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.

  4. Wrong assumptions in the financial crisis

    NARCIS (Netherlands)

    Aalbers, M.B.

    2009-01-01

    Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to

  5. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    Science.gov (United States)

    Chan, George C. Y. [Bloomington, IN]; Hieftje, Gary M. [Bloomington, IN]

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies along the location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
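
    One plausible reading of the cross-over step, with invented numbers: apparent concentration is recorded along plasma viewing positions for both runs, and the intersection of the two calibrated curves is taken as the value free of plasma-related error.

      # Locate the cross-over of two calibrated curves (illustrative sketch).
      import numpy as np

      pos = np.linspace(0, 10, 11)          # viewing positions in the plasma
      orig = 5.0 + 0.30 * (pos - 4.3)       # apparent conc., original run
      diluted = 5.0 - 0.15 * (pos - 4.3)    # diluted run, dilution-corrected

      diff = orig - diluted
      i = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0][0]
      t = diff[i] / (diff[i] - diff[i + 1]) # linear interpolation of the crossing
      corrected = orig[i] + t * (orig[i + 1] - orig[i])
      print(corrected)                      # ~5.0, the assumed error-free value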

  6. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

    Science.gov (United States)

    Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations for when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence. PMID:22171810

  7. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
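
    The commentary's central point is easy to reproduce by simulation: with skewed, mean-zero errors, the OLS slope remains unbiased and the 95% confidence interval keeps close to nominal coverage in moderately large samples. Parameters below are illustrative.

      # Coverage of the OLS slope CI under non-normal (exponential) errors.
      import numpy as np

      rng = np.random.default_rng(4)
      true_slope, n, hits, reps = 0.5, 200, 0, 2000
      for _ in range(reps):
          x = rng.normal(size=n)
          e = rng.exponential(1.0, size=n) - 1.0   # skewed, mean-zero errors
          y = 1.0 + true_slope * x + e
          X = np.column_stack([np.ones(n), x])
          beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
          sigma2 = res[0] / (n - 2)                # residual variance estimate
          se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
          hits += abs(beta[1] - true_slope) < 1.96 * se

      print(hits / reps)   # close to 0.95 despite the non-normal errors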

  8. A Classification for a Geostatistical Index of Spatial Dependence

    Directory of Open Access Journals (Sweden)

    Enio Júnior Seidel

    Full Text Available ABSTRACT: In geostatistical studies, spatial dependence can generally be described by means of the semivariogram or, in complementary form, with a single index followed by its categorization to classify the degree of such dependence. The objective of this study was to construct a categorization for the spatial dependence index (SDI) proposed by Seidel and Oliveira (2014) in order to classify spatial variability in terms of weak, moderate, and strong dependence. Theoretical values were constructed from different degrees of spatial dependence, which served as a basis for calculation of the SDI. In view of the form of distribution and SDI descriptive measures, we developed a categorization for posterior classification of spatial dependence, specific to each semivariogram model. The SDI categorization was based on its median and 3rd quartile, allowing us to classify spatial dependence as weak, moderate, or strong. We established that for the spherical semivariogram: SDISpherical(%) ≤ 7 % (weak spatial dependence), 7 % < SDISpherical(%) ≤ 15 % (moderate spatial dependence), and SDISpherical(%) > 15 % (strong spatial dependence); for the exponential semivariogram: SDIExponential(%) ≤ 6 % (weak spatial dependence), 6 % < SDIExponential(%) ≤ 13 % (moderate spatial dependence), and SDIExponential(%) > 13 % (strong spatial dependence); and for the Gaussian semivariogram: SDIGaussian(%) ≤ 9 % (weak spatial dependence), 9 % < SDIGaussian(%) ≤ 20 % (moderate spatial dependence), and SDIGaussian(%) > 20 % (strong spatial dependence). The proposed categorization allows the user to transform the numerical values calculated for SDI into categories of variability of spatial dependence, with adequate power for explanation and comparison.
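
    The categorization reads naturally as a small lookup function; the thresholds below follow the cut-offs as reconstructed above.

      # Classify spatial dependence from the SDI (%) for a given model.
      def classify_sdi(model, sdi_percent):
          cuts = {"spherical": (7, 15), "exponential": (6, 13), "gaussian": (9, 20)}
          t1, t2 = cuts[model.lower()]
          if sdi_percent <= t1:
              return "weak spatial dependence"
          return "moderate spatial dependence" if sdi_percent <= t2 \
              else "strong spatial dependence"

      print(classify_sdi("spherical", 11.0))   # moderate spatial dependence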

  9. Bayesian geostatistical modeling of leishmaniasis incidence in Brazil.

    Directory of Open Access Journals (Sweden)

    Dimitrios-Alexios Karagiannis-Voules

    Full Text Available BACKGROUND: Leishmaniasis is endemic in 98 countries with an estimated 350 million people at risk and approximately 2 million cases annually. Brazil is one of the most severely affected countries. METHODOLOGY: We applied Bayesian geostatistical negative binomial models to analyze reported incidence data of cutaneous and visceral leishmaniasis in Brazil covering a 10-year period (2001-2010). Particular emphasis was placed on spatial and temporal patterns. The models were fitted using integrated nested Laplace approximations to perform fast approximate Bayesian inference. Bayesian variable selection was employed to determine the most important climatic, environmental, and socioeconomic predictors of cutaneous and visceral leishmaniasis. PRINCIPAL FINDINGS: For both types of leishmaniasis, precipitation and socioeconomic proxies were identified as important risk factors. The predicted number of cases in 2010 was 30,189 (standard deviation [SD]: 7,676) for cutaneous leishmaniasis and 4,889 (SD: 288) for visceral leishmaniasis. Our risk maps predicted the highest numbers of infected people in the states of Minas Gerais and Pará for visceral and cutaneous leishmaniasis, respectively. CONCLUSIONS/SIGNIFICANCE: Our spatially explicit, high-resolution incidence maps identified priority areas where leishmaniasis control efforts should be targeted with the ultimate goal to reduce disease incidence.

  10. Study on geological environment model using geostatistics method

    International Nuclear Information System (INIS)

    Honda, Makoto; Suzuki, Makoto; Sakurai, Hideyuki; Iwasa, Kengo; Matsui, Hiroya

    2005-03-01

    The purpose of this study is to develop a geostatistical procedure for modeling geological environments and to evaluate the quantitative relationship between the amount of information and the reliability of the model, using the data sets obtained in the surface-based investigation phase (Phase 1) of the Horonobe Underground Research Laboratory Project. The study runs for three years, from FY2004 to FY2006, and this report covers the research in FY2005, the second year of the three-year study. In FY2005, the hydrogeological model was built, as in the FY2004 research, using the data obtained from the deep boreholes (HDB-6, 7 and 8) and the ground magnetotelluric (AMT) survey executed in FY2004, in addition to the data sets used in the first year of the study. Above all, the relationship between the amount of information and the reliability of the model was demonstrated through a comparison of the models at each step, each step corresponding to the investigation stage in each fiscal year. Furthermore, statistical tests were applied to detect differences in the basic statistics of various data due to geological features, with a view to incorporating geological information into the modeling procedures. (author)

  11. Bayesian Analysis of Geostatistical Models With an Auxiliary Lattice

    KAUST Repository

    Park, Jincheol

    2012-04-01

    The Gaussian geostatistical model has been widely used for modeling spatial data. However, this model suffers from a severe difficulty in computation: it requires users to invert a large covariance matrix. This is infeasible when the number of observations is large. In this article, we propose an auxiliary lattice-based approach for tackling this difficulty. By introducing an auxiliary lattice to the space of observations and defining a Gaussian Markov random field on the auxiliary lattice, our model completely avoids the requirement of matrix inversion. It is remarkable that the computational complexity of our method is only O(n), where n is the number of observations. Hence, our method can be applied to very large datasets with reasonable computational (CPU) times. The numerical results indicate that our model can approximate Gaussian random fields very well in terms of predictions, even for those with long correlation lengths. For real data examples, our model can generally outperform conventional Gaussian random field models in both prediction errors and CPU times. Supplemental materials for the article are available online. © 2012 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
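
    To illustrate the computational argument (this is our sketch, not the authors' construction), a Gaussian Markov random field on a lattice has a sparse precision matrix, so kriging-type systems are solved sparsely instead of inverting a dense n x n covariance; the first-order form below is a standard stand-in:

```python
# Sparse lattice precision matrix Q built from a graph Laplacian plus a
# diagonal term; solving Q x = b avoids any dense covariance inversion.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

nx = ny = 100                      # 10,000 lattice nodes
n = nx * ny
# 2-D lattice Laplacian via the Kronecker sum of 1-D second differences.
d1 = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(nx, nx))
laplacian = sp.kronsum(d1, d1, format="csc")
tau, kappa = 1.0, 0.1              # illustrative precision parameters
Q = tau * (laplacian + kappa * sp.identity(n, format="csc"))

b = np.random.default_rng(0).normal(size=n)
x = spsolve(Q, b)                  # sparse solve; scales far better than O(n^3)
print(Q.nnz, "nonzeros out of", n * n, "entries")
```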

  12. Comparative study of the geostatistical ore reserve estimation method over the conventional methods

    International Nuclear Information System (INIS)

    Kim, Y.C.; Knudsen, H.P.

    1975-01-01

    Part I contains a comprehensive treatment of the comparative study of the geostatistical ore reserve estimation method over the conventional methods. The conventional methods chosen for comparison were: (a) the polygon method, (b) the inverse of the distance squared method, and (c) a method similar to (b) but allowing different weights in different directions. Briefly, the overall result from this comparative study is in favor of the use of geostatistics in most cases because the method has lived up to its theoretical claims. A good exposition on the theory of geostatistics, the adopted study procedures, conclusions and recommended future research are given in Part I. Part II of this report contains the results of the second and the third study objectives, which are to assess the potential benefits that can be derived by the introduction of the geostatistical method to the current state-of-the-art in uranium reserve estimation method and to be instrumental in generating the acceptance of the new method by practitioners through illustrative examples, assuming its superiority and practicality. These are given in the form of illustrative examples on the use of geostatistics and the accompanying computer program user's guide
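
    For readers unfamiliar with the conventional baselines, method (b) is simple enough to sketch; the function and data below are illustrative, not from the report:

```python
# Inverse-distance-squared estimator: grade at a target location is a
# 1/d^2-weighted mean of the surrounding sample grades.
import numpy as np

def idw_squared(sample_xy, sample_grades, target_xy, eps=1e-12):
    """Estimate grade at target_xy from samples at sample_xy."""
    d2 = np.sum((sample_xy - target_xy) ** 2, axis=1) + eps
    weights = 1.0 / d2
    return np.sum(weights * sample_grades) / np.sum(weights)

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
grades = np.array([0.21, 0.35, 0.18, 0.30])   # made-up sample grades
print(idw_squared(xy, grades, np.array([4.0, 6.0])))
```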

  13. Geostatistical Study of Precipitation on the Island of Crete

    Science.gov (United States)

    Agou, Vasiliki D.; Varouchakis, Emmanouil A.; Hristopulos, Dionissios T.

    2015-04-01

    precipitation which are fitted locally to a three-parameter probability distribution, based on which a normalized index is derived. We use the Spartan variogram function to model space-time correlations, because it is more flexible than classical models [3]. The performance of the variogram model is tested by means of leave-one-out cross validation. The variogram model is then used in connection with ordinary kriging to generate precipitation maps for the entire island. In the future, we will explore the joint spatiotemporal evolution of precipitation patterns on Crete. References [1] P. Goovaerts. Geostatistical approaches for incorporating elevation into the spatial interpolation of precipitation. Journal of Hydrology, 228(1):113-129, 2000. [2] N. B. Guttman. Accepting the standardized precipitation index: a calculation algorithm. American Water Resource Association, 35(2):311-322, 1999. [3] D. T Hristopulos. Spartan Gibbs random field models for geostatistical applications. SIAM Journal on Scientific Computing, 24(6):2125-2162, 2003. [4] A.G. Koutroulis, A.-E.K. Vrohidou, and I.K. Tsanis. Spatiotemporal characteristics of meteorological drought for the island of Crete. Journal of Hydrometeorology, 12(2):206-226, 2011. [5] T. B. McKee, N. J. Doesken, and J. Kleist. The relationship of drought frequency and duration to time scales. In Proceedings of the 8th Conference on Applied Climatology, page 179-184, Anaheim, California, 1993.
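
    The leave-one-out procedure mentioned above is generic and easy to sketch; here a simple-kriging predictor with an exponential covariance stands in for the Spartan model, which requires its own implementation, and all data are synthetic:

```python
# Leave-one-out cross-validation of a spatial predictor: hold out each
# station in turn, predict it from the rest, and summarize the errors.
import numpy as np

def exp_cov(h, sill=1.0, corr_len=20.0):
    return sill * np.exp(-h / corr_len)

def simple_kriging_predict(xy, z, target, mean):
    d = np.linalg.norm(xy - target, axis=1)
    K = exp_cov(np.linalg.norm(xy[:, None] - xy[None, :], axis=2))
    K[np.diag_indices_from(K)] += 1e-8          # numerical jitter
    w = np.linalg.solve(K, exp_cov(d))
    return mean + w @ (z - mean)

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(50, 2))          # station coordinates
z = rng.normal(600, 150, size=50)               # e.g. annual precipitation, mm

errors = []
for i in range(len(z)):
    mask = np.arange(len(z)) != i
    pred = simple_kriging_predict(xy[mask], z[mask], xy[i], z[mask].mean())
    errors.append(pred - z[i])
print("LOO RMSE:", np.sqrt(np.mean(np.square(errors))))
```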

  14. Unsupervised classification of multivariate geostatistical data: Two algorithms

    Science.gov (United States)

    Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques

    2015-12-01

    With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, describe a growing number of variables and cover wider and wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into zones that are homogeneous with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on e.g. Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model free and can handle large volumes of multivariate, irregularly spaced data. The first one proceeds by agglomerative hierarchical clustering, in which spatial coherence is ensured by a proximity condition imposed for two clusters to merge; this condition relies on a graph organizing the data in the coordinate space, so the hierarchical algorithm can be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.
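
    The proximity-graph idea behind the first algorithm can be sketched with off-the-shelf tools; the snippet below uses scikit-learn's connectivity-constrained agglomerative clustering as a stand-in for the authors' algorithm, with synthetic data:

```python
# Agglomerative clustering where only spatially neighboring samples may be
# merged, enforced by a k-nearest-neighbor graph in coordinate space.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(7)
coords = rng.uniform(0, 100, size=(500, 2))        # irregular sample locations
values = np.where(coords[:, 0] < 50,               # two regimes plus noise
                  rng.normal(0, 1, 500), rng.normal(5, 1, 500))

connectivity = kneighbors_graph(coords, n_neighbors=10, include_self=False)
labels = AgglomerativeClustering(
    n_clusters=2, connectivity=connectivity, linkage="ward"
).fit_predict(values.reshape(-1, 1))
print(np.bincount(labels))                         # sizes of the two zones
```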

  15. Geostatistical ore reserve estimation for a roll-front type uranium deposit (practitioner's guide)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y.C.; Knudsen, H.P.

    1977-01-01

    This report comprises two parts. Part I contains illustrative examples of each phase of a geostatistical study using a roll-front type uranium deposit. Part II contains five computer programs and comprehensive users' manuals for these programs which are necessary to make a practical geostatistical study. (LK)

  16. Application of Bayesian geostatistics for evaluation of mass discharge uncertainty at contaminated sites

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Lange, Ida Vedel

    2012-01-01

    Mass discharge estimates are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Such estimates are, however, rather uncertain as they integrate uncertain spatial distributions of both concentration and groundwater flow. Here a geostatistical simulation method for quantifying the uncertainty of the mass discharge across a multilevel control plane is presented. The method accounts for (1) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics, (2) measurement uncertainty, and (3) uncertain source zone and transport parameters. The method generates conditional realizations of the spatial flow and concentration distribution. An analytical macrodispersive transport solution is employed to simulate the mean concentration distribution, and a geostatistical model of the Box…

  17. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    Rose, Jeremy; Aaen, Ivan; Nielsen, Peter Axel

    2008-01-01

    … thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps... in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?...

  18. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  19. The Possibilities and Limitations of Geostatistical Methods in Real Estate Market Analyses

    Directory of Open Access Journals (Sweden)

    Cellmer Radosław

    2014-10-01

    Full Text Available In the traditional approach, geostatistical modeling involves analyses of the spatial structure of regionalized data, as well as estimations and simulations that rely on kriging methods. Geostatistical methods can complement traditional statistical models of property transaction prices, and when combined with those models, they offer a comprehensive tool for spatial analysis that is used in the process of developing land value maps. Transaction prices are characterized by mutual spatial correlations and can be considered as regionalized variables. They can also be regarded as random variables that have a local character and a specific probability distribution.

  20. Inverse Tasks In The Tsunami Problem: Nonlinear Regression With Inaccurate Input Data

    Science.gov (United States)

    Lavrentiev, M.; Shchemel, A.; Simonov, K.

    A variant of a modified training functional that allows considering inaccurate input data is suggested. A limiting case, when a part of the input data is completely undefined and, therefore, a problem of reconstruction of hidden parameters should be solved, is also considered. Some numerical experiments are presented. The classic problem definition, widely used in the majority of neural net algorithms, assumes that a dependence of known output variables on known input ones should be found. The quality of approximation is evaluated as a performance function. Often the error of the task is evaluated as the squared distance between known data and predicted data, multiplied by weighting coefficients. These coefficients may be named "precision coefficients". When inputs are not known exactly, a natural generalization of the performance function is to add a member responsible for the distance between the known inputs and shifted inputs that lessen the model's error. It is desirable that the set of variable parameters be compact for training to converge. In the above problem it is possible to choose variants of a priori compactness demands, which allow meaningful interpretation in terms of the smoothness of the model dependence. Two kinds of regularization were used: the first limited the squares of the coefficients responsible for nonlinearity, and the second limited the product of the above coefficients and the linear coefficients. The asymptotic universality of a neural net's ability to approximate various smooth functions with any accuracy by increasing the number of tunable parameters is often the basis for selecting a type of neural net approximation. It is possible to show that the neural net used approaches the Fourier integral transform, whose approximation abilities are known, as the number of tunable parameters increases. In the limiting case, when input data are given with zero precision, the problem of reconstruction of hidden parameters from observed output data appears. The…
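
    A hedged sketch of the modified training functional described above: the loss combines the output misfit with a penalty on the distance between shifted and measured inputs. The model, weights, and optimizer are illustrative choices, not the authors':

```python
# Fit both model parameters theta and shifted inputs x_shift, anchoring the
# shifts to the (inaccurately) measured inputs via a "precision" weight lam.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x_true = rng.uniform(-2, 2, 40)
y = np.sin(x_true) + rng.normal(0, 0.05, 40)       # outputs
x_meas = x_true + rng.normal(0, 0.3, 40)           # inaccurately measured inputs
lam = 2.0                                          # input precision coefficient

def model(x, theta):                               # small cubic stand-in model
    return theta[0] + theta[1]*x + theta[2]*x**2 + theta[3]*x**3

def loss(params):
    theta, x_shift = params[:4], params[4:]
    fit = np.sum((model(x_shift, theta) - y) ** 2)   # output misfit
    anchor = lam * np.sum((x_shift - x_meas) ** 2)   # keep shifts near data
    return fit + anchor

init = np.concatenate([np.zeros(4), x_meas])
res = minimize(loss, init, method="L-BFGS-B")
print("fitted theta:", np.round(res.x[:4], 3))
```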

  1. Extracurricular Business Planning Competitions: Challenging the Assumptions

    Science.gov (United States)

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  2. Culturally Biased Assumptions in Counseling Psychology

    Science.gov (United States)

    Pedersen, Paul B.

    2003-01-01

    Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that it permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…

  3. Mexican-American Cultural Assumptions and Implications.

    Science.gov (United States)

    Carranza, E. Lou

    The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…

  4. Assumptions of Multiple Regression: Correcting Two Misconceptions

    Science.gov (United States)

    Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason

    2013-01-01

    In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…

  5. Categorical Judgment Scaling with Ordinal Assumptions.

    Science.gov (United States)

    Hofacker, C F

    1984-01-01

    One of the most common activities of psychologists and other researchers is to construct Likert scales and then proceed to analyze them as if the numbers constituted an equal interval scale. There are several alternatives to this procedure (Thurstone & Chave, 1929; Muthen, 1983) that make normality assumptions but which do not assume that the answer categories as used by subjects constitute an equal interval scale. In this paper a new alternative is proposed that uses additive conjoint measurement. It is assumed that subjects can report their attitudes towards stimuli in the appropriate rank order. Neither within-subject nor between-subject distributional assumptions are made. Nevertheless, interval level stimulus values, as well as response category boundaries, are extracted by the procedure. This approach is applied to three sets of attitude data. In these three cases, the equal interval assumption is clearly wrong. Despite this, arithmetic means seem to closely reflect group attitudes towards the stimuli. In one data set, the normality assumption of Thurstone and Chave (1929) and Muthen (1983) is supported, and in the two others it is supported with reservations.

  6. Critically Challenging Some Assumptions in HRD

    Science.gov (United States)

    O'Donnell, David; McGuire, David; Cross, Christine

    2006-01-01

    This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…

  7. Causal Mediation Analysis: Warning! Assumptions Ahead

    Science.gov (United States)

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  8. Shattering world assumptions: A prospective view of the impact of adverse events on world assumptions.

    Science.gov (United States)

    Schuler, Eric R; Boals, Adriel

    2016-05-01

    Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. A spatiotemporal geostatistical hurdle model approach for short-term deforestation prediction

    NARCIS (Netherlands)

    Ribeiro Sales, Marcio; Bruin, De Sytze; Herold, Martin; Kyriakidis, Phaedon; Souza, Carlos

    2017-01-01

    This paper introduces and tests a geostatistical spatiotemporal hurdle approach for predicting the spatial distribution of future deforestation (one to three years ahead in time). The method accounts for neighborhood effects by modeling the auto-correlation of occurrence and intensity of…

  10. Assessment and modeling of the groundwater hydrogeochemical quality parameters via geostatistical approaches

    Science.gov (United States)

    Karami, Shawgar; Madani, Hassan; Katibeh, Homayoon; Fatehi Marj, Ahmad

    2018-03-01

    Geostatistical methods are one of the advanced techniques used for interpolation of groundwater quality data. The results obtained from geostatistics will be useful for decision makers to adopt suitable remedial measures to protect the quality of groundwater sources. Data used in this study were collected from 78 wells in the Varamin plain aquifer, located southeast of Tehran, Iran, in 2013. The ordinary kriging method was used in this study to evaluate groundwater quality parameters. As described in this paper, seven main quality parameters (i.e. total dissolved solids (TDS), sodium adsorption ratio (SAR), electrical conductivity (EC), sodium (Na+), total hardness (TH), chloride (Cl-) and sulfate (SO42-)) were analyzed and interpreted by statistical and geostatistical methods. After data normalization by the Nscore method in WinGslib software, variography was carried out as the geostatistical tool for describing spatial structure, and experimental variograms were plotted in GS+ software. Then, the best theoretical model was fitted to each variogram based on the minimum RSS. The cross-validation method was used to determine the accuracy of the estimated data. Eventually, estimation maps of groundwater quality were prepared in WinGslib software, and estimation variance and estimation error maps were presented to evaluate the quality of the estimation at each estimated point. Results showed that the kriging method is more accurate than traditional interpolation methods.
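
    As an illustration of the same workflow (the study itself used WinGslib and GS+), the ordinary-kriging steps can be reproduced with an open-source library such as PyKrige; the data below are synthetic:

```python
# Ordinary kriging of a groundwater quality parameter with a spherical
# variogram, producing an estimation map and a kriging-variance map.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(5)
x, y = rng.uniform(0, 50, 78), rng.uniform(0, 50, 78)   # 78 well locations
tds = rng.normal(1200, 300, 78)                         # TDS, mg/L (synthetic)

ok = OrdinaryKriging(x, y, tds, variogram_model="spherical")
gridx = np.linspace(0, 50, 100)
gridy = np.linspace(0, 50, 100)
estimates, variances = ok.execute("grid", gridx, gridy)  # map + kriging variance
print(estimates.shape, variances.shape)
```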

  11. Assessment of geostatistical features for object-based image classification of contrasted landscape vegetation cover

    Science.gov (United States)

    de Oliveira Silveira, Eduarda Martiniano; de Menezes, Michele Duarte; Acerbi Júnior, Fausto Weimar; Castro Nunes Santos Terra, Marcela; de Mello, José Márcio

    2017-07-01

    Accurate mapping and monitoring of savanna and semiarid woodland biomes are needed to support the selection of areas of conservation, to provide sustainable land use, and to improve the understanding of vegetation. The potential of geostatistical features, derived from medium spatial resolution satellite imagery, to characterize contrasted landscape vegetation cover and improve object-based image classification is studied. The study site in Brazil includes cerrado sensu stricto, deciduous forest, and palm swamp vegetation cover. Sentinel 2 and Landsat 8 images were acquired and divided into objects, for each of which a semivariogram was calculated using near-infrared (NIR) and normalized difference vegetation index (NDVI) to extract the set of geostatistical features. The features selected by principal component analysis were used as input data to train a random forest algorithm. Tests were conducted, combining spectral and geostatistical features. Change detection evaluation was performed using a confusion matrix and its accuracies. The semivariogram curves were efficient to characterize spatial heterogeneity, with similar results using NIR and NDVI from Sentinel 2 and Landsat 8. Accuracy was significantly greater when combining geostatistical features with spectral data, suggesting that this method can improve image classification results.
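
    The per-object geostatistical features can be sketched as the ordinates of an experimental semivariogram computed from an object's pixel values; the implementation below is ours, for illustration only:

```python
# Experimental semivariogram of one image object's pixel values; the gamma
# values at a few lags serve as geostatistical classification features.
import numpy as np

def semivariogram(coords, values, lags, tol):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs with |d_ij - h| < tol."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (np.abs(d - h) < tol) & (d > 0)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(11)
coords = np.argwhere(np.ones((15, 15), dtype=bool))   # pixel grid of one object
ndvi = rng.normal(0.6, 0.05, len(coords))             # synthetic NDVI values
features = semivariogram(coords, ndvi, lags=[1, 2, 4, 8], tol=0.5)
print(features)   # semivariogram ordinates used as features
```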

  12. Spatial interpolation of forest conditions using co-conditional geostatistical simulation

    Science.gov (United States)

    H. Todd Mowrer

    2000-01-01

    In recent work the author used the geostatistical Monte Carlo technique of sequential Gaussian simulation (s.G.s.) to investigate uncertainty in a GIS analysis of potential old-growth forest areas. The current study compares this earlier technique to that of co-conditional simulation, wherein the spatial cross-correlations between variables are included. As in the...

  13. Space-time geostatistics for geography: a case study of radiation monitoring across parts of Germany

    NARCIS (Netherlands)

    Heuvelink, G.B.M.; Griffith, D.A.

    2010-01-01

    Many branches within geography deal with variables that vary not only in space but also in time. Therefore, conventional geostatistics needs to be extended with methods that estimate and quantify spatiotemporal variation and use it in spatiotemporal interpolation and stochastic simulation. This…

  14. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior…

  15. Genome-wide selection by mixed model ridge regression and extensions based on geostatistical models.

    Science.gov (United States)

    Schulz-Streeck, Torben; Piepho, Hans-Peter

    2010-03-31

    The success of genome-wide selection (GS) approaches will depend crucially on the availability of efficient and easy-to-use computational tools. Therefore, approaches that can be implemented using mixed models hold particular promise and deserve detailed study. A particular class of mixed models suitable for GS is given by geostatistical mixed models, when genetic distance is treated analogously to spatial distance in geostatistics. We consider various spatial mixed models for use in GS. The analyses presented for the QTL-MAS 2009 dataset pay particular attention to the modelling of residual errors as well as of polygenic effects. It is shown that geostatistical models are viable alternatives to ridge regression, one of the common approaches to GS. Correlations between genome-wide estimated breeding values and true breeding values were between 0.879 and 0.889. In the example considered, we did not find a large effect of the residual error variance modelling, largely because error variances were very small. A variance components model reflecting the pedigree of the crosses did not provide an improved fit. We conclude that geostatistical models deserve further study as a tool for GS that is easily implemented in a mixed model package.

  16. Combining Geostatistics with Moran’s I Analysis for Mapping Soil Heavy Metals in Beijing, China

    Directory of Open Access Journals (Sweden)

    Bao-Guo Li

    2012-03-01

    Full Text Available Production of high quality interpolation maps of heavy metals is important for risk assessment of environmental pollution. In this paper, the spatial correlation information obtained from Moran’s I analysis was used to supplement traditional geostatistics. According to the Moran’s I analysis, four characteristic distances were obtained and used as the active lag distance to calculate the semivariance. Validation of the optimality of the semivariance demonstrated that using the two distances at which Moran’s I and the standardized Moran’s I, Z(I), reached a maximum as the active lag distance can improve the fitting accuracy of the semivariance. Then, spatial interpolation was produced based on the two distances and their nested model. The comparative analysis of estimation accuracy and of the measured and predicted pollution status showed that the method combining geostatistics with Moran’s I analysis was better than traditional geostatistics. Thus, Moran’s I analysis is a useful complement to geostatistics for improving the spatial interpolation accuracy of heavy metals.
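
    For reference, global Moran's I is straightforward to compute; the distance-band weights below are one common choice, and the data are synthetic:

```python
# Global Moran's I with binary distance-band weights w_ij = 1 if 0 < d_ij <= band.
import numpy as np

def morans_i(coords, z, band):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = ((d > 0) & (d <= band)).astype(float)
    zc = z - z.mean()
    num = (w * np.outer(zc, zc)).sum()
    return len(z) / w.sum() * num / (zc @ zc)

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, size=(200, 2))
metal = coords[:, 0] * 0.01 + rng.normal(0, 0.3, 200)   # weak spatial trend
for band in (10, 20, 40):                                # candidate lag distances
    print(band, round(morans_i(coords, metal, band), 3))
```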

  17. The 'revealed preferences' theory: Assumptions and conjectures

    International Nuclear Information System (INIS)

    Green, C.H.

    1983-01-01

    Being a kind of intuitive psychology, the 'Revealed-Preferences'-theory-based approaches towards determining acceptable risks are a useful method for the generation of hypotheses. In view of the fact that reliability engineering develops faster than methods for the determination of reliability aims, the Revealed-Preferences approach is a necessary preliminary help. Some of the assumptions on which the 'Revealed-Preferences' theory is based will be identified and analysed and afterwards compared with experimentally obtained results. (orig./DG) [de

  18. How to Handle Assumptions in Synthesis

    Directory of Open Access Journals (Sweden)

    Roderick Bloem

    2014-07-01

    Full Text Available The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

  19. Towards New Probabilistic Assumptions in Business Intelligence

    OpenAIRE

    Schumann Andrew; Szelc Andrzej

    2015-01-01

    One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot ...

  20. Introduction to This Special Issue on Geostatistics and Geospatial Techniques in Remote Sensing

    Science.gov (United States)

    Atkinson, Peter; Quattrochi, Dale A.; Goodman, H. Michael (Technical Monitor)

    2000-01-01

    The germination of this special Computers & Geosciences (C&G) issue began at the Royal Geographical Society (with the Institute of British Geographers) (RGS-IBG) annual meeting in January 1997 held at the University of Exeter, UK. The snow and cold of the English winter were tempered greatly by warm and cordial discussion of how to stimulate and enhance cooperation on geostatistical and geospatial research in remote sensing 'across the big pond' between UK and US researchers. It was decided that one way forward would be to hold parallel sessions in 1998 on geostatistical and geospatial research in remote sensing at appropriate venues in both the UK and the US. Selected papers given at these sessions would be published as special issues of C&G on the UK side and Photogrammetric Engineering and Remote Sensing (PE&RS) on the US side. These issues would highlight the commonality in research on geostatistical and geospatial research in remote sensing on both sides of the Atlantic Ocean. As a consequence, a session on "Geostatistics and Geospatial Techniques for Remote Sensing of Land Surface Processes" was held at the RGS-IBG annual meeting in Guildford, Surrey, UK in January 1998, organized by the Modeling and Advanced Techniques Special Interest Group (MAT SIG) of the Remote Sensing Society (RSS). A similar session was held at the Association of American Geographers (AAG) annual meeting in Boston, Massachusetts in March 1998, sponsored by the AAG's Remote Sensing Specialty Group (RSSG). The 10 papers that make up this issue of C&G comprise 7 papers from the UK and 3 papers from the US. We are both co-editors of each of the journal special issues, with the lead editor of each journal issue being from their respective side of the Atlantic. The special issue of PE&RS (vol. 65) that constitutes the other half of this co-edited journal series was published in early 1999, comprising 6 papers by US authors. We are indebted to the International Association for Mathematical

  1. Hypo- and hypernatremia results in inaccurate erythrocyte mean corpuscular volume measurement in vitro, when using Sysmex XE 2100

    DEFF Research Database (Denmark)

    Phillipsen, Jens Peter; Madsen, Kirsten Vikkelsø

    2015-01-01

    INTRODUCTION: Automated hematology analyzers dilute patient erythrocytes with an isoosmotic diluent before quantitating the erythrocyte mean cell volume (MCV). However, if patient plasma osmolality differs from the diluent, water will cross the erythrocyte membrane and establish a new equilibrium... resulted in inaccurate MCV. The experimental results also revealed a strong correlation between P-Osmolality/P-Sodium and MCV inaccuracy (R2 = 0.70/0.85), similar to the theoretically calculated MCV inaccuracy. We suggest using mean cellular Hb (MCH) instead of MCV, mean corpuscular Hb concentration (MCHC...

  2. New Assumptions to Guide SETI Research

    Science.gov (United States)

    Colombano, S. P.

    2018-01-01

    The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.

  3. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

    Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom failed to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires use of multipole electrostatics and polarizability in molecular modeling.

  4. Assumptions for the Annual Energy Outlook 1992

    International Nuclear Information System (INIS)

    1992-01-01

    This report serves as an auxiliary document to the Energy Information Administration (EIA) publication Annual Energy Outlook 1992 (AEO) (DOE/EIA-0383(92)), released in January 1992. The AEO forecasts were developed for five alternative cases and consist of energy supply, consumption, and price projections by major fuel and end-use sector, which are published at a national level of aggregation. The purpose of this report is to present important quantitative assumptions, including world oil prices and macroeconomic growth, underlying the AEO forecasts. The report has been prepared in response to external requests, as well as analyst requirements for background information on the AEO and studies based on the AEO forecasts

  5. Explorations in statistics: the assumption of normality.

    Science.gov (United States)

    Curran-Everett, Douglas

    2017-09-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This twelfth installment of Explorations in Statistics explores the assumption of normality, an assumption essential to the meaningful interpretation of a t test. Although the data themselves can be consistent with a normal distribution, they need not be. Instead, it is the theoretical distribution of the sample mean or the theoretical distribution of the difference between sample means that must be roughly normal. The most versatile approach to assess normality is to bootstrap the sample mean, the difference between sample means, or t itself. We can then assess whether the distributions of these bootstrap statistics are consistent with a normal distribution by studying their normal quantile plots. If we suspect that an inference we make from a t test may not be justified-if we suspect that the theoretical distribution of the sample mean or the theoretical distribution of the difference between sample means is not normal-then we can use a permutation method to analyze our data. Copyright © 2017 the American Physiological Society.
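
    A short sketch of the bootstrap approach described above (sample data and sizes are illustrative):

```python
# Resample the data, collect bootstrap sample means, and check the normal
# quantile plot of the bootstrap distribution via scipy.stats.probplot.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
sample = rng.exponential(scale=3.0, size=30)     # clearly non-normal data

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

# Correlation coefficient of the normal quantile plot: values near 1 suggest
# the bootstrap distribution of the mean is roughly normal.
(osm, osr), (slope, intercept, r) = stats.probplot(boot_means, dist="norm")
print(f"normal quantile plot r = {r:.4f}")
```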

  6. Forecasting Renewable Energy Consumption under Zero Assumptions

    Directory of Open Access Journals (Sweden)

    Jie Ma

    2018-02-01

    Full Text Available Renewable energy, as an environmentally friendly and sustainable source of energy, is key to realizing the nationally determined contributions of the United States (US) to the December 2015 Paris agreement. Policymakers in the US rely on energy forecasts to draft and implement cost-minimizing, efficient and realistic renewable and sustainable energy policies, but the inaccuracies in past projections are considerably high. The inaccuracies and inconsistencies in forecasts are due to the numerous factors considered, massive assumptions and modeling flaws in the underlying models. Here, we propose and apply a machine learning forecasting algorithm devoid of massive independent variables and assumptions to model and forecast renewable energy consumption (REC) in the US. We employ the forecasting technique to make projections on REC from biomass (REC-BMs) and hydroelectric (HE-EC) sources for the 2009–2016 period. We find that, relative to reference case projections in the Energy Information Administration’s Annual Energy Outlook 2008, projections based on our proposed technique present an enormous improvement of up to ~138.26-fold on REC-BMs and ~24.67-fold on HE-EC; and that applying our technique saves the US ~2,692.62 petajoules (PJ) on HE-EC and ~9,695.09 PJ on REC-BMs over the 8-year forecast period. The achieved high accuracy is also replicable in other regions.

  7. Increasing the predictive power of geostatistical reservoir models by integration of geological constraints from stratigraphic forward modeling

    NARCIS (Netherlands)

    Sacchi, Q.; Borello, E.S.; Weltje, G.J.; Dalman, R.

    2016-01-01

    Current static reservoir models are created by quantitative integration of interpreted well and seismic data through geostatistical tools. In these models, equiprobable realizations of structural settings and property distributions can be generated by stochastic simulation techniques. The…

  8. SRS 2010 Vegetation Inventory GeoStatistical Mapping Results for Custom Reaction Intensity and Total Dead Fuels.

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Lloyd A. [Leading Solutions, LLC.; Paresol, Bernard [U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station, Portland, OR.

    2014-09-01

    This report presents the geostatistical analysis results for the fire fuels response variables, custom reaction intensity and total dead fuels, as one part of the SRS 2010 vegetation inventory project. For a detailed description of the project, theory and background, including sample design, methods, and results, please refer to the USDA Forest Service Savannah River Site internal report “SRS 2010 Vegetation Inventory GeoStatistical Mapping Report” (Edwards & Parresol 2013).

  9. Challenging the assumptions for thermal sensation scales

    DEFF Research Database (Denmark)

    Schweiker, Marcel; Fuchs, Xaver; Becker, Susanne

    2016-01-01

    Scales are widely used to assess the personal experience of thermal conditions in built environments. Most commonly, thermal sensation is assessed, mainly to determine whether a particular thermal condition is comfortable for individuals. A seven-point thermal sensation scale has been used...... extensively, which is suitable for describing a one-dimensional relationship between physical parameters of indoor environments and subjective thermal sensation. However, human thermal comfort is not merely a physiological but also a psychological phenomenon. Thus, it should be investigated how scales for its...... assessment could benefit from a multidimensional conceptualization. The common assumptions related to the usage of thermal sensation scales are challenged, empirically supported by two analyses. These analyses show that the relationship between temperature and subjective thermal sensation is non...

  10. Use of geostatistics on broiler production for evaluation of different minimum ventilation systems during brooding phase

    Directory of Open Access Journals (Sweden)

    Thayla Morandi Ridolfi de Carvalho

    2012-01-01

    Full Text Available The objective of this research was to evaluate different minimum ventilation systems in relation to air quality and thermal comfort, using geostatistics, during the brooding phase. The minimum ventilation systems were: Blue House I: exhaust fans + curtain management (end of the building); Blue House II: exhaust fans + side curtain management; and Dark House: exhaust fans + flag. The climate variables evaluated were: dry bulb temperature, relative humidity, air velocity, and carbon dioxide and ammonia concentrations, during winter time, at 9 a.m., at 80 equidistant points in the brooding area. Data were evaluated by geostatistical techniques. The results indicate that wider broiler houses (above 15.0 m width) present the greatest ammonia and humidity concentrations. Blue House II presented the best results in relation to air quality. However, none of the studied broiler houses presented ideal thermal comfort.

  11. Evaluation of spatial variability of metal bioavailability in soils using geostatistics

    DEFF Research Database (Denmark)

    Owsianiak, Mikolaj; Hauschild, Michael Zwicky; Rosenbaum, Ralph K.

    2012-01-01

    Soil properties show significant spatial variability at local, regional and continental scales. This is a challenge for life cycle impact assessment (LCIA) of metals, because fate, bioavailability and effect factors are controlled by environmental chemistry and can vary orders of magnitude... for different soils. Here, variography is employed to analyse spatial variability of bioavailability factors (BFs) of metals at the global scale. First, published empirical regressions are employed to calculate BFs of metals for 7180 topsoil profiles. Next, geostatistical interpretation of calculated BFs... is performed using ArcGIS Geostatistical Analyst. Results show that BFs of copper span a range of 6 orders of magnitude, and have significant spatial variability at local and continental scales. The model nugget variance is significantly higher than zero, suggesting the presence of spatial variability

  12. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across… and the hydraulic gradient across the control plane and are consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box…

  13. Spatially explicit Schistosoma infection risk in eastern Africa using Bayesian geostatistical modelling

    DEFF Research Database (Denmark)

    Schur, Nadine; Hürlimann, Eveline; Stensgaard, Anna-Sofie

    2013-01-01

    … Africa. Bayesian geostatistical models based on climatic and other environmental data were used to account for potential spatial clustering in spatially structured exposures. Geostatistical variable selection was employed to reduce the set of covariates. Alignment factors were implemented to combine surveys on different age-groups and to acquire separate estimates for individuals aged ≤20 years and entire communities. Prevalence estimates were combined with population statistics to obtain country-specific numbers of Schistosoma infections. We estimate that 122 million individuals in eastern Africa are currently infected with either S. mansoni, or S. haematobium, or both species concurrently. Country-specific population-adjusted prevalence estimates range between 12.9% (Uganda) and 34.5% (Mozambique) for S. mansoni and between 11.9% (Djibouti) and 40.9% (Mozambique) for S. haematobium. Our models revealed…

  14. Geostatistical Spatio-Time model of crime in el Salvador: Structural and Predictive Analysis

    Directory of Open Access Journals (Sweden)

    Welman Rosa Alvarado

    2011-07-01

    Full Text Available Today, studying geospatial and spatio-temporal phenomena requires statistical tools that enable the analysis of spatial dependence, temporal dependence and their interactions. The science that studies this kind of subject is geostatistics, whose goal is to predict spatial phenomena. This science is considered the basis for modeling phenomena that involve interactions between space and time. In the past 10 years, geostatistics has seen great development in areas like geology, soils, remote sensing, epidemiology, agriculture, ecology, economy, etc. In this research, geostatistics is applied to build a predictive map of crime in El Salvador; for that purpose, the joint variability of space and time is studied to generate crime scenarios: crime hot spots are determined and crime-vulnerable groups are identified, to improve policy decisions and support decision makers in addressing insecurity in the country.

  15. Big assumptions for small samples in crop insurance

    Science.gov (United States)

    Ashley Elaine Hungerford; Barry Goodwin

    2014-01-01

    The purpose of this paper is to investigate the effects of crop insurance premiums being determined by small samples of yields that are spatially correlated. If spatial autocorrelation and small sample size are not properly accounted for in premium ratings, the premium rates may inaccurately reflect the risk of a loss.

  16. A geostatistical estimation of zinc grade in bore-core samples

    International Nuclear Information System (INIS)

    Starzec, A.

    1987-01-01

    Possibilities and preliminary results of geostatistical interpretation of the XRF determination of zinc in bore-core samples are considered. For the spherical model of the variogram, the estimation variance of grade in a disk-shaped sample (estimated from the grade on the circumference of the sample) is calculated. Variograms of zinc grade in core samples are presented and examples of the grade estimation are discussed. 4 refs., 7 figs., 1 tab. (author)
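
    The spherical model underlying such calculations is compact enough to sketch (parameter names are ours):

```python
# Spherical semivariogram model for lags h > 0: rises as
# 1.5*(h/a) - 0.5*(h/a)^3 between nugget and sill, flat past the range a.
import numpy as np

def spherical(h, nugget, sill, a):
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, gamma, sill)

lags = np.array([0.5, 1, 2, 5, 10, 20])
print(spherical(lags, nugget=0.1, sill=1.0, a=10.0))
```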

  17. Heritability of slow and/or inaccurate reading ability in 33,000 adult twins with self-reported data

    DEFF Research Database (Denmark)

    Fibiger-Dagnæs, Steen; von Bornemann Hjelmborg, Jacob; Erbs, Lena

    2012-01-01

    Genetic influence for adult slow and/or inaccurate reading ability was studied from self-reported answers, using a dichotomous question on having difficulties in reading the Danish subtitles on foreign TV programs. The data from 33,424 twins were population based and were used for biometric analysis... in order to estimate the heritability of reading difficulties. The rate of reading difficulties was 6–9 percent, higher for males than females. Tetrachoric correlations were estimated under univariate saturated models, specified with appropriate constraints. Hierarchical χ2 tests showed that the fit... demonstrates substantial genetic, almost only additive, influence on reading difficulties. The environmental factors affecting reading difficulties were unique and unshared (E). Unshared environmental factors are those factors that are specific to an individual, thus contributing to differences between family...

  18. A conceptual sedimentological-geostatistical model of aquifer heterogeneity based on outcrop studies

    Energy Technology Data Exchange (ETDEWEB)

    Davis, J.M.

    1994-01-01

    Three outcrop studies were conducted in deposits of different depositional environments. At each site, permeability measurements were obtained with an air-minipermeameter developed as part of this study. In addition, the geological units were mapped with either surveying, photographs, or both. Geostatistical analysis of the permeability data was performed to estimate the characteristics of the probability distribution function and the spatial correlation structure. The information obtained from the geological mapping was then compared with the results of the geostatistical analysis for any relationships that may exist. The main field site was located in the Albuquerque Basin of central New Mexico at an outcrop of the Pliocene-Pleistocene Sierra Ladrones Formation. The second study was conducted on the walls of waste pits in alluvial fan deposits at the Nevada Test Site. The third study was conducted on an outcrop of an eolian deposit (Miocene) south of Socorro, New Mexico. The results of the three studies were then used to construct a conceptual model relating depositional environment to geostatistical models of heterogeneity. The model presented is largely qualitative but provides a basis for further hypothesis formulation and testing.

  19. Spatial Downscaling of TRMM Precipitation Using Geostatistics and Fine Scale Environmental Variables

    Directory of Open Access Journals (Sweden)

    No-Wook Park

    2013-01-01

    Full Text Available A geostatistical downscaling scheme is presented that can generate fine scale precipitation information from coarse scale Tropical Rainfall Measuring Mission (TRMM) data by incorporating auxiliary fine scale environmental variables. Within the geostatistical framework, the TRMM precipitation data are first decomposed into trend and residual components. Quantitative relationships between coarse scale TRMM data and environmental variables are then estimated via regression analysis and used to derive trend components at a fine scale. Next, the residual components, which are the differences between the trend components and the original TRMM data, are downscaled at a target fine scale via area-to-point kriging. The trend and residual components are finally added to generate fine scale precipitation estimates. Stochastic simulation is also applied to the residual components in order to generate multiple alternative realizations and to compute uncertainty measures. In an experiment using a digital elevation model (DEM) and normalized difference vegetation index (NDVI), the geostatistical downscaling scheme generated downscaling results that reflected detailed characteristics with better predictive performance, compared with downscaling without the environmental variables. Multiple realizations and uncertainty measures from simulation also provided useful information for interpretation and further environmental modeling.
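
    The trend/residual decomposition described above can be sketched as follows; plain inverse-distance weighting stands in for area-to-point kriging, which requires a dedicated geostatistics library, and all data are synthetic:

```python
# 1) Regress coarse precipitation on an environmental covariate to get the
#    trend; 2) interpolate the residuals to the fine location; 3) add both.
import numpy as np

rng = np.random.default_rng(9)
coarse_xy = rng.uniform(0, 100, size=(64, 2))        # coarse cell centers
elev = rng.uniform(0, 2000, 64)                       # covariate (DEM), synthetic
precip = 300 + 0.2 * elev + rng.normal(0, 40, 64)     # coarse TRMM-like values

beta = np.polyfit(elev, precip, 1)                    # trend: precip ~ elevation
residuals = precip - np.polyval(beta, elev)

def idw(xy, v, target):                               # IDW stand-in for kriging
    w = 1.0 / (np.sum((xy - target) ** 2, axis=1) + 1e-12)
    return np.sum(w * v) / np.sum(w)

fine_xy, fine_elev = np.array([12.5, 40.0]), 1350.0   # fine pixel + its DEM value
estimate = np.polyval(beta, fine_elev) + idw(coarse_xy, residuals, fine_xy)
print(round(estimate, 1), "mm (downscaled estimate)")
```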

  20. How to evaluate the risks of exceeding limits: geostatistical models and their application to air pollution

    International Nuclear Information System (INIS)

    Fouquet, Ch. de; Deraisme, J.; Bobbia, M.

    2007-01-01

    Geostatistics is increasingly applied to the study of environmental risks in a variety of sectors, especially in the fields of soil decontamination and the evaluation of the risks due to air pollution. Geostatistics offers a rigorous stochastic modeling approach that makes it possible to answer questions expressed in terms of uncertainty and risk. This article focuses on nonlinear geostatistical methods, based on the Gaussian random function model, whose essential properties are summarised. We use two examples to characterize situations where direct and thus rapid methods provide appropriate solutions and cases that inevitably require more laborious simulation techniques. Exposure of the population of the Rouen metropolitan area to the risk of NO2 pollution is assessed by simulations, but the surface area where the pollution exceeds the threshold limit can be easily estimated with nonlinear conditional expectation techniques. A second example is used to discuss the bias introduced by direct simulation, here of a percentile of daily SO2 concentration for one year in the city of Le Havre; an operational solution is proposed. (authors)

  1. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar

    2015-07-21

    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is daily time series of precipitation and temperature that are spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response together with the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool to be used for statistical downscaling to obtain local scale estimates of precipitation and temperature from General Circulation Models. This article is protected by copyright. All rights reserved.

  2. Multivariate analysis and geostatistics of the fertility of a humic rhodic hapludox under coffee cultivation

    Directory of Open Access Journals (Sweden)

    Samuel de Assis Silva

    2012-04-01

    Full Text Available The spatial variability of soil and plant properties exerts great influence on the yield of agricultural crops. This study analyzed the spatial variability of the fertility of a Humic Rhodic Hapludox under Arabica coffee, using principal component analysis, cluster analysis and geostatistics in combination. The experiment was carried out in an area under Coffea arabica L., variety Catucai 20/15 - 479. The soil was sampled at a depth of 0.20 m, at 50 points of a sampling grid. The following chemical properties were determined: P, K+, Ca2+, Mg2+, Na+, S, Al3+, pH, H + Al, SB, t, T, V, m, OM, Na saturation index (SSI), remaining phosphorus (P-rem), and micronutrients (Zn, Fe, Mn, Cu and B). The data were analyzed with descriptive statistics, followed by principal component and cluster analyses. Geostatistics was used to check and quantify the degree of spatial dependence of properties, represented by principal components. The principal component analysis allowed a dimensional reduction of the problem, providing interpretable components with little information loss. Despite the characteristic information loss of principal component analysis, the combination of this technique with geostatistical analysis was efficient for the quantification and determination of the structure of spatial dependence of soil fertility. In general, the availability of soil mineral nutrients was low and the levels of acidity and exchangeable Al were high.
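
    A sketch of the combined workflow (standardize, reduce with PCA, cluster), with random stand-ins for the 50-point fertility grid; the retained component scores are what would then be passed to variogram analysis and kriging:

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 22))      # 50 grid points x 22 soil properties

        Xz = StandardScaler().fit_transform(X)    # z-score each property
        pca = PCA(n_components=3).fit(Xz)         # dimensional reduction
        scores = pca.transform(Xz)                # interpretable components
        print("explained variance:", pca.explained_variance_ratio_.round(2))

        # Cluster the sampling points on the component scores.
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
        print(labels)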

  3. Integration of GIS, Geostatistics, and 3-D Technology to Assess the Spatial Distribution of Soil Moisture

    Science.gov (United States)

    Betts, M.; Tsegaye, T.; Tadesse, W.; Coleman, T. L.; Fahsi, A.

    1998-01-01

    The spatial and temporal distribution of near surface soil moisture is of fundamental importance to many physical, biological, biogeochemical, and hydrological processes. However, knowledge of these space-time dynamics and the processes which control them remains unclear. The integration of geographic information systems (GIS) and geostatistics promises a simple mechanism to evaluate and display the spatial and temporal distribution of this vital hydrologic and physical variable. Therefore, this research demonstrates the use of geostatistics and GIS to predict and display soil moisture distribution under vegetated and non-vegetated plots. The research was conducted at the Winfred Thomas Agricultural Experiment Station (WTAES), Hazel Green, Alabama. Soil moisture measurements were made on a 10 by 10 m grid from tall fescue grass (GR), alfalfa (AA), bare rough (BR), and bare smooth (BS) plots. Results indicated that the variance associated with soil moisture was higher for vegetated plots than non-vegetated plots. The presence of vegetation in general contributed to the spatial variability of soil moisture. Integration of geostatistics and GIS can improve the productivity of farm lands and the precision of farming.

  4. Topsoil moisture mapping using geostatistical techniques under different Mediterranean climatic conditions.

    Science.gov (United States)

    Martínez-Murillo, J F; Hueso-González, P; Ruiz-Sinoga, J D

    2017-10-01

    Soil mapping has been an important factor in the development of Soil Science and in answering many different environmental questions. Geostatistical techniques, through kriging and co-kriging, have made it possible to improve the understanding of eco-geomorphologic variables, e.g., soil moisture. This study focuses on mapping topsoil moisture using geostatistical techniques under different Mediterranean climatic conditions (humid, dry and semiarid) in three small watersheds, considering topography and soil properties as key factors. A Digital Elevation Model (DEM) with a resolution of 1×1 m was derived from a topographical survey, and soils were sampled to analyze the properties controlling topsoil moisture, which was measured over four years. Afterwards, topography attributes were derived from the DEM, the soil properties were analyzed in the laboratory, and topsoil moisture was modeled for the entire watersheds applying three geostatistical techniques: i) ordinary kriging; ii) co-kriging considering topography attributes as co-variates; and iii) co-kriging considering topography attributes and gravel content as co-variates. The results indicated that topsoil moisture was mapped more accurately in the dry and semiarid watersheds when the co-kriging procedure was performed. The study is a contribution to improving the efficiency and accuracy of studies of the Mediterranean eco-geomorphologic system and soil hydrology in field conditions.
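
    For reference, ordinary kriging at a single prediction point reduces to one linear solve. A minimal sketch in the semivariogram formulation, assuming a spherical model has already been fitted to the moisture data (all coordinates and parameters illustrative):

        import numpy as np

        def spherical(h, nugget=0.01, sill=0.09, a=40.0):
            g = np.where(h < a,
                         nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3),
                         sill)
            return np.where(h == 0, 0.0, g)   # gamma(0) = 0

        obs_xy = np.array([[2., 3.], [15., 8.], [30., 25.], [12., 30.], [25., 5.]])
        obs_z = np.array([0.21, 0.19, 0.26, 0.24, 0.18])   # volumetric moisture
        x0 = np.array([18., 18.])                          # prediction location

        n = len(obs_z)
        H = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=2)
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = spherical(H)                 # gamma(x_i, x_j)
        A[:n, n] = A[n, :n] = 1.0                # unbiasedness constraint
        b = np.append(spherical(np.linalg.norm(obs_xy - x0, axis=1)), 1.0)

        sol = np.linalg.solve(A, b)
        weights, mu = sol[:n], sol[n]
        print(weights @ obs_z, weights @ b[:n] + mu)   # estimate, kriging variance

    Co-kriging extends the same system with cross-variograms between moisture and each co-variate (e.g., the topography attributes and gravel content used in this study).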

  5. Geostatistical and multivariate statistical analysis of heavily and manifoldly contaminated soil samples.

    Science.gov (United States)

    Schaefer, Kristin; Einax, Jürgen W; Simeonov, Vasil; Tsakovski, Stefan

    2010-04-01

    The surroundings of the former Kremikovtzi steel mill near Sofia (Bulgaria) are influenced by various emissions from the factory. In addition to steel and alloys, different products based on inorganic compounds are produced in different smelters. Soil in this region is multiply contaminated. We collected 65 soil samples and analyzed 15 elements by different methods of atomic spectroscopy for a survey of this field site. Here we present a novel hybrid approach for environmental risk assessment of polluted soil combining geostatistical methods and source apportionment modeling. We could distinguish areas with heavily and slightly polluted soils in the vicinity of the iron smelter by applying unsupervised pattern recognition methods. This result was supported by geostatistical methods such as semivariogram analysis and kriging. The spatial behaviors of the metals examined differ significantly: iron and lead account for the main pollutants of the iron smelter, whereas, e.g., arsenic shows a haphazard distribution. The application of factor analysis and source apportionment modeling on absolute principal component scores revealed novel information about the composition of the emissions from the different stacks. It is possible to estimate the impact of every element examined on the pollution according to its emission source. This investigation allows an objective assessment of the different spatial distributions of the elements examined in the soil of the Kremikovtzi region. The geostatistical analysis illustrates this distribution and is supported by multivariate statistical analysis revealing relations between the elements.

  6. Geostatistical integration and uncertainty in pollutant concentration surface under preferential sampling

    Directory of Open Access Journals (Sweden)

    Laura Grisotto

    2016-04-01

    Full Text Available In this paper the focus is on environmental statistics, with the aim of estimating the concentration surface and related uncertainty of an air pollutant. We used air quality data recorded by a network of monitoring stations within a Bayesian framework to overcome difficulties in accounting for prediction uncertainty and to integrate information provided by deterministic models based on emissions, meteorology and chemico-physical characteristics of the atmosphere. Several authors have proposed such integration, but all the proposed approaches rely on the representativeness and completeness of existing air pollution monitoring networks. We considered the situation in which the spatial process of interest and the sampling locations are not independent. This is known in the literature as the preferential sampling problem, which, if ignored in the analysis, can bias geostatistical inferences. We developed a Bayesian geostatistical model to account for preferential sampling with the main interest in statistical integration and uncertainty. We used PM10 data arising from the air quality network of the Environmental Protection Agency of Lombardy Region (Italy) and numerical outputs from the deterministic model. We specified an inhomogeneous Poisson process for the sampling location intensities and a shared spatial random component model for the dependence between the spatial location of monitors and the pollution surface. We found greater predicted standard deviation differences in areas not properly covered by the air quality network. In conclusion, in this context inferences on prediction uncertainty may be misleading when geostatistical modelling does not take into account preferential sampling.

  7. A conceptual sedimentological-geostatistical model of aquifer heterogeneity based on outcrop studies

    International Nuclear Information System (INIS)

    Davis, J.M.

    1994-01-01

    Three outcrop studies were conducted in deposits of different depositional environments. At each site, permeability measurements were obtained with an air-minipermeameter developed as part of this study. In addition, the geological units were mapped with either surveying, photographs, or both. Geostatistical analysis of the permeability data was performed to estimate the characteristics of the probability distribution function and the spatial correlation structure. The information obtained from the geological mapping was then compared with the results of the geostatistical analysis for any relationships that may exist. The main field site was located in the Albuquerque Basin of central New Mexico at an outcrop of the Pliocene-Pleistocene Sierra Ladrones Formation. The second study was conducted on the walls of waste pits in alluvial fan deposits at the Nevada Test Site. The third study was conducted on an outcrop of an eolian deposit (Miocene) south of Socorro, New Mexico. The results of the three studies were then used to construct a conceptual model relating depositional environment to geostatistical models of heterogeneity. The model presented is largely qualitative but provides a basis for further hypothesis formulation and testing.

  8. Protein loop modeling using a new hybrid energy function and its application to modeling in inaccurate structural environments.

    Directory of Open Access Journals (Sweden)

    Hahnbeom Park

    Full Text Available Protein loop modeling is a tool for predicting protein local structures of particular interest, providing opportunities for applications involving protein structure prediction and de novo protein design. Until recently, the majority of loop modeling methods have been developed and tested by reconstructing loops in frameworks of experimentally resolved structures. In many practical applications, however, the protein loops to be modeled are located in inaccurate structural environments. These include loops in model structures, low-resolution experimental structures, or experimental structures of different functional forms. Accordingly, discrepancies in the accuracy of the structural environment assumed in development of the method and that in practical applications present additional challenges to modern loop modeling methods. This study demonstrates a new strategy for employing a hybrid energy function combining physics-based and knowledge-based components to help tackle this challenge. The hybrid energy function is designed to combine the strengths of each energy component, simultaneously maintaining accurate loop structure prediction in a high-resolution framework structure and tolerating minor environmental errors in low-resolution structures. A loop modeling method based on global optimization of this new energy function is tested on loop targets situated in different levels of environmental errors, ranging from experimental structures to structures perturbed in backbone as well as side chains and template-based model structures. The new method performs comparably to force field-based approaches in loop reconstruction in crystal structures and better in loop prediction in inaccurate framework structures. This result suggests that higher-accuracy predictions would be possible for a broader range of applications. The web server for this method is available at http://galaxy.seoklab.org/loop with the PS2 option for the scoring function.

  9. Inaccurate weight perception is associated with extreme weight-management practices in U.S. high school students.

    Science.gov (United States)

    Ibrahim, Chadi; El-Kamary, Samer S; Bailey, Jason; St George, Diane M

    2014-03-01

    The objective of the present study was to examine whether adolescents' weight perception accuracy (WPA) was associated with extreme weight-management practices (EWPs) in differing body mass index (BMI) categories. WPA, overassessment, and underassessment were determined by comparing self-reported BMI and weight perception among U.S. high school students in the 2009 National Youth Risk Behavior Survey. BMI was classified as underweight, healthy weight, overweight, or obese; assessment was considered inaccurate when BMI category and weight perception were discordant. Overassessors thought they were heavier than they were (among underweight/healthy groups); underassessors thought they were lighter than they were (among healthy/overweight/obese groups). EWPs included ≥1 of fasting, use of diet pills, or purging/laxative use. Logit models were fitted for different BMI sex strata. In the final sample of 14,722 US high school students with complete data, 20.2%, 85.7%, 5.8%, and 80.9% of those who were underweight, healthy weight, overweight, and obese, respectively, inaccurately assessed their weight. In turn, 11.4% and 17.6% of accurate and inaccurate assessors engaged in EWPs, respectively. After adjustment, underweight girls who overassessed their weight had 12.6 times higher odds of EWPs (95% confidence interval 3.4-46.6). Moreover, there were elevated odds of EWPs among healthy weight students who overassessed their weight. Overassessing healthy weight students and underweight girls had higher odds of ≥1 EWPs, likely related to an unhealthy desire to lose weight. The present study demonstrates a need to further educate clinicians on WPA and its relation to EWPs, even among those of healthy weight who may be seen as not at risk.

  10. Transsexual parenthood and new role assumptions.

    Science.gov (United States)

    Faccio, Elena; Bordin, Elena; Cipolletta, Sabrina

    2013-01-01

    This study explores the parental role of transsexuals and compares this to common assumptions about transsexuality and parentage. We conducted semi-structured interviews with 14 male-to-female transsexuals and 14 men, half parents and half non-parents, in order to explore four thematic areas: self-representation of the parental role, the description of the transsexual as a parent, the common representations of transsexuals as a parent, and male and female parental stereotypes. We conducted thematic and lexical analyses of the interviews using Taltac2 software. The results indicate that social representations of transsexuality and parenthood have a strong influence on processes of self-representation. Transsexual parents accurately understood conventional male and female parental prototypes and saw themselves as competent, responsible parents. They constructed their role based on affection toward the child rather than on the complementary role of their wives. In contrast, men's descriptions of transsexual parental roles were simpler and the descriptions of their parental role coincided with their personal experiences. These results suggest that the transsexual journey toward parenthood involves a high degree of re-adjustment, because their parental role does not coincide with a conventional one.

  11. Towards New Probabilistic Assumptions in Business Intelligence

    Directory of Open Access Journals (Sweden)

    Schumann Andrew

    2015-01-01

    Full Text Available One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For describing the physical universe, additive measures such as mass, force, energy and temperature are used. Economics and conventional business intelligence try to continue this empiricist tradition, and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, many important variables of economic systems cannot be observable and additive in principle. These variables can be called symbolic values or symbolic meanings and studied within symbolic interactionism, the theory developed by George Herbert Mead and Herbert Blumer. In statistical and econometric tools of business intelligence we accept only phenomena with causal connections measured by additive measures. In this paper we show that in the social world we deal with symbolic interactions which can be studied by non-additive labels (symbolic meanings or symbolic values). For accepting the variety of such phenomena we should avoid additivity of basic labels and construct a new probabilistic method in business intelligence based on non-Archimedean probabilities.

  12. Assumptions for the Annual Energy Outlook 1993

    International Nuclear Information System (INIS)

    1993-01-01

    This report is an auxiliary document to the Annual Energy Outlook 1993 (AEO) (DOE/EIA-0383(93)). It presents a detailed discussion of the assumptions underlying the forecasts in the AEO. The energy modeling system is an economic equilibrium system, with component demand modules representing end-use energy consumption by major end-use sector. Another set of modules represents petroleum, natural gas, coal, and electricity supply patterns and pricing. A separate module generates annual forecasts of important macroeconomic and industrial output variables. Interactions among these components of energy markets generate projections of prices and quantities for which energy supply equals energy demand. This equilibrium modeling system is referred to as the Intermediate Future Forecasting System (IFFS). The supply models in IFFS for oil, coal, natural gas, and electricity determine supply and price for each fuel depending upon consumption levels, while the demand models determine consumption depending upon end-use price. IFFS solves for market equilibrium for each fuel by balancing supply and demand to produce an energy balance in each forecast year.

  13. Benchmarking a geostatistical procedure for the homogenisation of annual precipitation series

    Science.gov (United States)

    Caineta, Júlio; Ribeiro, Sara; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina

    2014-05-01

    The European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), has brought to attention the importance of establishing reliable homogenisation methods for climate data. In order to achieve that, a benchmark data set, containing monthly and daily temperature and precipitation data, was created to be used as a comparison basis for the effectiveness of those methods. Several contributions were submitted and evaluated by a number of performance metrics, validating the results against realistic inhomogeneous data. HOME also led to the development of new homogenisation software packages, which included feedback and lessons learned during the project. Preliminary studies have suggested a geostatistical stochastic approach, which uses Direct Sequential Simulation (DSS), as a promising methodology for the homogenisation of precipitation data series. Based on the spatial and temporal correlation between the neighbouring stations, DSS calculates local probability density functions at a candidate station to detect inhomogeneities. The purpose of the current study is to test and compare this geostatistical approach with the methods previously presented in the HOME project, using surrogate precipitation series from the HOME benchmark data set. The benchmark data set contains monthly precipitation surrogate series, from which annual precipitation data series were derived. These annual precipitation series were subject to exploratory analysis and to a thorough variography study. The geostatistical approach was then applied to the data set, based on different scenarios for the spatial continuity. Implementing this procedure also promoted the development of a computer program that aims to assist in the homogenisation of climate data, while minimising user interaction. Finally, in order to compare the effectiveness of this methodology with the homogenisation methods submitted during the HOME project, the obtained results
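
    A simplified sketch of the detection idea: build a local distribution of expected values at the candidate station from its neighbours and flag years that fall outside a chosen probability interval. (The actual method derives the local probability density functions with Direct Sequential Simulation rather than from raw neighbour quantiles; all data here are synthetic.)

        import numpy as np

        def flag_inhomogeneities(candidate, neighbours, alpha=0.05):
            """candidate: (n_years,) series; neighbours: (n_stations, n_years).
            Flags years outside the local [alpha/2, 1 - alpha/2] interval."""
            lo = np.percentile(neighbours, 100 * alpha / 2, axis=0)
            hi = np.percentile(neighbours, 100 * (1 - alpha / 2), axis=0)
            return (candidate < lo) | (candidate > hi)

        rng = np.random.default_rng(2)
        neigh = rng.normal(800, 60, size=(8, 40))   # 8 stations, 40 years (mm)
        cand = rng.normal(800, 60, size=40)
        cand[25:] += 120                            # artificial break from year 25
        print(np.where(flag_inhomogeneities(cand, neigh))[0])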

  14. Geostatistical methods for radiological evaluation and risk analysis of contaminated premises

    International Nuclear Information System (INIS)

    Desnoyers, Y.; Jeannee, N.; Chiles, J.P.; Dubot, D.

    2009-01-01

    Full text: At the end of process equipment dismantling, the complete decontamination of nuclear facilities requires the radiological assessment of residual activity levels of building structures. As stated by the IAEA, 'Segregation and characterization of contaminated materials are the key elements of waste minimization'. From this point of view, the set-up of an appropriate evaluation methodology is of paramount importance. The radiological characterization of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical, functional and qualitative information. Then, a systematic (exhaustive or not) control of the emergent signal is performed by means of in situ measurement methods such as a surface control device combined with in situ gamma spectrometry. Besides, in order to assess the contamination depth, samples can be collected from boreholes at several locations within the premises and analyzed. Combined with historical information and emergent signal maps, such data improve and reinforce the preliminary waste zoning. In order to provide reliable estimates while avoiding supplementary investigation costs, there is therefore a crucial need for sampling optimization methods together with appropriate data processing techniques. The relevance of the geostatistical methodology relies on the presence of a spatial continuity for radiological contamination. In this case, geostatistics provides reliable methods for activity estimation, uncertainty quantification and risk analysis, which are essential decision-making tools for decommissioning and dismantling projects of nuclear installations. Besides, the ability of this geostatistical framework to provide answers to several key issues that generally occur during the clean-up preparation phase is discussed: How to optimise the investigation costs? How to deal with data quality issues? How to consistently take into account auxiliary information such as historical

  15. Philosophy of Technology Assumptions in Educational Technology Leadership

    Science.gov (United States)

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  16. 41 CFR 60-3.9 - No assumption of validity.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true No assumption of validity... assumption of validity. A. Unacceptable substitutes for evidence of validity. Under no circumstances will the... of its validity be accepted in lieu of evidence of validity. Specifically ruled out are: assumptions...

  17. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, R.S.; Alonso, D.; McKane, A.J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the

  18. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    International Nuclear Information System (INIS)

    Wang, S; Chao, C; Chang, J

    2014-01-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to the positioning error for the flatten filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10MV conventional and FFF beams, with careful alignment and with 1cm positioning error during calibration, respectively. Open fields of 37cmx37cm were delivered to gauge the impact of resultant calibration errors. The local calibration error was modeled as a detector independent multiplication factor, with which propagation error was estimated with positioning error from 1mm to 1cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on the beam type. Results: The 1cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. Particularly, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect

  19. In meta-analyses of proportion studies, funnel plots were found to be an inaccurate method of assessing publication bias.

    Science.gov (United States)

    Hunter, James P; Saratzis, Athanasios; Sutton, Alex J; Boucher, Rebecca H; Sayers, Robert D; Bown, Matthew J

    2014-08-01

    To assess the utility of funnel plots in assessing publication bias (PB) in meta-analyses of proportion studies. Meta-analysis simulation study and meta-analysis of published literature reporting peri-operative mortality after abdominal aortic aneurysm (AAA) repair. Data for the simulation study were stochastically generated. A literature search of Medline and Embase was performed to identify studies for inclusion in the published literature meta-analyses. The simulation study demonstrated that conventionally constructed funnel plots (log odds vs. 1/standard error [1/SE]) for extreme proportional outcomes were asymmetric despite no PB. Alternative funnel plots constructed using study size rather than 1/SE showed no asymmetry for extreme proportional outcomes. When used in meta-analyses of the mortality of AAA repair, these alternative funnel plots highlighted the possibility for conventional funnel plots to demonstrate asymmetry when there was no evidence of PB. Conventional funnel plots used to assess for potential PB in meta-analyses are inaccurate for meta-analyses of proportion studies with low proportion outcomes. Funnel plots of study size against log odds may be a more accurate way of assessing for PB in these studies.
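
    A small simulation in the spirit of the study: proportion studies generated with no publication bias still produce asymmetric conventional funnel coordinates, because for rare outcomes the standard error of the log odds depends on the estimate itself (all parameters illustrative):

        import numpy as np

        rng = np.random.default_rng(3)
        p_true = 0.03                            # extreme (low) true proportion
        n = rng.integers(30, 2000, size=200)     # study sizes, no selection
        events = rng.binomial(n, p_true)

        a, b = events + 0.5, n - events + 0.5    # continuity correction
        log_odds = np.log(a / b)
        se = np.sqrt(1 / a + 1 / b)

        # Conventional axes (log odds vs 1/SE) show a strong mechanical
        # dependence; plotting against study size weakens it substantially.
        print(np.corrcoef(log_odds, 1 / se)[0, 1])
        print(np.corrcoef(log_odds, n)[0, 1])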

  20. Conditioning geostatistical simulations of a heterogeneous paleo-fluvial bedrock aquifer using lithologs and pumping tests

    Science.gov (United States)

    Niazi, A.; Bentley, L. R.; Hayashi, M.

    2016-12-01

    Geostatistical simulations are used to construct heterogeneous aquifer models. Optimally, such simulations should be conditioned with both lithologic and hydraulic data. We introduce an approach to condition lithologic geostatistical simulations of a paleo-fluvial bedrock aquifer consisting of relatively high permeable sandstone channels embedded in relatively low permeable mudstone using hydraulic data. The hydraulic data consist of two-hour single well pumping tests extracted from the public water well database for a 250-km2 watershed in Alberta, Canada. First, lithologic models of the entire watershed are simulated and conditioned with hard lithological data using transition probability - Markov chain geostatistics (TPROGS). Then, a segment of the simulation around a pumping well is used to populate a flow model (FEFLOW) with either sand or mudstone. The values of the hydraulic conductivity and specific storage of sand and mudstone are then adjusted to minimize the difference between simulated and actual pumping test data using the parameter estimation program PEST. If the simulated pumping test data do not adequately match the measured data, the lithologic model is updated by locally deforming the lithology distribution using the probability perturbation method and the model parameters are again updated with PEST. This procedure is repeated until the simulated and measured data agree within a pre-determined tolerance. The procedure is repeated for each well that has pumping test data. The method creates a local groundwater model that honors both the lithologic model and pumping test data and provides estimates of hydraulic conductivity and specific storage. Eventually, the simulations will be integrated into a watershed-scale groundwater model.
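
    A runnable toy showing only the control flow of this conditioning loop: a 1D sand/mud lithology column per well, a schematic "pumping response" (effective conductivity), and a perturb-recalibrate-accept loop. TPROGS, FEFLOW, PEST and the probability perturbation method are replaced by crude stand-ins; nothing here reproduces the actual tools.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        n_wells, n_cells = 4, 20
        true_litho = rng.random((n_wells, n_cells)) < 0.4    # True = sand

        def response(litho, k_sand, k_mud):
            k = np.where(litho, k_sand, k_mud)               # facies -> K
            return n_cells / (1.0 / k).sum(axis=-1)          # harmonic mean

        observed = response(true_litho, 1e-4, 1e-7)          # "pumping tests"

        def calibrate(litho):
            """'PEST' stand-in: fit the two shared facies conductivities."""
            def misfit(logk):
                r = response(litho, 10.0 ** logk[0], 10.0 ** logk[1])
                return float(np.sum((np.log(r) - np.log(observed)) ** 2))
            fit = minimize(misfit, x0=[-4.0, -7.0], method="Nelder-Mead")
            return fit.fun, fit.x

        litho = rng.random((n_wells, n_cells)) < 0.4         # initial realization
        best, params = calibrate(litho)
        for it in range(300):
            if best < 1e-4:
                break                                # data honored: conditioned
            w, c = rng.integers(n_wells), rng.integers(n_cells)
            litho[w, c] ^= True                      # "perturb" the lithology
            trial, trial_params = calibrate(litho)
            if trial < best:
                best, params = trial, trial_params   # keep the perturbation
            else:
                litho[w, c] ^= True                  # revert
        print(it, best, 10.0 ** params)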

  1. Conditioning geostatistical simulations of a bedrock fluvial aquifer using single well pumping tests

    Science.gov (United States)

    Niazi, A.; Bentley, L. R.; Hayashi, M.

    2015-12-01

    Geostatistical simulation is a powerful tool to explore the uncertainty associated with heterogeneity in groundwater and reservoir studies. Nonetheless, conditioning simulations merely with lithological information does not utilize all of the available information and so some workers additionally condition simulations with flow data. In this study, we introduce an approach to condition geostatistical simulations of the Paskapoo Formation, which is a paleo-fluvial system consisting of sandstone channels embedded in mudstone. The conditioning data consist of two-hour single well pumping tests extracted from the public water well database in Alberta, Canada. In this approach, lithologic models of an entire watershed are simulated and conditioned with hard lithological data using transition probability geostatistics (TPROGS). Then, a segment of the simulation around a pumping well was used to populate a flow model (FEFLOW) with either sand or mudstone. The values of the hydraulic conductivity and specific storage of sand and mudstone were then adjusted to minimize the difference between simulated and actual pumping test data using the parameter estimation program PEST. If the simulated data do not adequately match the measured data, the lithologic model is updated by locally deforming the lithology distribution using the probability perturbation method (PPM) and the model parameters are again updated with PEST. This procedure is repeated until the simulated and measured data agree within a pre-determined tolerance. The procedure is repeated for each pumping well that has pumping test data. The method constrains the lithological simulations and provides estimates of hydraulic conductivity and specific storage that are consistent with the pumping test data. Eventually, the simulations will be combined in watershed scale groundwater models.

  2. Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools

    Science.gov (United States)

    Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.

    2017-12-01

    The spatial variability evaluation of the water table of an aquifer provides useful information for water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram model is very important for method performance. This work compares three different criteria for assessing the theoretical variogram that best fits the experimental one: the Least Squares Sum method, the Akaike Information Criterion and Cressie's Indicator. Moreover, different distance metrics such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis are applied to calculate the distance between the observation and the prediction points, which affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the Power-law variogram model and the Manhattan distance metric within ordinary kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at the local scale. Finally, maps of hydraulic head spatial variability and of prediction uncertainty are constructed for the area with the two different approaches, comparing their advantages and drawbacks.
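
    A sketch of the model-comparison step, assuming an experimental semivariogram (lags, gamma) has already been computed from the hydraulic head data; two candidate models are fitted by least squares and compared with a simple AIC (all numbers illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        lags = np.array([0.5, 1, 2, 3, 4, 5, 6, 8, 10])
        gamma = np.array([0.2, 0.5, 1.1, 1.9, 2.6, 3.3, 3.9, 5.0, 6.2])

        def power_model(h, c, alpha):
            return c * h ** alpha                       # power-law variogram

        def gaussian_model(h, sill, a):
            return sill * (1 - np.exp(-(h / a) ** 2))   # Gaussian variogram

        def aic(res, n_params):                         # least-squares AIC
            return len(res) * np.log(np.mean(res ** 2)) + 2 * n_params

        for name, f, p0 in [("power", power_model, (1.0, 1.0)),
                            ("gaussian", gaussian_model, (7.0, 5.0))]:
            popt, _ = curve_fit(f, lags, gamma, p0=p0, maxfev=10000)
            res = gamma - f(lags, *popt)
            print(name, popt.round(2),
                  "SSE:", round(float(np.sum(res ** 2)), 3),
                  "AIC:", round(aic(res, len(popt)), 2))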

  3. Evaluation of geostatistical parameters based on well tests; Estimation de parametres geostatistiques a partir de tests de puits

    Energy Technology Data Exchange (ETDEWEB)

    Gauthier, Y.

    1997-10-20

    Geostatistical tools are increasingly used to model permeability fields in subsurface reservoirs, which are treated as realizations of a random function depending on several geostatistical parameters such as variance and correlation length. The first part of the thesis is devoted to the study of the relations existing between the transient well pressure (the well test) and the stochastic permeability field, using the apparent permeability concept. The well test performs a moving permeability average over larger and larger volumes with increasing time. In the second part, the geostatistical parameters are evaluated using well test data; a Bayesian framework is used and parameters are estimated using the maximum likelihood principle by maximizing the probability density function of the well test data with respect to these parameters. This method, which relies on fast evaluation of the well test, provides an estimate of the correlation length and the variance over different realizations of a two-dimensional permeability field
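
    A sketch of the maximum likelihood principle on synthetic data: estimate the variance and correlation length of a Gaussian field with exponential covariance by maximizing the log-likelihood (a stand-in for the well-test likelihood described above):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.linalg import cho_factor, cho_solve

        rng = np.random.default_rng(5)
        x = rng.uniform(0, 50, size=(60, 2))                 # observation points
        d = np.linalg.norm(x[:, None] - x[None, :], axis=2)  # distance matrix

        def cov(d, var, ell):
            return var * np.exp(-d / ell)                    # exponential model

        z = rng.multivariate_normal(np.zeros(60), cov(d, 2.0, 10.0))

        def neg_log_like(theta):
            var, ell = np.exp(theta)                         # enforce positivity
            C = cov(d, var, ell) + 1e-8 * np.eye(60)
            L = cho_factor(C)
            quad = z @ cho_solve(L, z)
            logdet = 2 * np.sum(np.log(np.diag(L[0])))
            return 0.5 * (quad + logdet)

        res = minimize(neg_log_like, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
        print(np.exp(res.x))           # estimated (variance, correlation length)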

  4. Use of stratigraphic, petrographic, hydrogeologic and geochemical information for hydrogeologic modelling based on geostatistical simulation

    International Nuclear Information System (INIS)

    Rohlig, K.J.; Fischer, H.; Poltl, B.

    2004-01-01

    This paper describes the stepwise utilization of geologic information from various sources for the construction of hydrogeological models of a sedimentary site by means of geostatistical simulation. It presents a practical application of aquifer characterisation by firstly simulating hydrogeological units and then the hydrogeological parameters. Due to the availability of a large amount of hydrogeological, geophysical and other data and information, the Gorleben site (Northern Germany) has been used for a case study in order to demonstrate the approach. The study, which has not yet been completed, tries to incorporate as much as possible of the available information and to characterise the remaining uncertainties. (author)

  5. Geostatistical analysis of soil properties at field scale using standardized data

    Science.gov (United States)

    Millan, H.; Tarquis, A. M.; Pérez, L. D.; Matos, J.; González-Posada, M.

    2012-04-01

    Identifying areas with physical degradation is a crucial step in ameliorating the effects of soil erosion. The quantification and interpretation of spatial variability is a key issue for site-specific soil management. Geostatistics has been the main methodological tool for implementing precision agriculture using field data collected at different spatial resolutions. Even though many works have made significant contributions to the body of knowledge on spatial statistics and its applications, some other key points need to be addressed for conducting precise comparisons between soil properties using geostatistical parameters. The objectives of the present work were (i) to quantify the spatial structure of different physical properties collected from a Vertisol, (ii) to search for potential correlations between different spatial patterns and (iii) to identify relevant components through multivariate spatial analysis. The study was conducted on a Vertisol (Typic Hapludert) dedicated to sugarcane (Saccharum officinarum L.) production during the last sixty years. We used six soil properties collected from a square grid (225 points): penetrometer resistance (PR), total porosity, fragmentation dimension (Df), vertical electrical conductivity (ECv), horizontal electrical conductivity (ECh) and soil water content (WC). All the original data sets were z-transformed before geostatistical analysis. Three different types of semivariogram models were necessary for fitting the individual experimental semivariograms. This suggests the different natures of the spatial variability patterns. Soil water content rendered the largest nugget effect (C0 = 0.933) while soil total porosity showed the largest range of spatial correlation (A = 43.92 m). The bivariate geostatistical analysis also rendered significant cross-semivariance between different paired soil properties. However, four different semivariogram models were required in that case. This indicates an underlying co
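
    A sketch of the first two steps, with random stand-ins for the 225-point grid: z-transform one property, then compute its experimental semivariogram with the classical Matheron estimator (from which a model would be fitted):

        import numpy as np

        rng = np.random.default_rng(6)
        xy = rng.uniform(0, 150, size=(225, 2))      # 225-point sampling grid
        values = rng.normal(30, 5, size=225)         # e.g. penetrometer resistance

        z = (values - values.mean()) / values.std()  # z-transform first

        def semivariogram(xy, z, bins):
            d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
            sq = 0.5 * (z[:, None] - z[None, :]) ** 2
            iu = np.triu_indices(len(z), k=1)        # each pair once
            d, sq = d[iu], sq[iu]
            idx = np.digitize(d, bins)
            gam = [sq[idx == i].mean() for i in range(1, len(bins))]
            return 0.5 * (bins[1:] + bins[:-1]), np.array(gam)

        h, g = semivariogram(xy, z, bins=np.linspace(0, 60, 13))
        print(np.c_[h, g].round(3))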

  6. Geostatistical interpolation of field data in three dimensions to assess nitrate leaching to groundwater

    Science.gov (United States)

    Onsoy, S. Y.; Harter, T.; Ginn, T.; Hopmans, J. W.; Horwath, W.

    2003-04-01

    Groundwater deterioration and associated environmental problems induced by nitrate applications in agricultural areas are a growing concern worldwide. Estimation of the downward nitrate flux to groundwater is a major concern due to nitrate's extensive use, high mobility and persistence in the environment. The overall goal of our research is to better understand the role of the deep alluvial unsaturated zone in controlling the long term impact of California Central Valley agricultural practices on groundwater quality. We describe the quantitative analysis of the subsurface N budget in a 16 m thick vadose zone utilizing the data obtained from the 12-year N fertilizer experiment (1982-1995) conducted on an alluvial fan of the Kings River in the Central Valley of California. Three alternative N management practices with annual fertilizer rates of 0, 100 and 325 lbs N/ac are compared by high-resolution sampling of the vadose zone 12 years after initiation of the applications. Here we report on the spatial statistics of the collected data on water content, nitrate, etc. The results from directional experimental semivariograms are rather striking. Nitrate data, while widely variable, have a significant spatial continuity in the vertical direction, while the sill is identical in both the horizontal and vertical directions. In the case of the soil moisture data, however, the sill in the horizontal direction is smaller while the range is significantly longer than in the vertical direction, indicating that moisture flux is predominantly vertical despite the strong stratigraphic heterogeneity. The results from the variogram analysis are incorporated in the kriging interpolation to analyze the effects of the spatial structure of the data on the estimation of N mass. We employ two methods to estimate the risk of nitrate loss from the root zone: mass balance of the N fluxes in the root zone and deep vadose zone N mass assessment via geostatistical analysis. From both methods we obtain excess N available

  7. Geostatistical radar-raingauge combination with nonparametric correlograms: methodological considerations and application in Switzerland

    Directory of Open Access Journals (Sweden)

    R. Schiemann

    2011-05-01

    Full Text Available Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications.

    Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse real-time network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages.

    Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases
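
    A sketch of the nonparametric correlogram estimation from a spatially complete field, via the FFT-based sample autocorrelation; combining it with the sample variance yields the semivariogram estimate mentioned above (the synthetic field stands in for a radar composite):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(7)
        # Smooth white noise to mimic a spatially correlated radar field.
        field = gaussian_filter(rng.normal(size=(128, 128)), sigma=4)

        f = field - field.mean()
        F = np.fft.fft2(f)
        acov = np.real(np.fft.ifft2(F * np.conj(F))) / f.size  # circular autocov.
        correlogram = acov / acov[0, 0]                        # rho(0) = 1

        gamma = f.var() * (1 - correlogram)   # semivariogram from correlogram
        print(correlogram[0, :6].round(3))    # correlation at x-axis lags 0..5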

  8. Geostatistical analysis of the flood risk perception queries in the village of Navaluenga (Central Spain)

    Science.gov (United States)

    Guardiola-Albert, Carolina; Díez-Herrero, Andrés; Amérigo, María; García, Juan Antonio; María Bodoque, José; Fernández-Naranjo, Nuria

    2017-04-01

    Flash floods cause high average mortality because they are usually unexpected events which evolve rapidly and affect relatively small areas. The short time available for minimizing risks requires preparedness and response actions to be put into practice. It is therefore necessary to develop emergency response plans to evacuate and rescue people in the context of a flash-flood hazard. In this framework, risk management has to integrate the social dimension of flash-flooding and its spatial distribution by understanding the characteristics of local communities in order to enhance community resilience during a flash-flood. In this regard, the flash-flood social risk perception of the village of Navaluenga (Central Spain) has recently been assessed, as well as the level of awareness of civil protection and emergency management strategies (Bodoque et al., 2016). This was done by interviewing 254 adults, representing roughly 12% of the population census. The present study goes further in the analysis of the resulting questionnaires, incorporating the spatial coordinates of respondents' homes in order to characterize the spatial distribution and possible geographical interpretation of flood risk perception. We apply geostatistical methods to analyze the spatial relations of social risk perception and level of awareness with distance to the rivers (Alberche and Chorrerón) or to the flood-prone areas (50-year, 100-year and 500-year flood plains). We want to discover spatial patterns, if any, using correlation functions (variograms). Geostatistical analysis results can help either confirm the expected pattern (i.e., less awareness farther from the rivers or for higher return periods of flooding) or reveal departures from it. It may also be possible to identify hot spots, cold spots, and spatial outliers. The interpretation of these spatial patterns can give valuable information to define strategies to improve the awareness regarding preparedness and

  9. Local conservation scores without a priori assumptions on neutral substitution rates.

    Science.gov (United States)

    Dingel, Janis; Hanus, Pavol; Leonardi, Niccolò; Hagenauer, Joachim; Zech, Jürgen; Mueller, Jakob C

    2008-04-11

    Comparative genomics aims to detect signals of evolutionary conservation as an indicator of functional constraint. Surprisingly, results of the ENCODE project revealed that about half of the experimentally verified functional elements found in non-coding DNA were classified as unconstrained by computational predictions. Following this observation, it has been hypothesized that this may be partly explained by biased estimates on neutral evolutionary rates used by existing sequence conservation metrics. All methods we are aware of rely on a comparison with the neutral rate, and conservation is estimated by measuring the deviation of a particular genomic region from this rate. Consequently, it is a reasonable assumption that inaccurate neutral rate estimates may lead to biased conservation and constraint estimates. We propose a conservation signal that is produced by local Maximum Likelihood estimation of evolutionary parameters using an optimized sliding window and present a Kullback-Leibler projection that allows multiple different estimated parameters to be transformed into a conservation measure. This conservation measure does not rely on assumptions about neutral evolutionary substitution rates, and few a priori assumptions on the properties of the conserved regions are imposed. We show the accuracy of our approach (KuLCons) on synthetic data and compare it to the scores generated by state-of-the-art methods (phastCons, GERP, SCONE) in an ENCODE region. We find that KuLCons is most often in agreement with the conservation/constraint signatures detected by GERP and SCONE, while qualitatively very different patterns from phastCons are observed. As opposed to standard methods, KuLCons can be extended to more complex evolutionary models, e.g., taking insertion and deletion events into account, and corresponding results show that scores obtained under this model can diverge significantly from scores using the simpler model. Our results suggest that discriminating among the
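
    A toy version of the projection idea, reduced to plain column frequencies (the real method estimates full evolutionary model parameters in the sliding window): score each window by the Kullback-Leibler divergence of its local distribution from the background distribution.

        import numpy as np

        def kl(p, q):
            return float(np.sum(p * np.log(p / q)))

        rng = np.random.default_rng(8)
        # 0 = conserved column, 1 = substituted column (toy alignment summary).
        columns = (rng.random(1000) < 0.3).astype(int)
        columns[400:430] = 0                      # artificially conserved island

        background = np.array([1 - columns.mean(), columns.mean()])
        window = 25
        scores = []
        for i in range(len(columns) - window):
            freq = columns[i:i + window].mean()
            local = np.clip(np.array([1 - freq, freq]), 1e-6, None)
            scores.append(kl(local, background))
        print(int(np.argmax(scores)))             # should fall in the island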

  10. Local conservation scores without a priori assumptions on neutral substitution rates

    Directory of Open Access Journals (Sweden)

    Hagenauer Joachim

    2008-04-01

    Full Text Available Abstract Background Comparative genomics aims to detect signals of evolutionary conservation as an indicator of functional constraint. Surprisingly, results of the ENCODE project revealed that about half of the experimentally verified functional elements found in non-coding DNA were classified as unconstrained by computational predictions. Following this observation, it has been hypothesized that this may be partly explained by biased estimates on neutral evolutionary rates used by existing sequence conservation metrics. All methods we are aware of rely on a comparison with the neutral rate, and conservation is estimated by measuring the deviation of a particular genomic region from this rate. Consequently, it is a reasonable assumption that inaccurate neutral rate estimates may lead to biased conservation and constraint estimates. Results We propose a conservation signal that is produced by local Maximum Likelihood estimation of evolutionary parameters using an optimized sliding window and present a Kullback-Leibler projection that allows multiple different estimated parameters to be transformed into a conservation measure. This conservation measure does not rely on assumptions about neutral evolutionary substitution rates, and few a priori assumptions on the properties of the conserved regions are imposed. We show the accuracy of our approach (KuLCons) on synthetic data and compare it to the scores generated by state-of-the-art methods (phastCons, GERP, SCONE) in an ENCODE region. We find that KuLCons is most often in agreement with the conservation/constraint signatures detected by GERP and SCONE, while qualitatively very different patterns from phastCons are observed. As opposed to standard methods, KuLCons can be extended to more complex evolutionary models, e.g., taking insertion and deletion events into account, and corresponding results show that scores obtained under this model can diverge significantly from scores using the simpler model

  11. Accounting for regional background and population size in the detection of spatial clusters and outliers using geostatistical filtering and spatial neutral models: the case of lung cancer in Long Island, New York

    Directory of Open Access Journals (Sweden)

    Goovaerts Pierre

    2004-07-01

    Full Text Available Abstract Background Complete Spatial Randomness (CSR) is the null hypothesis employed by many statistical tests for spatial pattern, such as local cluster or boundary analysis. CSR is, however, not a relevant null hypothesis for highly complex and organized systems such as those encountered in the environmental and health sciences, in which underlying spatial pattern is present. This paper presents a geostatistical approach to filter the noise caused by spatially varying population size and to generate spatially correlated neutral models that account for regional background obtained by geostatistical smoothing of observed mortality rates. These neutral models were used in conjunction with the local Moran statistics to identify spatial clusters and outliers in the geographical distribution of male and female lung cancer in Nassau, Queens, and Suffolk counties, New York, USA. Results We developed a typology of neutral models that progressively relaxes the assumptions of null hypotheses, allowing for the presence of spatial autocorrelation, non-uniform risk, and incorporation of spatially heterogeneous population sizes. Incorporation of spatial autocorrelation led to fewer significant ZIP codes than found in previous studies, confirming earlier claims that CSR can lead to over-identification of the number of significant spatial clusters or outliers. Accounting for population size through geostatistical filtering increased the size of clusters while removing most of the spatial outliers. Integration of regional background into the neutral models yielded substantially different spatial clusters and outliers, leading to the identification of ZIP codes where SMR values significantly depart from their regional background. Conclusion The approach presented in this paper enables researchers to assess geographic relationships using appropriate null hypotheses that account for the background variation extant in real-world systems. In particular, this new
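
    A minimal local Moran's I with a Monte Carlo neutral model: instead of testing against complete spatial randomness, each null replicate is itself a spatially autocorrelated random field, in the spirit of the neutral models above (toroidal rook neighbours keep the sketch short):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(9)
        rates = gaussian_filter(rng.normal(size=(20, 20)), sigma=2)  # SMR-like

        def local_moran(x):
            z = (x - x.mean()) / x.std()
            neigh = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                     np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
            return z * neigh                      # I_i per cell

        obs = local_moran(rates)
        null = np.array([local_moran(gaussian_filter(rng.normal(size=(20, 20)),
                                                     sigma=2))
                         for _ in range(199)])
        p = (1 + (null >= obs).sum(axis=0)) / 200.0   # upper-tail p-values
        print((p < 0.05).sum(), "cells flagged as significant local clusters")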

  12. An Alternative Flight Software Trigger Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions Using Inaccurate or Scarce Information

    Science.gov (United States)

    Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.
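
    A sketch of the idea with scikit-learn: train a logistic-regression classifier offline on simulated entry states with degraded measurements, then threshold its probability in flight. Feature choices, noise levels and the deploy condition below are illustrative, not the EFT-1 values.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(10)
        n = 5000
        alt = rng.uniform(3000, 12000, n)        # true altitude, m
        vel = rng.uniform(100, 250, n)           # planet-relative velocity, m/s
        deploy_ok = (alt < 8000) & (vel < 180)   # "truth" deploy condition

        # Sensed quantities are noisy, possibly inaccurate versions of the truth.
        X = np.c_[alt + rng.normal(0, 600, n), vel + rng.normal(0, 10, n)]
        clf = LogisticRegression(max_iter=1000).fit(X, deploy_ok)

        sensed = np.array([[7500.0, 170.0]])     # one noisy in-flight state
        print(clf.predict_proba(sensed)[0, 1])   # P(deploy conditions met)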

  13. An Alternative Flight Software Paradigm: Applying Multivariate Logistic Regression to Sense Trigger Conditions using Inaccurate or Scarce Information

    Science.gov (United States)

    Smith, Kelly; Gay, Robert; Stachowiak, Susan

    2013-01-01

    In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.

  14. Evaluation of statistical and geostatistical models of digital soil properties mapping in tropical mountain regions

    Directory of Open Access Journals (Sweden)

    Waldir de Carvalho Junior

    2014-06-01

    Full Text Available Soil properties have an enormous impact on economic and environmental aspects of agricultural production. Quantitative relationships between soil properties and the factors that influence their variability are the basis of digital soil mapping. The predictive models of soil properties evaluated in this work are statistical (multiple linear regression, MLR) and geostatistical (ordinary kriging and co-kriging). The study was conducted in the municipality of Bom Jardim, RJ, using a soil database with 208 sampling points. Predictive models were evaluated for sand, silt and clay fractions, pH in water and organic carbon at six depths according to the specifications of the consortium of digital soil mapping at the global level (GlobalSoilMap). Continuous covariates and categorical predictors were used and their contributions to the model assessed. Only the environmental covariates elevation, aspect, stream power index (SPI), soil wetness index (SWI), normalized difference vegetation index (NDVI), and b3/b2 band ratio were significantly correlated with soil properties. The predictive models had a mean coefficient of determination of 0.21. The best results were obtained with the geostatistical predictive models, where the highest coefficient of determination (0.43) was associated with the sand fraction at 60 to 100 cm depth. The use of a sparse data set of soil properties for digital mapping can explain only part of the spatial variation of these properties. The results may be related to the sampling density and the quantity and quality of the environmental covariates and predictive models used.

  15. Characterization of groundwater quality using water evaluation indices, multivariate statistics and geostatistics in central Bangladesh

    Directory of Open Access Journals (Sweden)

    Md. Bodrud-Doza

    2016-04-01

    Full Text Available This study investigates the groundwater quality in the Faridpur district of central Bangladesh based on 60 preselected sample points. Water evaluation indices and a number of statistical approaches, such as multivariate statistics and geostatistics, are applied to characterize the water quality with respect to its suitability for drinking. The study reveals that the EC, TDS, Ca2+, total As and Fe values of the groundwater samples exceeded Bangladeshi and international standards. The groundwater quality index (GWQI) showed that about 47% of the samples belong to the good-quality class for drinking purposes. The heavy metal pollution index (HPI), degree of contamination (Cd) and heavy metal evaluation index (HEI) reveal that most of the samples belong to a low level of pollution; among these, Cd provides a better alternative than the other indices. Principal component analysis (PCA) suggests that groundwater quality is mainly related to geogenic (rock-water interaction) and anthropogenic (agrogenic and domestic sewage) sources in the study area. The findings of cluster analysis (CA) and the correlation matrix (CM) are also consistent with the PCA results. The spatial distributions of the groundwater quality parameters are determined by geostatistical modeling; the exponential semivariogram model is validated as the best-fitted model for most of the index values. It is expected that the outcomes of the study will provide insights for decision makers taking proper measures for groundwater quality management in central Bangladesh.

  16. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    Science.gov (United States)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point geostatistics is a well-known general statistical framework by which complex geological phenomena can be modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, an optimization-based method is used, whose dual concept in texture synthesis is texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring the hard data points, minimization of the proposed cost function forces the simulation grid pixels to be as similar as possible to the training images. The algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. It therefore preserves pattern continuity very well in both continuous and categorical variables. Each of its realizations also shows a fuzzy result similar to the expected result of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel core of our algorithm enables it to use the GPU efficiently and reduce CPU time. A new validation method for MPS is also proposed in this paper.

  17. Bridges between multiple-point geostatistics and texture synthesis: Review and guidelines for future research

    Science.gov (United States)

    Mariethoz, Gregoire; Lefebvre, Sylvain

    2014-05-01

    Multiple-Point Simulations (MPS) is a family of geostatistical tools that has received a lot of attention in recent years for the characterization of spatial phenomena in geosciences. It relies on the definition of training images to represent a given type of spatial variability, or texture. We show that the algorithmic tools used are similar in many ways to techniques developed in computer graphics, where there is a need to generate large amounts of realistic textures for applications such as video games and animated movies. Similar to MPS, these texture synthesis methods use training images, or exemplars, to generate realistic-looking graphical textures. The domains of multiple-point geostatistics and example-based texture synthesis present similarities in their historic development and share similar concepts. These disciplines have, however, remained separate, and as a result significant algorithmic innovations in each discipline have not been universally adopted. Texture synthesis algorithms present drastically increased computational efficiency, pattern reproduction and user control. At the same time, MPS has developed ways to condition models on spatial data and to produce 3D stochastic realizations, which have not been thoroughly investigated in the field of texture synthesis. In this paper we review the possible links between these disciplines and show the potential and limitations of using concepts and approaches from texture synthesis in MPS. We also provide guidelines on how recent developments could benefit both fields of research, and what challenges remain open.

  18. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com [Centre of Preparatory and General Studies, TATI University College, 24000 Kemaman, Terengganu, Malaysia and Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Yusof, Fadhilah, E-mail: fadhilahy@utm.my [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Daud, Zalina Mohd, E-mail: zalina@ic.utm.my [UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia); Yusop, Zulkifli, E-mail: zulyusop@utm.my [Institute of Environmental and Water Resource Management (IPASA), Faculty of Civil Engineering, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor (Malaysia); Kasno, Mohammad Afif, E-mail: mafifkasno@gmail.com [Malaysia - Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, UTM KL, 54100 Kuala Lumpur (Malaysia)

    2015-02-03

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without a clear scientific rationale. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November-February) of 1975 until 2008. This study used a combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide a minimum value of estimated variance, demonstrating that the combination of the variance-reduction method and simulated annealing is successful in developing the new optimum rain gauge network.
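
    The variance-reduction idea can be sketched as follows: simulated annealing perturbs gauge locations and accepts moves that lower the mean kriging variance over a prediction grid. This assumes a known exponential covariance model; the domain, covariance parameters, and cooling schedule are illustrative, not the values used in the study.

        import numpy as np

        rng = np.random.default_rng(2)
        sill, corr_range = 1.0, 30.0                     # assumed covariance parameters
        cov = lambda h: sill * np.exp(-h / corr_range)   # exponential covariance

        grid = np.array([(x, y) for x in range(0, 100, 5)
                                for y in range(0, 100, 5)], float)

        def mean_kriging_variance(stations):
            d_ss = np.linalg.norm(stations[:, None] - stations[None], axis=-1)
            d_gs = np.linalg.norm(grid[:, None] - stations[None], axis=-1)
            C = cov(d_ss) + 1e-9 * np.eye(len(stations))
            c = cov(d_gs)                                # (n_grid, n_stations)
            w = np.linalg.solve(C, c.T)                  # simple-kriging weights
            return float(np.mean(sill - np.sum(c * w.T, axis=1)))

        stations = rng.uniform(0, 100, size=(10, 2))     # initial network
        energy, T = mean_kriging_variance(stations), 1.0
        for _ in range(2000):
            cand = np.clip(stations.copy(), 0, 100)
            cand[rng.integers(10)] += rng.normal(scale=5.0, size=2)
            v = mean_kriging_variance(cand)
            # Metropolis acceptance: always take improvements, sometimes worse moves
            if v < energy or rng.random() < np.exp((energy - v) / T):
                stations, energy = cand, v
            T *= 0.995                                   # cooling schedule
        print("mean kriging variance:", energy)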

  19. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    International Nuclear Information System (INIS)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-01-01

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without a clear scientific rationale. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November-February) of 1975 until 2008. This study used a combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide a minimum value of estimated variance, demonstrating that the combination of the variance-reduction method and simulated annealing is successful in developing the new optimum rain gauge network.

  20. Demonstration of a geostatistical approach to physically consistent downscaling of climate modeling simulations

    KAUST Repository

    Jha, Sanjeev Kumar

    2013-01-01

    A downscaling approach based on multiple-point geostatistics (MPS) is presented. The key concept underlying MPS is to sample spatial patterns from within training images, which can then be used in characterizing the relationship between different variables across multiple scales. The approach is used here to downscale climate variables including skin surface temperature (TSK), soil moisture (SMOIS), and latent heat flux (LH). The performance of the approach is assessed by applying it to data derived from a regional climate model of the Murray-Darling basin in southeast Australia, using model outputs at two spatial resolutions of 50 and 10 km. The data used in this study cover the period from 1985 to 2006, with 1985 to 2005 used for generating the training images that define the relationships of the variables across the different spatial scales. Subsequently, the spatial distributions for the variables in the year 2006 are determined at 10 km resolution using the 50 km resolution data as input. The MPS geostatistical downscaling approach reproduces the spatial distribution of TSK, SMOIS, and LH at 10 km resolution with the correct spatial patterns over different seasons, while providing uncertainty estimates through the use of multiple realizations. The technique has the potential not only to bridge issues of spatial resolution in regional and global climate model simulations, but also to assist in feature sharpening in remote sensing applications through image fusion, in filling gaps in spatial data, in evaluating downscaled variables against available remote sensing images, and in aggregating/disaggregating hydrological and groundwater variables for catchment studies.

  1. Geostatistics: a common link between medical geography, mathematical geology, and medical geology.

    Science.gov (United States)

    Goovaerts, P

    2014-08-01

    Since its development in the mining industry, geostatistics has emerged as the primary tool for spatial data analysis in various fields, ranging from earth and atmospheric sciences to agriculture, soil science, remote sensing, and more recently environmental exposure assessment. In the last few years, these tools have been tailored to the field of medical geography or spatial epidemiology, which is concerned with the study of spatial patterns of disease incidence and mortality and the identification of potential 'causes' of disease, such as environmental exposure, diet and unhealthy behaviours, economic or socio-demographic factors. On the other hand, medical geology is an emerging interdisciplinary scientific field studying the relationship between natural geological factors and their effects on human and animal health. This paper provides an introduction to the field of medical geology with an overview of geostatistical methods available for the analysis of geological and health data. Key concepts are illustrated using the mapping of groundwater arsenic concentration across eleven Michigan counties and the exploration of its relationship to the incidence of prostate cancer at the township level.

  2. Downscaling remotely sensed imagery using area-to-point cokriging and multiple-point geostatistical simulation

    Science.gov (United States)

    Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong

    2015-03-01

    A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the process of downscaling, a rich spatial correlation model in the form of a training image was incorporated to facilitate reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as a locally varying mean (LVM) (i.e., soft data) to deal with the change of support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration within different cross-scale case studies. The experiments show that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that the ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change of support issue.

  3. Redesigning rain gauges network in Johor using geostatistics and simulated annealing

    Science.gov (United States)

    Aziz, Mohd Khairul Bazli Mohd; Yusof, Fadhilah; Daud, Zalina Mohd; Yusop, Zulkifli; Kasno, Mohammad Afif

    2015-02-01

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into a new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of geostatistics method (variance-reduction method) and simulated annealing as the algorithm of optimization during the redesigned proses. The result shows that the new rain gauge location provides minimum value of estimated variance. This shows that the combination of geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  4. Mapping monthly rainfall data in Galicia (NW Spain) using inverse distances and geostatistical methods

    Directory of Open Access Journals (Sweden)

    P. Sande-Fouz

    2007-04-01

    Full Text Available In this paper, results from three interpolation techniques based on geostatistics (ordinary kriging, kriging with external drift and conditional simulation) and one deterministic method (inverse distances) for mapping total monthly rainfall are compared. The study data set comprised total monthly rainfall from 1998 to 2001, corresponding to a maximum of 121 meteorological stations irregularly distributed over the region of Galicia (NW Spain). Furthermore, a raster Geographic Information System (GIS) was used for spatial interpolation with a 500×500 m grid digital elevation model. The inverse distance technique was appropriate for a rapid estimation of rainfall at the studied scale. In order to apply the geostatistical interpolation techniques, a spatial dependence analysis was performed; rainfall spatial dependence was observed in 33 of the 48 months analysed, while the remaining rainfall data sets presented random behaviour. Different values of the semivariogram parameters caused the smoothing observed in the maps obtained by ordinary kriging. The kriging with external drift results were in accordance with former studies showing the influence of topography. Conditional simulation is considered to give more realistic results; however, this consideration must be confirmed with new data.
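
    For reference, the deterministic method compared above reduces to a few lines; the sketch below is a generic inverse-distance-weighting interpolator with placeholder station coordinates and rainfall values.

        import numpy as np

        def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-10):
            """Inverse distance weighting: weights proportional to 1 / distance**power."""
            d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
            w = 1.0 / (d + eps) ** power
            return (w @ z_obs) / w.sum(axis=1)

        stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        rain_mm = np.array([120.0, 95.0, 150.0, 110.0])
        targets = np.array([[5.0, 5.0], [1.0, 9.0]])
        print(idw(stations, rain_mm, targets))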

  5. Legal assumptions for private company claim for additional (supplementary payment

    Directory of Open Access Journals (Sweden)

    Šogorov Stevan

    2011-01-01

    Full Text Available The subject matter of analysis in this article is the legal assumptions which must be met in order to enable a private company to call for additional payment. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payment in the formation contract, or in a general resolution of the shareholders' meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders' meeting which creates individual obligations for additional payments. The third assumption is defined as distinctness regarding the sum of the payment and the due date. The sending of the claim by the relevant company body is set as the fourth legal assumption for realization of the company's right to claim additional payments from a member of the private company.

  6. Monte Carlo full-waveform inversion of crosshole GPR data using multiple-point geostatistical a priori information

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2012-01-01

    We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear inve...

  7. 40 CFR 264.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... FACILITIES Financial Requirements § 264.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure, post-closure care, or... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  8. 40 CFR 261.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... Excluded Hazardous Secondary Materials § 261.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure or liability... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  9. 40 CFR 265.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ..., STORAGE, AND DISPOSAL FACILITIES Financial Requirements § 265.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  10. 40 CFR 144.66 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) UNDERGROUND INJECTION CONTROL PROGRAM Financial Responsibility: Class I Hazardous Waste Injection Wells § 144.66 State assumption of responsibility. (a) If a State either assumes legal... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State assumption of responsibility...

  11. 40 CFR 267.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... STANDARDIZED PERMIT Financial Requirements § 267.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure care or liability... 40 Protection of Environment 26 2010-07-01 2010-07-01 false State assumption of responsibility...

  12. 40 CFR 761.2 - PCB concentration assumptions for use.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false PCB concentration assumptions for use..., AND USE PROHIBITIONS General § 761.2 PCB concentration assumptions for use. (a)(1) Any person may..., oil-filled cable, and rectifiers whose PCB concentration is not established contain PCBs at < 50 ppm...

  13. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the ...

  14. Basic assumptions in statistical analyses of data in biomedical ...

    African Journals Online (AJOL)

    If one or more assumptions are violated, an alternative procedure must be used to obtain valid results. This article aims at highlighting some basic assumptions in statistical analyses of data in biomedical sciences. Keywords: samples, independence, non-parametric, parametric, statistical analyses. Int. J. Biol. Chem. Sci. Vol.

  15. 29 CFR 1607.9 - No assumption of validity.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false No assumption of validity. 1607.9 Section 1607.9 Labor... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.9 No assumption of validity. A. Unacceptable substitutes for evidence of validity. Under no circumstances will the general reputation of a test or other...

  16. PFP issues/assumptions development and management planning guide

    International Nuclear Information System (INIS)

    SINCLAIR, J.C.

    1999-01-01

    The PFP Issues/Assumptions Development and Management Planning Guide presents the strategy and process used for the identification, allocation, and maintenance of an Issues/Assumptions Management List for the Plutonium Finishing Plant (PFP) integrated project baseline. Revisions to this document will include, as attachments, the most recent version of the Issues/Assumptions Management List, covering both open and current issues/assumptions (Appendix A) and closed or historical issues/assumptions (Appendix B). This document is intended to be a Project-owned management tool. As such, it will periodically require revisions resulting from improvements to the information, processes, and techniques described here. Revisions that suggest improved processes will only require PFP management approval.

  17. Hungarian contribution to the Global Soil Organic Carbon Map (GSOC17) using advanced machine learning algorithms and geostatistics

    Science.gov (United States)

    Szatmári, Gábor; Laborczi, Annamária; Takács, Katalin; Pásztor, László

    2017-04-01

    The knowledge about soil organic carbon (SOC) baselines and changes, and the detection of vulnerable hot spots for SOC losses and gains under climate change and changed land management, is still fairly limited. The Global Soil Partnership (GSP) has therefore been requested to develop a global SOC mapping campaign by 2017. The GSP concept builds on official national data sets; therefore, a bottom-up (country-driven) approach is pursued. The elaborated Hungarian methodology suits the general specifications of GSOC17 provided by the GSP. The input data for the GSOC17@HU mapping approach involve legacy soil databases as well as environmental covariates related to the main soil-forming factors, such as climate, organisms, relief and parent material. Nowadays, digital soil mapping (DSM) relies heavily on the assumption that the soil properties of interest can be modelled as the sum of a deterministic and a stochastic component, which can be treated and modelled separately. We also adopted this assumption in our methodology. In practice, multiple regression techniques are commonly used to model the deterministic part. However, these global (and usually linear) models commonly oversimplify the often complex and non-linear relationships, which has a crucial effect on the resulting soil maps. We therefore integrated machine learning algorithms (namely random forest and quantile regression forest) into the methodology, supposing them to be more suitable for the problem at hand. This approach has enabled us to model the GSOC17 soil properties in forms as complex and non-linear as the soil itself. Furthermore, it has enabled us to model and assess the uncertainty of the results, which is highly relevant in decision making. The applied methodology used a geostatistical approach to model the stochastic part of the spatial variability of the soil properties of interest. We created the GSOC17@HU map with 1 km grid resolution according to the GSP's specifications. The map contributes to the GSPs
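
    A hedged sketch of the deterministic component described above: a random forest regression of SOC on environmental covariates, with per-tree prediction spreads standing in (crudely) for the uncertainty estimates a quantile regression forest would supply. Covariates and SOC values are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)
        X = rng.normal(size=(500, 5))        # climate/relief/parent-material covariates
        soc = 2.0 + X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=500)

        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, soc)

        X_new = rng.normal(size=(3, 5))
        per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])  # (trees, points)
        print("mean prediction:", per_tree.mean(axis=0))
        print("5-95% spread:   ", np.percentile(per_tree, [5, 95], axis=0))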

  18. Local Geostatistical Models and Big Data in Hydrological and Ecological Applications

    Science.gov (United States)

    Hristopulos, Dionissios

    2015-04-01

    The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data can cover larger spatial domains at higher spatial and temporal resolution for longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths, and the strength of the correlations by means of coefficients. In the "plain vanilla" version, the parameter set involves scale and rigidity coefficients as well as a characteristic length; the latter, in connection with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference, which is based on leave-one-out cross validation. This property

  19. A geostatistical method applied to the geochemical study of the Chichinautzin Volcanic Field in Mexico

    Science.gov (United States)

    Robidoux, P.; Roberge, J.; Urbina Oviedo, C. A.

    2011-12-01

    The origin of magmatism and the role of the subducted Cocos Plate in the Chichinautzin volcanic field (CVF), Mexico are still subjects of debate. It has been established that mafic magmas of alkali type (subduction) and calc-alkali type (OIB) are produced in the CVF, and that the two groups cannot be related by simple fractional crystallization. Therefore, many geochemical studies have been done, and many models have been proposed. The main goal of the work presented here is to provide a new tool for the visualization and interpretation of geochemical data using geostatistics and geospatial analysis techniques. It contains a complete geodatabase built from referenced samples over the 2500 km2 area of the CVF and its neighbouring stratovolcanoes (Popocatepetl, Iztaccihuatl and Nevado de Toluca). From this database, maps of different geochemical markers were produced to visualize geochemical signatures geographically, to test the statistical distribution with a cartographic technique, and to highlight any spatial correlations. The distribution and regionalization of the geochemical signatures can be viewed in two-dimensional space using spatial analysis tools from a Geographic Information System (GIS). The model of spatial distribution is tested with the Linear Decrease (LD) and Inverse Distance Weight (IDW) interpolation techniques because they best represent the geostatistical characteristics of the geodatabase. We found that the Ba/Nb, Nb/Ta and Th/Nb ratios show a first-order tendency, which means visible spatial variation over a large-scale area. Monogenetic volcanoes in the center of the CVF have distinct values compared to those of the Popocatepetl-Iztaccihuatl polygenetic complex, which are spatially well defined. Inside the Valley of Mexico, a large number of monogenetic cones in the eastern portion of the CVF have ratios similar to the Iztaccihuatl and Popocatepetl complex. Other ratios, such as alkalis vs SiO2, V/Ti, La/Yb and Zr/Y, show different spatial tendencies. In that case, second

  20. Geostatistical Evaluation of Spring Water Quality in an Urbanizing Carbonate Aquifer

    Science.gov (United States)

    McGinty, A.; Welty, C.

    2003-04-01

    As part of an investigation of the impacts of urbanization on the hydrology and ecology of Valley Creek watershed near Philadelphia, Pennsylvania, we have analyzed the chemical composition of 110 springs to assess the relative influence of geology and anthropogenic activities on water quality. The 60 km^2 watershed is underlain by productive fractured rock aquifers composed of Cambrian and Ordovician carbonate rocks in the central valley and Cambrian crystalline and siliciclastic rocks (quartzite and phyllite) in the north and south hills that border the valley. All tributaries of the surface water system originate in the crystalline and siliciclastic hills. The watershed is covered by 17% impervious area and contains 6 major hazardous waste sites, one active quarrying operation and one golf course; 25% of the area utilizes septic systems for sewage disposal. We identified 172 springs, 110 of which had measurable flow rates ranging from 0.002 to 5 l/s. The mapped surficial geology appears as an anisotropic pattern, with long bands of rock formations paralleling the geographic orientation of the valley. Mapped development appears as a more isotropic pattern, characterized by isolated patches of land use that are not coincident with the evident geologic pattern. Superimposed upon these characteristics is a dense array of depressions and shallow sinkholes in the carbonate rocks, and a system of major faults at several formation contacts. We used indicator geostatistics to quantitatively characterize the spatial extent of the major geologic formations and patterns of land use. Maximum correlation scales for the rock types corresponded with strike direction and ranged from 1000 to 3000 m. Anisotropy ratios ranged from 2 to 4. Land-use correlation scales were generally smaller (200 to 500 m) with anisotropy ratios of around 1.2, i.e., nearly isotropic as predicted. Geostatistical analysis of spring water quality parameters related to geology (pH, specific conductance

  1. Assumptions and Policy Decisions for Vital Area Identification Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myungsu; Bae, Yeon-Kyoung; Lee, Youngseung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    U.S. Nuclear Regulatory Commission and IAEA guidance indicates that certain assumptions and policy questions should be addressed in a Vital Area Identification (VAI) process. Korea Hydro and Nuclear Power conducted a VAI based on the current Design Basis Threat and engineering judgement to identify APR1400 vital areas. Some of the assumptions were inherited from the Probabilistic Safety Assessment (PSA), as the sabotage logic model was based on the PSA logic tree and equipment location data. This paper illustrates some important assumptions and policy decisions for the APR1400 VAI analysis. Assumptions and policy decisions can be overlooked at the beginning stage of a VAI; however, they should be carefully reviewed and discussed among engineers, plant operators, and regulators. Through the APR1400 VAI process, some of the policy concerns and assumptions for the analysis were applied on the basis of document research and expert panel discussions. It was also found that more assumptions need to be defined in further studies for other types of nuclear power plants. One such assumption is mission time, which was inherited from the PSA.

  2. Application of Geostatistics to the resolution of structural problems in homogeneous rocky massifs

    International Nuclear Information System (INIS)

    Lucero Michaut, H.N.

    1985-01-01

    The nature and possibilities of application of intrinsic functions to structural research and to the delimitation of areas of influence in an ore deposit are briefly described. The main models to which the different distributions may be assimilated are presented: 'logarithmic' and 'linear' among those with no sill value, and 'spherical', 'exponential' and 'Gaussian' among those having a sill level, the latter allowing the establishment of a range value that separates the field of independent samples from that of non-independent ones. Thereafter, as an original contribution to applied geostatistics, the author postulates 1) the application of the 'fracturing rank' as a regionalized variable, after verifying its validity through strict probabilistic methodologies, and 2) a methodological extension of the conventional criterion of 'rock quality designation' to the analysis of the quality and degree of structural discontinuity in the rock surface. Finally, some examples of these applications are given. (M.E.L.) [es
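
    For concreteness, the three sill-bounded models named above can be written as follows, using a common textbook parameterization in which the practical range a is where the model reaches roughly 95% of the sill c (conventions vary between texts).

        import numpy as np

        def spherical(h, c, a):
            h = np.asarray(h, float)
            g = c * (1.5 * h / a - 0.5 * (h / a) ** 3)
            return np.where(h < a, g, c)   # flattens at the sill beyond the range

        def exponential(h, c, a):
            # reaches ~95% of the sill at h = a under this parameterization
            return c * (1.0 - np.exp(-3.0 * np.asarray(h, float) / a))

        def gaussian(h, c, a):
            return c * (1.0 - np.exp(-3.0 * (np.asarray(h, float) / a) ** 2))

        lags = np.linspace(0.0, 2.0, 5)
        print(spherical(lags, c=1.0, a=1.0))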

  3. Geostatistical exploration of dataset assessing the heavy metal contamination in Ewekoro limestone, Southwestern Nigeria

    Directory of Open Access Journals (Sweden)

    Kehinde D. Oyeyemi

    2017-10-01

    Full Text Available The dataset for this article contains a geostatistical analysis of heavy metal contamination in limestone samples collected from the Ewekoro Formation in the eastern Dahomey Basin, Ogun State, Nigeria. The samples were manually collected and analysed using a Microwave Plasma Atomic Absorption Spectrometer (MPAS). Analysis of the twenty samples showed different levels of heavy metal concentration. The nine analysed elements are Arsenic, Mercury, Cadmium, Cobalt, Chromium, Nickel, Lead, Vanadium and Zinc. Descriptive statistics were used to explore the heavy metal concentrations individually. Pearson, Kendall tau and Spearman rho correlation coefficients were used to establish the relationships among the elements, and the analysis of variance showed that there is a significant difference in the mean distribution of the heavy metal concentrations within and between the groups of the 20 samples analysed. The dataset can provide insights into the health implications of the contaminants, especially when the mean concentration levels of the heavy metals are compared with the recommended regulatory limits.
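
    The three correlation measures named above are one-liners with scipy; the sketch below applies them to a synthetic pair of element concentrations standing in for the measured data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        pb = rng.lognormal(mean=1.0, sigma=0.4, size=20)   # Pb, 20 samples
        zn = 2.0 * pb + rng.normal(scale=1.0, size=20)     # Zn, correlated with Pb

        r, p_r = stats.pearsonr(pb, zn)
        tau, p_tau = stats.kendalltau(pb, zn)
        rho, p_rho = stats.spearmanr(pb, zn)
        print(f"Pearson r={r:.2f}, Kendall tau={tau:.2f}, Spearman rho={rho:.2f}")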

  4. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    International Nuclear Information System (INIS)

    Li Yupeng; Deutsch, Clayton V.

    2012-01-01

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate of the multivariate probability, using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. The algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
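
    The core of IPF is easy to state in code: alternately rescale the joint probability table so that its row and column sums match the imposed marginals. The two-variable sketch below illustrates the idea; the article's implementation works on higher-dimensional multivariate probabilities with sparse matrices, which this sketch does not attempt.

        import numpy as np

        def ipf(p0, row_marginal, col_marginal, tol=1e-10, max_iter=1000):
            p = p0.copy()
            for _ in range(max_iter):
                p *= (row_marginal / p.sum(axis=1))[:, None]   # match row sums
                p *= (col_marginal / p.sum(axis=0))[None, :]   # match column sums
                if np.allclose(p.sum(axis=1), row_marginal, atol=tol):
                    break
            return p

        p_init = np.full((3, 3), 1.0 / 9.0)         # initial joint estimate
        rows = np.array([0.5, 0.3, 0.2])            # e.g. facies proportions
        cols = np.array([0.4, 0.4, 0.2])
        p = ipf(p_init, rows, cols)
        print(p, p.sum())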

  5. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    International Nuclear Information System (INIS)

    R.E. Sweeney

    2001-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update, incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), the 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance.

  6. Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document

    International Nuclear Information System (INIS)

    Sweeney, R.

    2000-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update, incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), the 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance.

  7. Geostatistical analysis of soil gas data in a high seismic intermontane basin: Fucino Plain, central Italy

    Science.gov (United States)

    Ciotoli, G.; Lombardi, S.; Annunziatellis, A.

    2007-05-01

    Numerous soil gas measurements of four gaseous species with very different geochemical behaviors were performed in the Fucino Basin, an area characterized by known and inferred structural discontinuities. A comprehensive statistical and geostatistical treatment of these data followed in order to provide insight into the spatial influence of tectonic discontinuities and geology on deep-seated gas migration toward the surface. The results yielded anomalies with different features, reflecting the different gas-bearing properties of the eastern seismogenic faults related to the 1915 earthquake (Mb = 7.0) and the hidden structural features occurring in the western side of the plain. In particular, this approach demonstrates that soil gas concentration (i.e., Rn and CO2) can identify the simpler normal faults of the eastern sector of the plain. In contrast, the more pervasive fracturing and faulting, as well as the occurrence of coarser deposits, on the western side of the area, make the location of faults less clear. The results show that gases migrate preferentially through zones of brittle deformation by advective processes, as suggested by the relatively high rate of migration needed to obtain anomalies of short-lived 222Rn in the soil pores. Furthermore, a geostatistical study of soil gas data was conducted to quantify the spatial domain of correlation and the gas-bearing properties of faults on the basis of shallow soil gas distribution (i.e., anisotropic behavior). The results provide a clear correlation between the shape and orientation of the anomalies and the different geometry of the faults recognized in the plain.

  8. Geostatistics – a tool applied to the distribution of Legionella pneumophila in a hospital water system

    Directory of Open Access Journals (Sweden)

    Pasqualina Laganà

    2015-12-01

    Full Text Available Introduction. Legionnaires' disease is normally acquired by inhalation of legionellae from a contaminated environmental source. Water systems of large buildings, such as hospitals, are often contaminated with legionellae and therefore represent a potential risk for the hospital population. The aim of this study was to evaluate the potential contamination by Legionella pneumophila (LP) in a large hospital in Italy through georeferenced statistical analysis, in order to assess the possible sources of dispersion and, consequently, the risk of exposure for both health care staff and patients. Materials and Method. The distribution of LP serogroups 1 and 2-14 was considered in the wards housed on two consecutive floors of the hospital building. On the basis of information provided by 53 bacteriological analyses, a 'random' grid of points was chosen and spatial geostatistics, or FAIk Kriging, was applied and compared with the results of classical statistical analysis. Results. Over 50% of the examined samples were positive for Legionella pneumophila. LP 1 was isolated in 69% of samples from the ground floor and in 60% of samples from the first floor; LP 2-14 in 36% of samples from the ground floor and 24% from the first. The iso-estimation maps clearly show the most contaminated pipe and the difference in the diffusion of the different L. pneumophila serogroups. Conclusion. Experimental work demonstrated that geostatistical methods applied to the microbiological analysis of water matrices allow better modeling of the phenomenon under study, greater potential for risk management and a greater choice of prevention and environmental remediation methods to be put in place, with respect to classical statistical analysis.

  9. Geostatistical three-dimensional modeling of oolite shoals, St. Louis Limestone, southwest Kansas

    Science.gov (United States)

    Qi, L.; Carr, T.R.; Goldstein, R.H.

    2007-01-01

    In the Hugoton embayment of southwestern Kansas, reservoirs composed of relatively thin oolitic deposits within the St. Louis Limestone produce oil. The geometry and distribution of the oolitic deposits control the heterogeneity of the reservoirs, resulting in exploration challenges and relatively low recovery. Geostatistical three-dimensional (3-D) models were constructed to quantify the geometry and spatial distribution of the oolitic reservoirs and the continuity of flow units within the Big Bow and Sand Arroyo Creek fields. Lithofacies in uncored wells were predicted from digital logs using a neural network. The tilting effect of the Laramide orogeny was removed to construct restored structural surfaces at the time of deposition. Well data and structural maps were integrated to build 3-D models of the oolitic reservoirs using stochastic simulations with geometry data. The three-dimensional models provide insights into the distribution and the external and internal geometry of the oolitic deposits, as well as the sedimentologic processes that generated the reservoir intervals. The structural highs and the general structural trend had a significant impact on the distribution and orientation of the oolitic complexes. The depositional pattern and connectivity analysis suggest an overall aggradation of shallow-marine deposits during pulses of relative sea level rise, followed by deepening near the top of the St. Louis Limestone. Cemented oolitic deposits were modeled as barriers and baffles and tend to concentrate at the edges of the oolitic complexes. The spatial distribution of porous oolitic deposits controls the internal geometry of rock properties. These integrated geostatistical modeling methods can be applicable to other complex carbonate or siliciclastic reservoirs in shallow-marine settings. Copyright © 2007. The American Association of Petroleum Geologists. All rights reserved.

  10. A geostatistical study of the uranium deposit at Kvanefjeld, the Ilimaussaq intrusion, South Greenland

    International Nuclear Information System (INIS)

    Lund Clausen, F.

    1982-05-01

    The uranium deposit at Kvanefjeld within the Ilimaussaq intrusion in South Greenland has been tested by diamond drilling, hole logging, chip sampling and field gamma-spectrometric surveys. Based on these different types of spatially distributed samples, the uranium variation within the deposit was studied. The spatial variation, which comprises a large random component, was modelled, and the intrinsic function was used to establish grade-tonnage curves by means of the best linear unbiased estimator of geostatistics (kriging). Data obtained by a ground-surface gamma-spectrometric survey show that the uranium variation is possibly subject to a spatial anisotropy consistent with the geology. The uranium variation exhibits second-order stationarity. A global estimation of the total reserves shows that single block grade values are always estimated with high errors. This is mainly caused by the poor spatial structure and the very sparse sampling pattern. The best way to solve this problem appears to be a selective type of kriging. The overall uranium reserves are estimated at 23,600 tons with a mean grade of 297 ppm (cutoff grade 250 ppm U). Studies of data from a test adit show that local geostatistical estimation can be done with acceptably small errors provided that a close sampling pattern is used. A regression relationship is established to correct field gamma-spectrometric measurements of bulk grades towards truer values. Multivariate cluster and discriminant analyses were used to classify lujavrite samples based on their trace element content. Misclassification is due to a possibly continuous transition between naujakasite lujavrite and arfvedsonite lujavrite. Some of the main mineralogical differences between the geological units are identified by the discriminating effect of the individual variables. (author)

  11. A framework for the organizational assumptions underlying safety culture

    International Nuclear Information System (INIS)

    Packer, Charles

    2002-01-01

    The safety culture of the nuclear organization can be addressed at the three levels of culture proposed by Edgar Schein. The industry literature provides a great deal of insight at the artefact and espoused value levels, although as yet it remains somewhat disorganized. There is, however, an overall lack of understanding of the assumption level of safety culture. This paper describes a possible framework for conceptualizing the assumption level, suggesting that safety culture is grounded in unconscious beliefs about the nature of the safety problem, its solution and how to organize to achieve the solution. Using this framework, the organization can begin to uncover the assumptions at play in its normal operation, decisions and events and, if necessary, engage in a process to shift them towards assumptions more supportive of a strong safety culture. (author)

  12. Different Random Distributions Research on Logistic-Based Sample Assumption

    Directory of Open Access Journals (Sweden)

    Jing Pan

    2014-01-01

    Full Text Available A logistic-based sample assumption is proposed in this paper, together with research on different random distributions through this system. It provides an assumption system for logistic-based samples, including its sample space structure. Moreover, the influence of different random distributions of the inputs has been studied through this logistic-based sample assumption system. In this paper, three different random distributions (normal, uniform, and beta) are used for testing. The experimental simulations illustrate the relationship between inputs and outputs under different random distributions. Numerical analysis then infers that the distribution of the outputs depends to some extent on that of the inputs, and that this assumption system is not an independent-increment process but is quasi-stationary.

  13. Supporting calculations and assumptions for use in WESF safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  14. Operation Cottage: A Cautionary Tale of Assumption and Perceptual Bias

    Science.gov (United States)

    2015-01-01

    but they can also set a lethal trap for unsuspecting mission planners, decision-makers, and intelligence analysts. Assumptions are extremely... the planning process, but the planning staff must not become so wedded to their assumptions that they reject or overlook information that is not in... operations specialist who had served as principal planner for the Attu invasion. Major General Charles Corlett was to command the landing force, an

  15. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the Chipilapa geothermal field, El Salvador, the geostatistical parameters were derived from the variogram calculated from the field data. The maximum correlation distance of the 'radon' samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, defining the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the spacing of the field samples was derived by means of geostatistical techniques, without losing the ability to detect the anomaly. (Author)

  16. Geostatistical modeling of a fluviodeltaic reservoir in the Huyapari Field, Hamaca area, in the Faja Petrolifera del Orinoco, Venezuela

    Energy Technology Data Exchange (ETDEWEB)

    De Ascencao, Erika M.; Munckton, Toni; Digregorio, Ricardo [Petropiar (Venezuela)

    2011-07-01

    The Huyapari field, situated within the Faja Petrolifera del Orinoco (FPO) of Venezuela, presents unique modeling problems. The field is spread over a wide area and is therefore subject to variable oil quality and a complex fluvial facies architecture. Ameriven and PDVSA have been working on characterizing the reservoirs in this field since 2000, and the aim of this paper is to present these efforts. Among other data, a 3-D seismic survey completed in 1998 and a stratigraphic framework built from 149 vertical wells were used for reservoir characterization. Geostatistical techniques such as sequential Gaussian simulation with a locally varying mean and the cloud transform were also used. The results showed that these geostatistical methods accurately represented the architecture and properties of the reservoir and its fluid distribution. This paper showed that the application of numerous different techniques in the Hamaca area permitted the reservoir complexity to be captured.

  17. Multivariate Analysis and Modeling of Sediment Pollution Using Neural Network Models and Geostatistics

    Science.gov (United States)

    Golay, Jean; Kanevski, Mikhaïl

    2013-04-01

    The present research deals with the exploration and modeling of a complex dataset of 200 measurement points of sediment pollution by heavy metals in Lake Geneva. The fundamental idea was to use multivariate Artificial Neural Networks (ANN) along with geostatistical models and tools in order to improve the accuracy and the interpretability of data modeling. The results obtained with ANN were compared to those of traditional geostatistical algorithms like ordinary (co)kriging and (co)kriging with an external drift. Exploratory data analysis highlighted a great variety of relationships (i.e. linear, non-linear, independence) between the 11 variables of the dataset (i.e. Cadmium, Mercury, Zinc, Copper, Titanium, Chromium, Vanadium and Nickel, as well as the spatial coordinates of the measurement points and their depth). Then, exploratory spatial data analysis (i.e. anisotropic variography, local spatial correlations and moving window statistics) was carried out. It was shown that the different phenomena to be modeled were characterized by high spatial anisotropies, complex spatial correlation structures and heteroscedasticity. A feature selection procedure based on General Regression Neural Networks (GRNN) was also applied to create subsets of variables enabling improved predictions during the modeling phase. The basic modeling was conducted using a Multilayer Perceptron (MLP), which is a workhorse of ANN. MLP models are robust and highly flexible tools which can incorporate, in a nonlinear manner, different kinds of high-dimensional information. In the present research, the input layer was made of either two neurons (spatial coordinates) or three neurons (when depth was added as auxiliary information to possibly capture an underlying trend), and the output layer was composed of one (univariate MLP) to eight neurons corresponding to the heavy metals of the dataset (multivariate MLP). MLP models with three input neurons can be referred to as Artificial Neural Networks with EXternal
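
    A minimal sketch of the multivariate MLP architecture described above, with three inputs (coordinates and depth) and several outputs (one per metal). The data are synthetic placeholders, and the layer sizes are illustrative rather than those used in the study.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        X = rng.uniform(size=(200, 3))                 # x, y coordinates and depth
        metals = np.column_stack([                     # e.g. Cd, Hg, Zn responses
            np.sin(4 * X[:, 0]) + X[:, 2],
            X[:, 0] * X[:, 1],
            np.cos(3 * X[:, 1]) - 0.5 * X[:, 2],
        ]) + rng.normal(scale=0.05, size=(200, 3))

        mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                           random_state=0).fit(X, metals)
        print(mlp.predict(X[:2]))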

  18. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    Science.gov (United States)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.
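
    As a hedged illustration of spatially correlated Monte Carlo inputs, the sketch below draws an unconditional Gaussian random field by Cholesky factorization of an exponential covariance, a small-scale stand-in for sequential Gaussian simulation rather than the study's method, and maps it to uniaxial compressive strength values with assumed statistics.

        import numpy as np

        rng = np.random.default_rng(6)
        pts = np.array([(x, y) for x in range(20) for y in range(20)], float)

        corr_len = 5.0
        d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
        C = np.exp(-d / corr_len) + 1e-8 * np.eye(len(pts))  # exponential covariance
        L = np.linalg.cholesky(C)

        z = L @ rng.standard_normal(len(pts))                # correlated N(0,1) field
        ucs = 50.0 + 10.0 * z                                # UCS in MPa, assumed stats
        print(ucs.reshape(20, 20)[:2, :5])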

  19. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    DISCOURSES AND THEORETICAL ASSUMPTIONS IN IT PROJECT PORTFOLIO MANAGEMENT: A REVIEW OF THE LITERATURE. In recent years, increasing interest has been put on IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant... (1) IT PPM as the top management marketplace, (2) IT PPM as the cause of social dilemmas at the lower organizational levels, (3) IT PPM as polity between different organizational interests, (4) IT PPM as power relations that suppress creativity and diversity. Our metaphors can be used by practitioners... to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations' IT PPM practices. Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases

  20. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    DISCOURSES AND THEORETICAL ASSUMPTIONS IN IT PROJECT PORTFOLIO MANAGEMENT: A REVIEW OF THE LITERATURE. In recent years, increasing interest has been put on IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant articles across various research disciplines. We find and classify a stock of 107 relevant articles into four scientific discourses: the normative, the interpretive, the critical, and the dialogical discourses, as formulated by Deetz (1996). We find that the normative discourse dominates the IT PPM... to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations' IT PPM practices. Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases

  1. On the Necessary and Sufficient Assumptions for UC Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Nielsen, Jesper Buus; Orlandi, Claudio

    2010-01-01

    We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the uniform random string model and the key-registration authority model (KRA), and provide new results for all of them. Perhaps most interestingly we show that: • For even the minimal meaningful KRA, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC securely compute any poly-time functionality if there exists a passive secure oblivious-transfer protocol for the stand-alone model. Since a KRA where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for the KRA model: any non-trivial KRA is sufficient for UC computation. • We show …

  2. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Full Text Available Classical Respondent-Driven Sampling (RDS estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
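
    As a rough, assumption-laden illustration of the question being tested, the sketch below runs referral chains as random walks over a synthetic contact network and compares inverse-degree-weighted (Volz-Heckathorn style) estimates with and without replacement; networkx is assumed to be available, and the network and trait are invented:

```python
import random
import networkx as nx

def rds_walk(G, n_samples, with_replacement, rng):
    """Single-referral RDS approximated as a random walk on the network."""
    node = rng.choice(list(G.nodes))
    sample, seen = [node], {node}
    while len(sample) < n_samples:
        nbrs = list(G.neighbors(node))
        if not with_replacement:
            fresh = [v for v in nbrs if v not in seen]
            nbrs = fresh or nbrs  # if stuck, permit a revisit to keep moving
        node = rng.choice(nbrs)
        sample.append(node)
        seen.add(node)
    return sample

def vh_estimate(G, sample, attr):
    """Volz-Heckathorn style estimator: inverse-degree weighted mean."""
    weights = [1.0 / G.degree(v) for v in sample]
    values = [G.nodes[v][attr] for v in sample]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

rng = random.Random(7)
G = nx.barabasi_albert_graph(2000, 3, seed=7)
# Hypothetical trait correlated with degree, so naive sample means are biased
for v in G.nodes:
    G.nodes[v]["trait"] = 1.0 if G.degree(v) > 6 else 0.0

true_mean = sum(nx.get_node_attributes(G, "trait").values()) / G.number_of_nodes()
for frac in (0.1, 0.4):
    n = int(frac * G.number_of_nodes())
    wr = vh_estimate(G, rds_walk(G, n, True, rng), "trait")
    wor = vh_estimate(G, rds_walk(G, n, False, rng), "trait")
    print(f"fraction={frac:.0%}  with={wr:.3f}  without={wor:.3f}  true={true_mean:.3f}")
```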

  3. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG) model [borges99data]. These techniques typically rely on the Markov assumption with history depth n, i.e., it is assumed that the next requested page is only dependent on the last n pages visited. This is not always valid, i.e., false browsing patterns may be discovered. However, to our knowledge there has been no systematic study of the validity of the Markov assumption wrt. web usage mining and the resulting quality of the mined browsing patterns. In this paper we systematically investigate the quality of browsing patterns mined from structures based on the Markov assumption. Formal …
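
    To make the assumption concrete, the toy below (hypothetical sessions, not the HPG implementation) fits a history depth n = 1 model by counting page-to-page transitions; the Markov assumption is exactly the claim that these counts suffice to predict the next request:

```python
from collections import Counter, defaultdict

# Hypothetical clickstream sessions (sequences of requested pages)
sessions = [
    ["home", "products", "cart", "checkout"],
    ["home", "products", "reviews", "products", "cart"],
    ["home", "reviews", "products", "cart", "checkout"],
]

# History depth n=1: the next page is assumed to depend only on the current page
transitions = defaultdict(Counter)
for s in sessions:
    for cur, nxt in zip(s, s[1:]):
        transitions[cur][nxt] += 1

def next_page_probs(page):
    counts = transitions[page]
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

# P(next | "products") pools all contexts; a depth-2 model would keep
# "home->products" and "reviews->products" separate and may disagree,
# which is precisely where the depth-1 assumption can break down.
print(next_page_probs("products"))
```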

  4. Evolution of Requirements and Assumptions for Future Exploration Missions

    Science.gov (United States)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team-internal assumptions, planning for system integration in early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may differ for the same technology depending on whether the hardware is a demonstration system on the International Space Station or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements, together with explanations of the driving scenarios, constraints, and other issues behind them.

  5. Geostatistical techniques to assess the influence of soil density on sugarcane productivity

    Science.gov (United States)

    Marques, Karina; Silva, Wellington; Almeida, Ceres; Bezerra, Joel; Almeida, Brivaldo; Siqueira, Glecio

    2013-04-01

    Spatial variation in some soil properties over short distances occurs even in homogeneous areas within the same soil class, and it can influence crop productivity. This variability must be incorporated into the procedures and techniques used in agriculture, and it must therefore be characterized so that agricultural practices can be optimized. This study aimed to evaluate the influence of soil density on sugarcane productivity using geostatistical techniques. The area is located in the municipality of Rio Formoso, Pernambuco (Brazil), at latitude 08°38'91"S and longitude 35°16'08"W, where the climate is rainy tropical. A total of 243 georeferenced undisturbed soil samples (clods) were collected in a lowland area at three depths (0-20, 20-40 and 40-60 cm) on a grid with 15 x 30 m spacing. The total area covers 7.5 ha, divided equally into three subareas. Statistical and geostatistical analyses were performed. Soil density was found to increase with depth, and bulk density can be used as an index of relative compaction. Machine weight, track or tire design, and soil water content at the time of traffic are some of the factors that determine the amount of soil compaction and the resulting changes in the plant root environment; these factors may explain the highest soil densities being found in subarea 1, which was intensively mechanized and shows poor drainage and seasonal flooding. Based on the fitted semivariogram models, soil density showed spatial dependence in subarea 1 at all depths (Gaussian at 0-20 cm, spherical at both 20-40 and 40-60 cm). In contrast, exponential models were fitted in subarea 2 at the 0-20 and 40-60 cm depths, and a Gaussian model in subarea 3 at 0-20 cm. A pure nugget effect was found at the 20-40 cm depth in subareas 2 and 3, and at 40-60 cm in subarea 3. Subarea 1 had higher soil density and lower sugarcane productivity, consistent with root development and nutrient uptake being directly influenced by soil density.
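
    For reference, fitting the models named above to an empirical semivariogram takes only a few lines; this sketch fits a spherical model with scipy's curve_fit to hypothetical lag/semivariance values, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical semivariogram: rises to nugget+sill at range a, flat beyond."""
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, inside, nugget + sill)

# Hypothetical empirical semivariogram (lag distance in m, semivariance)
lags = np.array([15, 30, 45, 60, 75, 90, 105, 120])
gamma = np.array([0.010, 0.018, 0.024, 0.028, 0.030, 0.031, 0.031, 0.032])

(nugget, sill, a), _ = curve_fit(spherical, lags, gamma,
                                 p0=[0.005, 0.025, 80.0], bounds=(0, np.inf))
print(f"nugget={nugget:.4f}  partial sill={sill:.4f}  range={a:.1f} m")
# A nugget/(nugget+sill) ratio near 1 would indicate a pure nugget effect,
# i.e., no usable spatial dependence at the sampled lags.
```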

  6. Geostatistical analysis of morphometric features in the selected parts of the Sudetes (SW Poland)

    Science.gov (United States)

    Pawlowski, Lukasz; Szymanowski, Mariusz; Migon, Piotr

    2017-04-01

    Recent years have brought rapid development of quantitative techniques that are successfully applied in geomorphology. They open up new interpretation possibilities, even in seemingly very well recognized areas. In particular, geomorphometric and geostatistical techniques, integrated in Geographic Information Systems, allow us to look at the spatial pattern of landforms and process signatures from a new perspective. The morphology of the Sudetes, as of other mountain ranges in central Europe, is the result of protracted interaction between tectonic and surface processes, passive geological factors such as lithology and structure, and the passage of time. This raises the question whether, and to what extent, these different controls and signals have resulted in similarities or differences in the morphometric structure of different parts of the same mountain range. In this paper we assume that geomorphic signals of various origins are expressed by a set of primary and secondary topographic attributes, which can be further analyzed as regionalized variables and modelled using geostatistical methods. Special attention is paid to variogram modelling, which allows the identification of the spatial structure of the morphometric characteristics, its spatial scale and direction, reflected in the quantitative parameters of variograms (model functions, range, sill, nugget, anisotropy). These parameters are compared across areas to find (dis-)similarities between different parts of the Sudetes. Thus, the main goals of the paper are: 1. To evaluate the usefulness of variogram modelling of topographic attributes for quantifying the spatial morphometric structure of mountain areas, using the example of medium-altitude, non-glaciated mountain terrain. 2. To compare different parts of the Sudetes to find similarities and differences between them and to interpret the findings through examination of the geology and geomorphology of the region. The analysis …
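
    The object underlying this analysis, the empirical semivariogram of a topographic attribute, can be computed directly. The sketch below (synthetic points, not the Sudetes data) bins half squared differences by lag distance; a directional variant for anisotropy would simply restrict the pairs to an azimuth window before binning:

```python
import numpy as np
from scipy.spatial.distance import pdist

def empirical_semivariogram(coords, z, n_bins=12):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over point pairs in each lag bin."""
    d = pdist(coords)                        # pairwise distances
    dz2 = 0.5 * pdist(z[:, None]) ** 2       # 0.5 * squared value differences
    edges = np.linspace(0, d.max() / 2, n_bins + 1)  # use half the max distance
    centers, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(dz2[mask].mean())
    return np.array(centers), np.array(gamma)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1000, size=(300, 2))   # synthetic locations (m)
z = np.sin(coords[:, 0] / 200) + 0.1 * rng.standard_normal(300)  # attribute

h, g = empirical_semivariogram(coords, z)
for hi_, gi in zip(h, g):
    print(f"lag {hi_:6.1f} m  gamma {gi:.4f}")
```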

  7. Factors affecting paddy soil arsenic concentration in Bangladesh: prediction and uncertainty of geostatistical risk mapping.

    Science.gov (United States)

    Ahmed, Zia U; Panaullah, Golam M; DeGloria, Stephen D; Duxbury, John M

    2011-12-15

    Knowledge of the spatial correlation of soil arsenic (As) concentrations with environmental variables is needed to assess the nature and extent of the risk of As contamination from irrigation water in Bangladesh. We analyzed 263 paired groundwater and paddy soil samples covering highland (HL) and medium highland-1 (MHL-1) land types for geostatistical mapping of soil As and delineation of As contaminated areas in Tala Upazilla, Satkhira district. We also collected 74 non-rice soil samples to assess the baseline concentration of soil As for this area. The mean soil As concentrations (mg/kg) for different land types under rice and non-rice crops were: rice-MHL-1 (21.2)>rice-HL (14.1)>non-rice-MHL-1 (11.9)>non-rice-HL (7.2). Multiple regression analyses showed that irrigation water As, Fe, land elevation and years of tubewell operation are the important factors affecting the concentrations of As in HL paddy soils. Only years of tubewell operation affected As concentration in the MHL-1 paddy soils. Quantitatively similar increases in soil As above the estimated baseline-As concentration were observed for rice soils on HL and MHL-1 after 6-8 years of groundwater irrigation, implying strong retention of As added in irrigation water in both land types. Application of single geostatistical methods with secondary variables such as regression kriging (RK) and ordinary co-kriging (OCK) gave little improvement in prediction of soil As over ordinary kriging (OK). Comparing single prediction methods, kriging within strata (KWS), the combination of RK for HL and OCK for MHL-1, gave more accurate soil As predictions and showed the lowest misclassification of declaring a location "contaminated" with respect to 14.8 mg As/kg, the highest value obtained for the baseline soil As concentration. Prediction of soil As buildup over time indicated that 75% of the soils cropped to rice would contain at least 30 mg/kg As by the year 2020. Copyright © 2011 Elsevier B.V. All rights reserved.
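
    As a pointer to practice, ordinary kriging of the kind compared here can be reproduced with the open-source pykrige package (assumed installed); the coordinates, values and grid below are fabricated placeholders, with the study's 14.8 mg/kg baseline reused as the classification threshold:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(3)
# Fabricated sample locations (km) and soil-As values (mg/kg)
x = rng.uniform(0, 10, 60)
y = rng.uniform(0, 10, 60)
arsenic = 10 + 2 * x + rng.normal(0, 2, 60)

ok = OrdinaryKriging(x, y, arsenic, variogram_model="spherical", nlags=8)
gridx = np.linspace(0, 10, 50)
gridy = np.linspace(0, 10, 50)
z_hat, ss = ok.execute("grid", gridx, gridy)  # predictions, kriging variances

# Declare a cell "contaminated" if the prediction exceeds the 14.8 mg/kg
# baseline reported in the study
contaminated = z_hat > 14.8
print(f"fraction of grid flagged: {float(contaminated.mean()):.2f}")
```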

  8. Changing Assumptions and Progressive Change in Theories of Strategic Organization

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Hallberg, Niklas L.

    2017-01-01

    A commonly held view is that strategic organization theories progress as a result of a Popperian process of bold conjectures and systematic refutations. However, our field also witnesses vibrant debates or disputes about the specific assumptions that our theories rely on, and although these debates are often decoupled from the results of empirical testing, changes in assumptions seem closely intertwined with theoretical progress. Using the case of the resource-based view, we suggest that progressive change in theories of strategic organization may come about as a result of scholarly debate and dispute …

  9. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. Such errors, propagated exponentially, can lead to severely sub-optimal plans. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all …
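
    The cost of the independence assumption is easy to demonstrate on a fabricated toy table: when two attributes are correlated, multiplying per-attribute selectivities can be off by a large factor.

```python
from collections import Counter

# Fabricated car table: make and model are strongly correlated
rows = [("honda", "accord")] * 40 + [("honda", "civic")] * 10 + \
       [("toyota", "corolla")] * 40 + [("toyota", "camry")] * 10

n = len(rows)
make_counts = Counter(r[0] for r in rows)
model_counts = Counter(r[1] for r in rows)

# Predicate: make = 'honda' AND model = 'accord'
true_sel = sum(1 for r in rows if r == ("honda", "accord")) / n
indep_sel = (make_counts["honda"] / n) * (model_counts["accord"] / n)

print(f"true selectivity      = {true_sel:.3f}")   # 0.400
print(f"independence estimate = {indep_sel:.3f}")  # 0.5 * 0.4 = 0.200
# The optimizer underestimates the result size by 2x here; with more
# attributes the error compounds multiplicatively, i.e., is "propagated
# exponentially" in the sense described above.
```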

  10. Integration of dynamical data in a geostatistical model of reservoir; Integration des donnees dynamiques dans un modele geostatistique de reservoir

    Energy Technology Data Exchange (ETDEWEB)

    Costa Reis, L.

    2001-01-01

    We have developed in this thesis a methodology for the integrated characterization of heterogeneous reservoirs, from geologic modeling to history matching. This methodology is applied to the reservoir PBR, situated in Campos Basin, offshore Brazil, which has been producing since June 1979. This work is an extension of two other theses concerning geologic and geostatistical modeling of the reservoir PBR from well data and seismic information. We extended the geostatistical litho-type model to the whole reservoir by using a particular approach of the non-stationary truncated Gaussian simulation method. This approach facilitated the application of the gradual deformation method to history matching. The main stages of the methodology for dynamic data integration in a geostatistical reservoir model are presented. We constructed a reservoir model, and the initial difficulties in the history matching led us to modify some choices in the geological, geostatistical and flow models. These difficulties show the importance of dynamic data integration in reservoir modeling. The petrophysical properties were assigned within the litho-types by using well test data, with an inversion procedure used to evaluate the petrophysical parameters of the litho-types. Up-scaling is a necessary stage to reduce the flow simulation time. We compared several up-scaling methods and show that the passage from the fine geostatistical model to the coarse flow model should be done very carefully. The choice of the fitting parameter depends on the objective of the study. In the case of the reservoir PBR, where water is injected in order to improve the oil recovery, the water rate of the producing wells is directly related to the reservoir heterogeneity; the water rate was therefore chosen as the fitting parameter. We obtained significant improvements in the history matching of the reservoir PBR, first by using a method we have proposed, called patchwork, which allows us to build a coherent …

  11. Identification of high-permeability subsurface structures with multiple point geostatistics and normal score ensemble Kalman filter

    Science.gov (United States)

    Zovi, Francesco; Camporese, Matteo; Hendricks Franssen, Harrie-Jan; Huisman, Johan Alexander; Salandin, Paolo

    2017-05-01

    Alluvial aquifers are often characterized by the presence of braided, highly permeable paleo-riverbeds, which constitute an interconnected preferential flow network whose localization is of fundamental importance for predicting flow and transport dynamics. Classic geostatistical approaches based on two-point correlation (i.e., the variogram) cannot describe such particular shapes. In contrast, multiple point geostatistics can describe almost any kind of shape using the empirical probability distribution derived from a training image. However, even with a correct training image the exact positions of the channels are uncertain. State information like groundwater levels can constrain the channel positions using inverse modeling or data assimilation, but the method must be able to handle non-Gaussianity of the parameter distribution. Here the normal score ensemble Kalman filter (NS-EnKF) was chosen as the inverse conditioning algorithm to tackle this issue. Multiple point geostatistics and NS-EnKF have already been tested in synthetic examples, but in this study they are used for the first time in a real-world case study. The test site is an alluvial unconfined aquifer in northeastern Italy with an extension of approximately 3 km². A satellite training image showing the braid shapes of the nearby river and electrical resistivity tomography (ERT) images were used as conditioning data to provide information on channel shape, size, and position. Measured groundwater levels were assimilated with the NS-EnKF to update the spatially distributed groundwater parameters (hydraulic conductivity and storage coefficients). Results from the study show that the inversion based on multiple point geostatistics does not outperform the one with a multiGaussian model and that the information from the ERT images did not improve site characterization. These results were further evaluated with a synthetic study that mimics the experimental site. The synthetic results showed that only for a much …
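
    The normal-score device at the heart of the NS-EnKF is a rank-based mapping to a standard Gaussian; a minimal stand-alone version (not the authors' implementation) is:

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_score(x):
    """Map data to standard-normal scores by rank (ties averaged)."""
    ranks = rankdata(x)              # 1..n
    p = (ranks - 0.5) / len(x)       # plotting positions in (0, 1)
    return norm.ppf(p)

# Strongly skewed, non-Gaussian parameter (e.g., hydraulic conductivity)
rng = np.random.default_rng(5)
k = rng.lognormal(mean=-9, sigma=2, size=1000)

z = normal_score(k)
print("skewness before:", float(((k - k.mean()) ** 3).mean() / k.std() ** 3))
print("skewness after :", float(((z - z.mean()) ** 3).mean() / z.std() ** 3))
# The EnKF update is applied to z, which is marginally Gaussian; results are
# mapped back through the empirical quantiles afterwards.
```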

  12. Geostatistical methods for rock mass quality prediction using borehole and geophysical survey data

    Science.gov (United States)

    Chen, J.; Rubin, Y.; Sege, J. E.; Li, X.; Hehua, Z.

    2015-12-01

    For long, deep tunnels, the number of geotechnical borehole investigations during the preconstruction stage is generally limited. Yet tunnels are often constructed in geological structures with complex geometries, in which the rock mass has been fragmented by past structural deformations. Tunnel Geology Prediction (TGP) is a geophysical technique widely used during tunnel construction in China to ensure safety during construction and to prevent geological disasters. In this paper, geostatistical techniques were applied in order to integrate seismic velocity from TGP and borehole information into spatial predictions of RMR (Rock Mass Rating) in unexcavated areas. The approach applies conditional probability methods to transform seismic velocities into directly observed RMR values. The initial spatial distribution of RMR, inferred from the boreholes, was updated by including geophysical survey data in a co-kriging approach. Application of the method to a real tunnel project shows significant improvements in rock mass quality predictions after including geophysical survey data, leading to better decision-making for construction safety design.

  13. Statistics and geostatistics: Kriging and use of hemivariogram functions in the structural investigation of uranium deposits

    International Nuclear Information System (INIS)

    Lucero Michaut, H.N.

    1980-01-01

    After presenting some general conceptual considerations regarding the theory of regionalized variables, the paper deals with specific applications of the intrinsic dispersion law to the determination, description and quantification of structures. It then briefly describes two uranium deposits in Cordoba province, the study of which yielded the basic data and parameters for compiling the geostatistical results presented. Before taking up the matter of structural interpretations, it refers briefly to the mathematical relationship between the number of sampling points available and the number of directions that can be investigated by the variogram method and also emphasizes the need for quantifying regionalization concepts on the basis of a table of absolute dimensionalities. In the case of the ''Rodolfo'' deposit it presents and comments on the hemivariograms for concentrations, thicknesses and accumulations, drawing attention at the same time to the existence of significant nest-like phenomena (gigogne structures). In this connection there is also a discussion of the case of iterative lenticular mineralization on a natural and a simulated model. The ''Schlagintweit'' deposit is dealt with in the same way, with descriptions and evaluations of the subjacent structures revealed by the hemivariographic analysis of grades, mineralization thicknesses and accumulations. This is followed by some considerations on the possibility of applying Krige and Matheron correctors in the moderation of anomalous mineralized thicknesses. In conclusion, the paper presents a ''range ellipse'' for grades; this is designed to supplement the grid of sampling points for the ''Rodolfo'' deposit by means of Matheronian kriging techniques. (author)

  14. Geostatistical methods in the assessment of the spatial variability of the quality of river water

    Science.gov (United States)

    Krasowska, Małgorzata; Banaszuk, Piotr

    2017-11-01

    The research was conducted in an agricultural catchment in north-eastern Poland. The aim of this study was to examine how geostatistical analysis can help detect the zones and pathways through which water from different sources enters a stream. The work included hydrochemical profiling, carried out by measuring electrical conductivity (EC) and temperature along the river. On the basis of these results, the authors calculated Moran's I coefficient and computed a semivariogram, finding that EC values are spatially correlated over a stretch of about 140 m. This means that the degree of water mineralization along this section is shaped by water entering the river channel in different ways: through tributaries, drainage leachate and surface runoff. In the analyzed catchment, the potential sources of pollution were drainage systems. The spatial analysis therefore allowed the identification of pollution sources, which is particularly valuable in drained agricultural catchments.
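
    Moran's I, used here to detect along-stream correlation, reduces to a single formula; the sketch below computes it for hypothetical EC values with neighbors defined as adjacent sampling points:

```python
import numpy as np

def morans_i(x, W):
    """Moran's I: I = (n / sum(W)) * (xc^T W xc) / (xc^T xc)."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    n = len(x)
    return (n / W.sum()) * (xc @ W @ xc) / (xc @ xc)

# Hypothetical EC values (uS/cm) at equally spaced points along the stream
ec = np.array([410, 415, 430, 440, 455, 450, 445, 460, 470, 480], float)

# Binary contiguity weights: each point's neighbors are its upstream and
# downstream sampling points
n = len(ec)
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1

print("Moran's I:", round(morans_i(ec, W), 3))
# Values near +1 indicate smooth along-stream structure; values near the
# null expectation -1/(n-1) indicate spatial randomness.
```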

  15. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.
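
    The essence of the method as described is that each update touches only a small subsample, so a large covariance matrix is never inverted. A schematic version follows; the finite-difference gradient and exponential covariance are simplifying assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np
from scipy.spatial.distance import cdist

def neg_loglik(log_theta, coords, z):
    """Negative log-likelihood of a zero-mean Gaussian process
    with exponential covariance C(h) = var * exp(-h / rho)."""
    var, rho = np.exp(log_theta)     # log-scale keeps parameters positive
    C = var * np.exp(-cdist(coords, coords) / rho) + 1e-8 * np.eye(len(z))
    L = np.linalg.cholesky(C)
    a = np.linalg.solve(L, z)
    return 0.5 * (a @ a) + np.log(np.diag(L)).sum()

def fd_grad(f, t, eps=1e-4):
    """Central finite-difference gradient (a simplification of the score)."""
    g = np.zeros_like(t)
    for i in range(len(t)):
        e = np.zeros_like(t)
        e[i] = eps
        g[i] = (f(t + e) - f(t - e)) / (2 * eps)
    return g

rng = np.random.default_rng(2)
n, m = 1000, 100                     # full data size, subsample size
coords = rng.uniform(0, 100, size=(n, 2))
# Synthetic data with true variance 1 and range 10 (generation only; the
# estimation loop below never factorizes the full n x n matrix)
C_true = np.exp(-cdist(coords, coords) / 10.0)
z = np.linalg.cholesky(C_true + 1e-8 * np.eye(n)) @ rng.standard_normal(n)

theta = np.log([0.5, 5.0])           # starting values: variance, range
for k in range(200):                 # a short run, for illustration only
    idx = rng.choice(n, size=m, replace=False)   # resample a small subset
    g = fd_grad(lambda t: neg_loglik(t, coords[idx], z[idx]), theta)
    theta -= (1.0 / (k + 20)) * g / m            # decaying, mean-scaled step
print("estimated (variance, range):", np.exp(theta).round(2))
```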

  16. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming; Dai, Zhenxue; Zachara, John; Chen, Xingyuan

    2017-03-01

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of the level set is introduced to build a shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in the hydraulic head field with better accuracy compared to data assimilation with no constraints on the spatial continuity of facies.
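
    A compressed view of the ensemble update at the core of such schemes, written as a generic stochastic EnKF with perturbed observations plus a level-set threshold; the dimensions, observation operator and values are fabricated, and the real method adds the indicator-geostatistics conditioning described above:

```python
import numpy as np

rng = np.random.default_rng(11)
n_state, n_ens, n_obs = 500, 100, 20

# Forecast ensemble of the continuous level-set field (one column per member)
X = rng.standard_normal((n_state, n_ens))

# Fabricated linear observation operator and observation-error covariance
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), rng.choice(n_state, n_obs, replace=False)] = 1.0
R = 0.1 * np.eye(n_obs)
d = rng.standard_normal(n_obs)      # "observed heads" (placeholder values)

# Ensemble (cross-)covariances
A = X - X.mean(axis=1, keepdims=True)
HX = H @ X
HA = HX - HX.mean(axis=1, keepdims=True)
P_xy = A @ HA.T / (n_ens - 1)
P_yy = HA @ HA.T / (n_ens - 1) + R

# Kalman gain and update with perturbed observations
K = np.linalg.solve(P_yy, P_xy.T).T
D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
X_a = X + K @ (D - HX)

# Level-set transformation: the sign of the continuous field selects the facies
facies = (X_a > 0.0).astype(int)    # 0/1 facies indicator per ensemble member
print("posterior facies-1 proportion:", facies.mean().round(3))
```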

  17. The detection of thermophilous forest hotspots in Poland using geostatistical interpolation of plant richness

    Directory of Open Access Journals (Sweden)

    Marcin Kiedrzyński

    2014-07-01

    Full Text Available Attempts to study biodiversity hotspots on a regional scale should combine compositional and functionalist criteria. The detection of hotspots in this study uses one ecologically similar group of high conservation value species as hotspot indicators, as well as focal habitat indicators, to detect the distribution of suitable environmental conditions. The method is assessed with reference to thermophilous forests in Poland – key habitats for many rare and relict species. Twenty-six high conservation priority species were used as hotspot indicators, and ten plant taxa characteristic of the Quercetalia pubescenti-petraeae phytosociological order were used as focal habitat indicators. Species distribution data was based on a 10 × 10 km grid. The number of species per grid square was interpolated by the ordinary kriging geostatistical method. Our analysis largely determined the distribution of areas with concentration of thermophilous forest flora, but also regional disjunctions and geographical barriers. Indicator species richness can be interpreted as a reflection of the actual state of habitat conditions. It can also be used to determine the location of potential species refugia and possible past and future migration routes.

  18. Epidemiological study of hazelnut bacterial blight in central Italy by using laboratory analysis and geostatistics.

    Directory of Open Access Journals (Sweden)

    Jay Ram Lamichhane

    Full Text Available Incidence of Xanthomonas arboricola pv. corylina, the causal agent of hazelnut bacterial blight, was analyzed spatially in relation to pedoclimatic factors. Hazelnut grown in twelve municipalities situated in the province of Viterbo, central Italy, was studied. A consistent number of bacterial isolates were obtained from the infected tissues of hazelnut collected over three years (2010-2012). The isolates, characterized by phenotypic tests, did not show any difference among them. Spatial patterns of pedoclimatic data, analyzed by geostatistics, showed a strong positive correlation of disease incidence with higher values of rainfall, thermal shock and soil nitrogen; a weak positive correlation with soil aluminium content; and a strong negative correlation with the values of the Mg/K ratio. No correlation of the disease incidence was found with soil pH. Disease incidence ranged from very low (<1%) to very high (almost 75%) across the orchards. Young plants (4-year old) were the most affected by the disease, confirming a weak negative correlation of disease incidence with plant age. Plant cultivars did not show any difference in susceptibility to the pathogen. The possible role of climate change in the epidemiology of the disease is discussed. Improved management practices are recommended for effective control of the disease.

  19. Characterisation and geostatistical analysis of clay rocks in underground facilities using hyper-spectral images

    International Nuclear Information System (INIS)

    Becker, J.K.; Marschall, P.; Brunner, P.; Cholet, C.; Renard, P.; Buckley, S.; Kurz, T.

    2012-01-01

    Document available in extended abstract form only. Flow and transport processes in geological formations are controlled by the porosity and permeability, which in turn are mainly controlled by the fabric and the mineralogical composition of the rock. For the assessment of transport processes in water-saturated claystone formations, the relevant scales range essentially from kilometers to nanometers. The spatial variability of the mineralogical composition is a key indicator for the separation of transport scales and for the derivation of the effective transport properties at a given scale. Various laboratory and in-situ techniques are available for characterizing the mineralogical composition of a rock on different scales. The imaging spectroscopy presented in this paper is a new site investigation method suitable for mapping the mineralogical composition of geological formations in 2D on a large range of scales. A combination of imaging spectrometry with other site characterization methods allows the inference of the spatial variability of the mineralogical composition in 3D over a wide range of scales with the help of advanced geostatistical methods. The method of image spectrometry utilizes the fact that the reflection of electromagnetic radiation from a surface is a function of the wavelength, the chemical-mineralogical surface properties, and physical parameters such as the grain size and surface roughness. In remote sensing applications using the sun as the light source, the reflectance is measured within the visible and infrared range, according to the atmospheric transmissibility. Many rock-forming minerals exhibit diagnostic absorption features within this range, which are caused by electronic and vibrational processes within the crystal lattice. The exact wavelength of an absorption feature is controlled by the type of ion, as well as the position of the ion within the lattice. Spectral signatures of minerals are described by a number of authors

  20. Application of Geostatistical Methods and Machine Learning for spatio-temporal Earthquake Cluster Analysis

    Science.gov (United States)

    Schaefer, A. M.; Daniell, J. E.; Wenzel, F.

    2014-12-01

    Earthquake clustering is an increasingly important part of general earthquake research, especially in terms of seismic hazard assessment and earthquake forecasting and prediction approaches. The distinct identification and definition of foreshocks, aftershocks, mainshocks and secondary mainshocks is handled using a point-based spatio-temporal clustering algorithm originating from the field of classic machine learning. This can further be applied for declustering purposes, to separate background seismicity from triggered seismicity. The results are interpreted and processed to assemble 3D (x, y, t) earthquake clustering maps based on smoothed seismicity records in space and time. In addition, multi-dimensional Gaussian functions are used to capture clustering parameters for spatial distribution and dominant orientations. Clusters are further processed using methodologies originating from geostatistics, which have mostly been applied and developed in mining projects during recent decades. A 2.5D variogram analysis is applied to identify spatio-temporal homogeneity in terms of earthquake density and energy output, and the results are interpolated using kriging to provide an accurate mapping of clustering features. As a case study, seismic data of New Zealand and the United States are used, covering events since the 1950s, from which an earthquake cluster catalogue is assembled for most of the major events, including a detailed analysis of the Landers and Christchurch sequences.
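
    A point-based spatio-temporal clustering of the kind described can be sketched with scikit-learn's DBSCAN by rescaling time into a pseudo-distance; the catalogue, time scaling and thresholds below are invented for illustration and are not the authors' algorithm:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)

# Fabricated catalogue: scattered background events plus one tight
# aftershock sequence in space and time
background = np.column_stack([rng.uniform(0, 500, 300),    # x (km)
                              rng.uniform(0, 500, 300),    # y (km)
                              rng.uniform(0, 3650, 300)])  # t (days)
sequence = np.column_stack([120 + rng.normal(0, 5, 80),
                            340 + rng.normal(0, 5, 80),
                            1000 + rng.exponential(20, 80)])
events = np.vstack([background, sequence])

# Convert time to an equivalent distance: here 1 day ~ 0.5 km (a tunable choice)
X = events.copy()
X[:, 2] *= 0.5

labels = DBSCAN(eps=15.0, min_samples=10).fit_predict(X)
print("clusters found:", len(set(labels) - {-1}))
print("events labelled background (-1):", int((labels == -1).sum()))
# Labelled clusters approximate triggered seismicity; label -1 is the
# declustered background catalogue.
```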

  1. The geostatistic-based spatial distribution variations of soil salts under long-term wastewater irrigation.

    Science.gov (United States)

    Wu, Wenyong; Yin, Shiyang; Liu, Honglu; Niu, Yong; Bao, Zhe

    2014-10-01

    The purpose of this study was to determine and evaluate spatial changes in soil salinity using geostatistical methods. The study focused on a suburban area of Beijing, where urban development led to water shortage and accelerated the reuse of wastewater for farm irrigation for more than 30 years. The data were processed in GIS using three different interpolation techniques: ordinary kriging (OK), disjunctive kriging (DK), and universal kriging (UK). A normality test and overall trend analysis were applied for each interpolation technique to select the best-fitted model for each soil parameter. Results showed that OK was suitable for interpolating the soil sodium adsorption ratio (SAR) and Na⁺; UK was suitable for soil Cl⁻ and pH; DK was suitable for soil Ca²⁺. The nugget-to-sill ratio was applied to evaluate the effects of structural and stochastic factors. The maps showed that the areas of non-saline soil and slightly saline soil accounted for 6.39 and 93.61%, respectively. The spatial distribution and accumulation of soil salts were significantly affected by the irrigation probabilities and drainage conditions under long-term wastewater irrigation.

  2. Spatiotemporal mapping of ground water pollution in a Greek lignite basin, using geostatistics

    Energy Technology Data Exchange (ETDEWEB)

    Modis, K. [National Technical Univ. of Athens, Athens (Greece)

    2010-07-01

    An issue of significant interest in the mining industry in Greece is the occurrence of chemical pollutants in ground water. Ammonium, nitrite and nitrate concentrations have been monitored through an extensive sampling network in the Ptolemais lignite opencast mining area in Greece. Due to intensive mining efforts in the area, the surface topology is continuously altered, affecting the life span of the water boreholes and resulting in a messy spatiotemporal distribution of data. This paper discussed the spatiotemporal mapping of ground water pollution in the Ptolemais lignite basin using geostatistics. More specifically, the spatiotemporal distribution of ground water contamination was examined by applying Bayesian maximum entropy theory, which allows merging spatial and temporal estimations in a single model. The paper provided a description of the site and discussed the materials and methods, including samples and statistics, variography, and spatiotemporal mapping. It was concluded that in the case of the Ptolemais mining area, results revealed an underlying average yearly variation pattern of pollutant concentrations. Inspection of the produced spatiotemporal maps demonstrated a continuous increase in the risk of ammonium contamination, while risk for the other two pollutants appeared in hot spots. 18 refs., 1 tab., 7 figs.

  3. Forward modeling of gravity data using geostatistically generated subsurface density variations

    Science.gov (United States)

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
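
    The discretized forward calculation described here amounts to summing the vertical attraction of every cell. A point-mass approximation (adequate for a sketch; production codes use prism formulas) with fabricated geometry:

```python
import numpy as np

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def forward_gravity(cells, rho, obs, cell_volume):
    """Vertical gravity anomaly (mGal) at obs points from density cells.
    cells: (n, 3) cell-center coordinates (m), z positive down
    rho:   (n,) density contrasts (kg/m^3)
    obs:   (m, 3) observation points (m)
    """
    dx = cells[None, :, :] - obs[:, None, :]   # (m, n, 3) separations
    r = np.linalg.norm(dx, axis=2)
    gz = G * cell_volume * rho * dx[:, :, 2] / r ** 3   # point-mass kernel
    return gz.sum(axis=1) * 1e5                # m/s^2 -> mGal

# Fabricated example: a slab of +500 kg/m^3 contrast at 200 m depth,
# discretized into 100 m cubic cells
xs = np.arange(0, 2000, 100.0)
cells = np.array([[x, 0.0, 200.0] for x in xs[5:15]])
rho = np.full(len(cells), 500.0)
obs = np.column_stack([xs, np.zeros_like(xs), np.zeros_like(xs)])

print(forward_gravity(cells, rho, obs, cell_volume=100.0 ** 3).round(3))
# Swapping `rho` for geostatistically simulated density realizations turns
# this single forward model into the stochastic suite described above.
```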

  5. [Geostatistical analysis on distribution dynamics of Myzus persicae (Sulzer) in flue-cured tobacco field].

    Science.gov (United States)

    Xia, Peng-liang; Liu, Ying-hong; Fan, Jun; Tan, Jun

    2015-02-01

    Myzus persicae, belonging to the Aphididae (Hemiptera), is an important migratory pest in tobacco fields. As nymphs and adults it sucks plant juices, promotes sooty mold, spreads tobacco virus diseases and causes heavy losses in yield and quality. The distribution pattern and dynamics of winged and wingless aphids in the field were investigated from the transplanting of tobacco to the harvesting stage of the middle tobacco leaves in Enshi, Hubei. The semivariogram characteristics were analyzed by geostatistical methods, and field migration patterns were simulated. The results showed that the population dynamics of winged aphids in Enshi followed a bimodal curve, with peaks at 3 weeks after transplanting and 2 weeks after multi-topping of tobacco leaves, and passed through a five-stage sequence: random, aggregated, random, aggregated and random. The population dynamics of wingless peach aphids followed a single-peaked curve, reaching its peak before multi-topping, and passed through a three-stage sequence of random, aggregated and random. Human activity and the hosts had considerable effects on population density. Interpolated maps of the spatial distribution could clearly reflect the dynamics of tobacco aphids. Combined with Pearson correlation analysis, we found that the population density was low and highly concentrated in the winged form during the immigration period, which is the key period for the management of peach aphids.

  6. A geostatistical approach to data harmonization - Application to radioactivity exposure data

    Science.gov (United States)

    Baume, O.; Skøien, J. O.; Heuvelink, G. B. M.; Pebesma, E. J.; Melles, S. J.

    2011-06-01

    Environmental issues such as air, groundwater pollution and climate change are frequently studied at spatial scales that cross boundaries between political and administrative regions. It is common for different administrations to employ different data collection methods. If these differences are not taken into account in spatial interpolation procedures then biases may appear and cause unrealistic results. The resulting maps may show misleading patterns and lead to wrong interpretations. Also, errors will propagate when these maps are used as input to environmental process models. In this paper we present and apply a geostatistical model that generalizes the universal kriging model such that it can handle heterogeneous data sources. The associated best linear unbiased estimation and prediction (BLUE and BLUP) equations are presented and it is shown that these lead to harmonized maps from which estimated biases are removed. The methodology is illustrated with an example of country bias removal in a radioactivity exposure assessment for four European countries. The application also addresses multicollinearity problems in data harmonization, which arise when both artificial bias factors and natural drifts are present and cannot easily be distinguished. Solutions for handling multicollinearity are suggested and directions for further investigations proposed.

  7. Determining site-specific background level with geostatistics for remediation of heavy metals in neighborhood soils

    Directory of Open Access Journals (Sweden)

    Tammy M. Milillo

    2017-03-01

    Full Text Available The choice of a relevant, uncontaminated site for the determination of site-specific background concentrations of pollutants is critical for planning remediation of a contaminated site. The guidelines used to arrive at concentration levels vary from state to state, complicating this process. The residential neighborhood of Hickory Woods in Buffalo, NY is an area where heavy metal concentrations and spatial distributions were measured to plan remediation. A novel geostatistics-based decision-making framework that relies on maps generated from indicator kriging (IK) and indicator co-kriging (ICK) of samples from the contaminated site itself is shown to be a viable alternative to the traditional method of choosing a reference site for remediation planning. GIS-based IK and ICK and map-based analyses were performed on lead and arsenic surface and subsurface datasets, and site-specific background concentration levels were determined to be 50 μg/g for lead and 10 μg/g for arsenic. With these results, a remediation plan was proposed that identified regions of interest, and maps were created to effectively communicate the results to the environmental agencies, residents and other interested parties.

  8. Geostatistics as a tool to study mite dispersion in physic nut plantations.

    Science.gov (United States)

    Rosado, J F; Picanço, M C; Sarmento, R A; Pereira, R M; Pedro-Neto, M; Galdino, T V S; de Sousa Saraiva, A; Erasmo, E A L

    2015-08-01

    Spatial distribution studies in pest management identify the locations where pest attacks on crops are most severe, enabling us to understand and predict the movement of such pests. Studies on the spatial distribution of these two mite species, however, are rather scarce. The mites Polyphagotarsonemus latus and Tetranychus bastosi are the major pests affecting physic nut plantations (Jatropha curcas). Therefore, the objective of this study was to measure the spatial distributions of P. latus and T. bastosi in physic nut plantations. Mite densities were monitored over 2 years in two different plantations, with sample locations georeferenced. The experimental data were analyzed using geostatistical methods. The total mite density was found to be higher when only one species was present (T. bastosi). When both mite species were found in the same plantation, their peak densities occurred at different times. The mites exhibited uniform spatial distributions when found at extreme densities (low or high), but aggregated distributions at intermediate densities. Mite spatial distribution models were isotropic. Mite colonization commenced at the periphery of the areas under study, whereas the high-density patches extended until they reached 30 m in diameter. This has not been reported for J. curcas plants before.

  9. A Bayesian spatio-temporal geostatistical model with an auxiliary lattice for large datasets

    KAUST Repository

    Xu, Ganggang

    2015-01-01

    When spatio-temporal datasets are large, the computational burden can lead to failures in the implementation of traditional geostatistical tools. In this paper, we propose a computationally efficient Bayesian hierarchical spatio-temporal model in which the spatial dependence is approximated by a Gaussian Markov random field (GMRF) while the temporal correlation is described using a vector autoregressive model. By introducing an auxiliary lattice on the spatial region of interest, the proposed method is not only able to handle irregularly spaced observations in the spatial domain, but it is also able to bypass the missing data problem in a spatio-temporal process. Because the computational complexity of the proposed Markov chain Monte Carlo algorithm is of the order O(n) with n the total number of observations in space and time, our method can be used to handle very large spatio-temporal datasets with reasonable CPU times. The performance of the proposed model is illustrated using simulation studies and a dataset of precipitation data from the coterminous United States.

  10. Definition of radon prone areas in Friuli Venezia Giulia region, Italy, using geostatistical tools.

    Science.gov (United States)

    Cafaro, C; Bossew, P; Giovani, C; Garavaglia, M

    2014-12-01

    Studying the geographical distribution of indoor radon concentration using geostatistical interpolation methods has become common for predicting and estimating the risk to the population. Here we analyse the case of Friuli Venezia Giulia (FVG), the north-easternmost region of Italy. The mean value and standard deviation are, respectively, 153 Bq/m³ and 183 Bq/m³; the geometric mean is 100 Bq/m³. Spatial datasets of indoor radon concentrations are usually affected by clustering and apparent non-stationarity issues, which can yield questionable results. The clustering of the present dataset appears to be non-preferential, so the areal estimations are not expected to be affected; conversely, nothing can be said about the non-stationarity issues and their effects. After discussing the correlation of geology with indoor radon concentration, it appears that clustering and non-stationarity are created by the same geologic features that influence the mean and median values, and cannot be eliminated via a map-based approach. To tackle these problems, in this work we consider multiple definitions of radon prone areas (RPA), restricted to the Quaternary areas of FVG, using extensive simulation techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Batch orographic interpolation of monthly precipitation based on free-of-charge geostatistical tools

    Science.gov (United States)

    Ledvinka, Ondrej

    2017-11-01

    The effects of possible climate change on water resources in prescribed areas (e.g. river basins) are intensively studied in hydrology. These resources are highly dependent on precipitation totals. When focusing on long-term changes in climate variables, one has to rely on station measurements, yet hydrologists need information on the spatial distribution of precipitation over whole areas, and for this purpose spatial interpolation techniques must be employed. In Czechia, where the addition of elevation co-variables has proved to be a good choice, several GIS tools exist that can produce the time series necessary for climate change analyses. Nevertheless, these tools are exclusively based on commercial software, and there is a lack of free-of-charge tools that could be used by everyone. Here, selected free-of-charge geostatistical tools were used to produce monthly precipitation time series representing six river basins in the Ore Mountains located in NW Bohemia, Czechia, and SE Saxony, Germany. The produced series span from January 1961 to December 2012. Rain-gauge data from both Czechia and Germany were used. The universal kriging technique was employed, in which the residuals of a multiple linear regression (based on elevation and coordinates) were interpolated. The final time series appear to be homogeneous.
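
    The residual-kriging recipe described, regression on elevation and coordinates followed by interpolation of the residuals, maps onto a few calls to free tools; sklearn and pykrige are assumed stand-ins here, and the station data are fabricated:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(6)

# Fabricated rain gauges: coordinates (km), elevation (m), monthly total (mm)
x, y = rng.uniform(0, 80, 120), rng.uniform(0, 60, 120)
elev = 300 + 8 * y + rng.normal(0, 50, 120)
precip = 40 + 0.08 * elev + 0.2 * x + rng.normal(0, 6, 120)

# Step 1: multiple linear regression on elevation and coordinates (the drift)
drift = np.column_stack([elev, x, y])
reg = LinearRegression().fit(drift, precip)
residuals = precip - reg.predict(drift)

# Step 2: ordinary kriging of the regression residuals
ok = OrdinaryKriging(x, y, residuals, variogram_model="exponential")

# Step 3: prediction at a target point = drift estimate + kriged residual
xt, yt, elev_t = 40.0, 30.0, 550.0
res_hat, _ = ok.execute("points", np.array([xt]), np.array([yt]))
pred = reg.predict([[elev_t, xt, yt]])[0] + float(res_hat[0])
print(f"estimated monthly precipitation: {pred:.1f} mm")
```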

  12. Integrating address geocoding, land use regression, and spatiotemporal geostatistical estimation for groundwater tetrachloroethylene.

    Science.gov (United States)

    Messier, Kyle P; Akita, Yasuyuki; Serre, Marc L

    2012-03-06

    Geographic information systems (GIS) based techniques are cost-effective and efficient methods used by state agencies and epidemiology researchers for estimating concentration and exposure. However, budget limitations have made statewide assessments of contamination difficult, especially in groundwater media. Many studies have implemented address geocoding, land use regression, and geostatistics independently, but this is the first to examine the benefits of integrating these GIS techniques to meet the need for statewide exposure assessments. A novel framework for concentration exposure is introduced that integrates address geocoding, land use regression (LUR), modeling of below-detect data, and Bayesian Maximum Entropy (BME). A LUR model was developed for tetrachloroethylene that accounts for point sources and flow direction. We then integrated the LUR model into the BME method as a mean trend while also modeling below-detect data with a truncated Gaussian probability distribution function. Multistage geocoding increased the available PCE data 4.7-fold over previously available databases. The LUR model shows significant influence of dry cleaners at short ranges. The integration of the LUR model as the mean trend in BME resulted in a 7.5% decrease in cross-validation mean square error compared with BME using a constant mean trend.

  13. Mapping mean annual and monthly river discharges: geostatistical developments for incorporating river network dependencies

    International Nuclear Information System (INIS)

    Sauquet, Eric

    2004-01-01

    Regional hydrology is one topic that shows real improvement, partly due to new statistical developments and computational facilities. Nevertheless, theoretical difficulties remain in mapping river regime characteristics or recovering these features at ungauged locations, because of the nature of the variable under study: river flows are related to a specific area defined by the drainage basin and are spatially organised by the river network, with upstream-downstream dependencies. Estimates of hydrological descriptors are required for studying links with ecological processes at different spatial scales, from local sites where biological and/or water quality data are available, to large scales for sustainable development purposes. This presentation describes a method for mapping runoff patterns along the main river network. The approach, dedicated to mean annual runoff, is based on geostatistical interpolation procedures to which a water budget constraint has been added. Expansion in Empirical Orthogonal Functions has been considered in combination with kriging for interpolating mean monthly discharges. The methodologies are implemented within a Geographical Information System and illustrated by two case studies (two large basins in France). River flow regime descriptors are estimated for basins of more than 50 km². Opportunities for collaboration with a partition of France into hydro-eco regions, derived from geological and climatic considerations, are discussed. (Author)

  14. Real-time reservoir geological model updating using the hybrid EnKF and geostatistical technique

    Energy Technology Data Exchange (ETDEWEB)

    Li, H.; Chen, S.; Yang, D. [Regina Univ., SK (Canada). Petroleum Technology Research Centre

    2008-07-01

    Reservoir simulation plays an important role in modern reservoir management. Multiple geological models are needed in order to analyze the uncertainty of a given reservoir development scenario. Ideally, dynamic data should be incorporated into a reservoir geological model. This can be done by using history matching, tuning the model to match the past performance of the reservoir. This study proposed an assisted history matching technique to accelerate and improve the matching process. The Ensemble Kalman Filter (EnKF) technique, an efficient assisted history matching method, was integrated with a conditional geostatistical simulation technique to dynamically update reservoir geological models. The updated models were constrained by dynamic data, such as reservoir pressure and fluid saturations, and remain geologically realistic at each time step through the EnKF technique. The new technique was successfully applied to a heterogeneous synthetic reservoir. The uncertainty of the reservoir characterization was significantly reduced, and more accurate forecasts were obtained from the updated models. 3 refs., 2 figs.

  15. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    Science.gov (United States)

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km²), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate the SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates that have minimum variances. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and a complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.

  16. Geostatistical Approach to Find ‘Hotspots’ Where Biodiversity is at Risk in a Transition Country

    Directory of Open Access Journals (Sweden)

    Petrişor Alexandru-Ionuţ

    2014-10-01

    Full Text Available 'Global change' is a relatively recent concept, related to the energy - land use - climate change nexus, and designated to include all changes produced by the human species and the consequences of its activities for natural ecological complexes and biodiversity. The joint effects of these drivers of change are particularly relevant to understanding changes in biodiversity. This study overlaps the results of previous studies developed in Romania to find, explain and predict potential threats to biodiversity, including the effects of very high temperatures and low precipitation, urban sprawl and deforestation, in order to identify 'hotspots' of high risk for the loss of biodiversity using geostatistical tools. The results reveal two hotspots, one in the center and the other in the south, and show that the area affected by three factors simultaneously represents 0.2% of the national territory, while paired effects cover 4% of it. The methodological advantage of this approach is its capacity to pinpoint hotspots with practical relevance. Nevertheless, its generalizing character impairs its use at the local scale.

  17. A geostatistical approach to large-scale disease mapping with temporal misalignment.

    Science.gov (United States)

    Hund, Lauren; Chen, Jarvis T; Krieger, Nancy; Coull, Brent A

    2012-09-01

    Temporal boundary misalignment occurs when area boundaries shift across time (e.g., census tract boundaries change at each census year), complicating the modeling of temporal trends across space. Large area-level datasets with temporal boundary misalignment are becoming increasingly common in practice. The few existing approaches for temporally misaligned data do not account for correlation in spatial random effects over time. To overcome issues associated with temporal misalignment, we construct a geostatistical model for aggregate count data by assuming that an underlying continuous risk surface induces spatial correlation between areas. We implement the model within the framework of a generalized linear mixed model using radial basis splines. Using this approach, boundary misalignment becomes a nonissue. Additionally, this disease-mapping framework facilitates fast, easy model fitting by using a penalized quasilikelihood approximation to maximum likelihood estimation. We anticipate that the method will also be useful for large disease-mapping datasets for which fully Bayesian approaches are infeasible. We apply our method to assess socioeconomic trends in breast cancer incidence in Los Angeles between the periods 1988-1992 and 1998-2002. © 2011, The International Biometric Society.
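    As a rough, hedged sketch of the basis-function idea: area centroids are expanded in a radial basis design matrix and fitted with a ridge penalty. This Gaussian-response stand-in only gestures at the penalized quasilikelihood fit described in the abstract; the knots, basis form, penalty, and data are all invented.

```python
import numpy as np

def rbf_design(coords, knots, scale):
    """Gaussian radial basis design matrix: one column per knot,
    evaluated at each area centroid."""
    d = np.linalg.norm(coords[:, None, :] - knots[None, :, :], axis=2)
    return np.exp(-(d / scale) ** 2)

rng = np.random.default_rng(1)
centroids = rng.uniform(0, 100, size=(200, 2))   # area centroids (invented)
knots = rng.uniform(0, 100, size=(25, 2))        # spline knots (invented)
X = np.column_stack([np.ones(200), rbf_design(centroids, knots, scale=30.0)])

# Synthetic log-risk surface and counts, for illustration only.
log_risk = np.sin(centroids[:, 0] / 20.0)
y = rng.poisson(np.exp(log_risk))

# Ridge-penalized least squares on log(y + 0.5), a crude Gaussian
# stand-in for the penalized quasilikelihood fit in the abstract.
lam = 1.0
P = lam * np.eye(X.shape[1]); P[0, 0] = 0.0      # leave intercept unpenalized
beta = np.linalg.solve(X.T @ X + P, X.T @ np.log(y + 0.5))
fitted_log_risk = X @ beta
```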

  18. Determination of homogeneous zones for liming recommendations of black pepper using geostatistics

    Directory of Open Access Journals (Sweden)

    Ivoney Gontijo

    Full Text Available ABSTRACT Studies aimed at determining homogeneous zones and the spatial variability of soil characteristics may improve the efficiency of agricultural input applications. The purpose of this study was to determine homogeneous zones for liming applications and to characterize the spatial variability of characteristics related to soil acidity and productivity in an Oxisol cultivated with black pepper (Piper nigrum L.). This study was carried out in São Mateus, state of Espírito Santo, Brazil. The experimental site was 100 x 120 m. A grid with 126 sampling points was established. Three soil sub-samples were collected at each sampling point in the black pepper canopy areas, at a 0-0.20 m depth. Crop productivity was estimated by harvesting the three plants neighboring each sampling point. Descriptive statistics and geostatistical analyses were performed. Homogeneous management zones were defined based on the map of liming needs. Mathematical models fitted to the semivariograms indicated that all of the studied variables exhibited spatial dependency. An analysis of the spatial variability, together with the definition of homogeneous zones, can be used to increase the efficiency of soil liming.
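    The semivariogram analysis underlying studies of this kind rests on the empirical (Matheron) estimator, sketched below on synthetic values for a 126-point design echoing the sampling grid; all numbers are assumed.

```python
import numpy as np

def empirical_semivariogram(xy, z, lags, tol):
    """Classical Matheron estimator: gamma(h) is half the mean squared
    increment over all point pairs whose separation falls in the lag bin."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)             # count each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d > h - tol) & (d <= h + tol)].mean() for h in lags])

rng = np.random.default_rng(2)
xy = rng.uniform(0, 120, size=(126, 2))    # 126 sampling points, as in the grid
z = 5.0 + 0.01 * xy[:, 0] + rng.normal(0, 0.2, 126)   # synthetic pH-like values
lags = np.arange(10.0, 70.0, 10.0)
print(empirical_semivariogram(xy, z, lags, tol=5.0))
```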

  19. Geostatistical analysis of tritium, groundwater age and other noble gas derived parameters in California.

    Science.gov (United States)

    Visser, A; Moran, J E; Hillegonds, Darren; Singleton, M J; Kulongoski, Justin T; Belitz, Kenneth; Esser, B K

    2016-03-15

    Key characteristics of California groundwater systems related to aquifer vulnerability, sustainability, recharge locations and mechanisms, and anthropogenic impact on recharge are revealed in a spatial geostatistical analysis of a unique data set of tritium, noble gas and other isotopic analyses, unprecedented in size at nearly 4000 samples. The correlation length of key groundwater residence time parameters varies between tens of kilometers (³H; age) and on the order of a hundred kilometers (terrigenic ⁴He; ¹⁴C; tritiogenic ³He). The correlation length of parameters related to climate, topography and atmospheric processes is on the order of several hundred kilometers (recharge temperature; δ¹⁸O). Young groundwater ages that highlight regional recharge areas are located in the eastern San Joaquin Valley, in the southern Santa Clara Valley Basin, in the upper LA basin and along unlined canals carrying Colorado River water, showing that much of the recent recharge in central and southern California is dominated by river recharge and managed aquifer recharge. Modern groundwater is found in wells with the top open intervals below 60 m depth in the southeastern San Joaquin Valley, Santa Clara Valley and Los Angeles basin, as the result of intensive pumping and/or managed aquifer recharge operations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Inverse modeling of hydraulic tests in fractured crystalline rock based on a transition probability geostatistical approach

    Science.gov (United States)

    Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel

    2011-12-01

    This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km³ that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.
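    The transition-probability idea can be illustrated with a one-dimensional Markov chain along a borehole: each cell's facies is drawn from the row of the transition matrix for the facies above it. The four facies labels echo the abstract, but the matrix values below are invented, not the calibrated Olkiluoto ones.

```python
import numpy as np

FACIES = ["sparse", "moderate", "fractured", "highly_fractured"]

# Rows: current facies; columns: probability of the next facies (assumed).
T = np.array([
    [0.80, 0.15, 0.04, 0.01],
    [0.20, 0.60, 0.15, 0.05],
    [0.05, 0.20, 0.60, 0.15],
    [0.02, 0.08, 0.30, 0.60],
])

def simulate_facies(n_cells, T, rng, start=0):
    """Draw a facies sequence cell by cell from the transition matrix."""
    seq = np.empty(n_cells, dtype=int)
    seq[0] = start
    for i in range(1, n_cells):
        seq[i] = rng.choice(len(T), p=T[seq[i - 1]])
    return seq

rng = np.random.default_rng(3)
borehole = simulate_facies(200, T, rng)
print([FACIES[f] for f in borehole[:12]])
```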

  1. The geostatistical approach for structural and stratigraphic framework analysis of offshore NW Bonaparte Basin, Australia

    International Nuclear Information System (INIS)

    Wahid, Ali; Salim, Ahmed Mohamed Ahmed; Yusoff, Wan Ismail Wan; Gaafar, Gamal Ragab

    2016-01-01

    Geostatistics is a statistical approach based on the study of temporal and spatial trends, which uses spatial relationships to model known information about variable(s) at unsampled locations. The statistical technique known as kriging was used for petrophysical and facies analysis; it assumes a spatial relationship to model geological continuity between the known data and the unknown, producing a single best guess of the unknown. Kriging is also known as an optimal interpolation technique, as it generates the best linear unbiased estimate of each horizon. The idea is to construct a numerical model of the lithofacies and rock properties that honors the available data, and to integrate it with interpreted seismic sections, a tectonostratigraphic chart with a short-term sea-level curve, and the regional tectonics of the study area in order to reconstruct the structural and stratigraphic growth history of the NW Bonaparte Basin. Models built with the kriging technique were used to estimate parameters such as horizons, facies, and porosities in the study area. Variograms were used to identify spatial relationships in the data, which helped to establish the depositional history of the North West (NW) Bonaparte Basin

  2. The geostatistical approach for structural and stratigraphic framework analysis of offshore NW Bonaparte Basin, Australia

    Energy Technology Data Exchange (ETDEWEB)

    Wahid, Ali, E-mail: ali.wahid@live.com; Salim, Ahmed Mohamed Ahmed, E-mail: mohamed.salim@petronas.com.my; Yusoff, Wan Ismail Wan, E-mail: wanismail-wanyusoff@petronas.com.my [Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 32610 Tronoh, Perak (Malaysia); Gaafar, Gamal Ragab, E-mail: gaafargr@gmail.com [Petroleum Engineering Division, PETRONAS Carigali Sdn Bhd, Kuala Lumpur (Malaysia)

    2016-02-01

    Geostatistics is a statistical approach based on the study of temporal and spatial trends, which uses spatial relationships to model known information about variable(s) at unsampled locations. The statistical technique known as kriging was used for petrophysical and facies analysis; it assumes a spatial relationship to model geological continuity between the known data and the unknown, producing a single best guess of the unknown. Kriging is also known as an optimal interpolation technique, as it generates the best linear unbiased estimate of each horizon. The idea is to construct a numerical model of the lithofacies and rock properties that honors the available data, and to integrate it with interpreted seismic sections, a tectonostratigraphic chart with a short-term sea-level curve, and the regional tectonics of the study area in order to reconstruct the structural and stratigraphic growth history of the NW Bonaparte Basin. Models built with the kriging technique were used to estimate parameters such as horizons, facies, and porosities in the study area. Variograms were used to identify spatial relationships in the data, which helped to establish the depositional history of the North West (NW) Bonaparte Basin.

  3. Models for waste life cycle assessment: Review of technical assumptions

    DEFF Research Database (Denmark)

    Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky

    2010-01-01

    waste LCA models. This review infers that some of the differences in waste LCA models are inherent to the time they were developed. It is expected that models developed later, benefit from past modelling assumptions and knowledge and issues. Models developed in different countries furthermore rely...

  4. Does Artificial Neural Network Support Connectivism's Assumptions?

    Science.gov (United States)

    AlDahdouh, Alaa A.

    2017-01-01

    Connectivism was presented as a learning theory for the digital age and connectivists claim that recent developments in Artificial Intelligence (AI) and, more specifically, Artificial Neural Network (ANN) support their assumptions of knowledge connectivity. Yet, very little has been done to investigate this brave allegation. Does the advancement…

  5. Exploring five common assumptions on Attention Deficit Hyperactivity Disorder

    NARCIS (Netherlands)

    Batstra, Laura; Nieweg, Edo H.; Hadders-Algra, Mijna

    The number of children diagnosed with attention deficit hyperactivity disorder (ADHD) and treated with medication is steadily increasing. The aim of this paper was to critically discuss five debatable assumptions on ADHD that may explain these trends to some extent. These are that ADHD (i) causes

  6. Judgment: Deductive Logic and Assumption Recognition: Grades 7-12.

    Science.gov (United States)

    Instructional Objectives Exchange, Los Angeles, CA.

    This collection of objectives and related measures deals with one side of judgment: deductive logic and assumption recognition. They are suggestive of students' ability to make judgments based on logical analysis rather than comprehensive indices of overall capacity for judgment. They include Conditional Reasoning Index, Class Reasoning Index,…

  7. Child Development Knowledge and Teacher Preparation: Confronting Assumptions.

    Science.gov (United States)

    Katz, Lilian G.

    This paper questions the widely held assumption that acquiring knowledge of child development is an essential part of teacher preparation and teaching competence, especially among teachers of young children. After discussing the influence of culture, parenting style, and teaching style on developmental expectations and outcomes, the paper asserts…

  8. Observing gravitational-wave transient GW150914 with minimal assumptions

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwa, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. C.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brocki, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderon Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; DeRosa, R. T.; Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. 
-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. R.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritsche, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; de Haas, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinder, I.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijhunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Laguna, P.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, R.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mende, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Page, J.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prolchorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. 
F.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shithriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlhruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, R. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-01-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be

  9. Origins and Traditions in Comparative Education: Challenging Some Assumptions

    Science.gov (United States)

    Manzon, Maria

    2018-01-01

    This article questions some of our assumptions about the history of comparative education. It explores new scholarship on key actors and ways of knowing in the field. Building on the theory of the social constructedness of the field of comparative education, the paper elucidates how power shapes our scholarly histories and identities.

  10. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    Science.gov (United States)

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes their futility. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is no possibility, by definition, for a content-full moral discourse among moral strangers. It means that there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and concluding that content-full morality is impossible among moral strangers. I argue that assuming traditions as solid and immutable structures that insulate people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge, besides the foundationalist one. Then, I examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how the assumptions have shaped Engelhardt's critique of the alternatives for the possibility of content-full secular bioethics.

  11. Relaxing the zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Haegeman, Bart; Etienne, Rampal S.

    2008-01-01

    The zero-sum assumption is one of the ingredients of the standard neutral model of biodiversity by Hubbell. It states that the community is saturated all the time, which in this model means that the total number of individuals in the community is constant over time, and therefore introduces a

  12. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    We model examples like reliable bit transmission and sequence transmission protocols in this framework and discuss how assumption-commitment structure facilitates compositional design of such protocols. We prove a decomposition theorem which states that every protocol specified globally as a finite state system can ...

  13. Seven Assumptions of a Solution-Focused Conversational Leader.

    Science.gov (United States)

    Paull, Robert C.; McGrevin, Carol Z.

    1996-01-01

    Effective psychologists and school leaders know how to manage conversations to help clients or stakeholders move toward solutions. This article presents the assumptions of solution-focused brief therapy in a school leadership context. Key components are focusing on solutions, finding exceptions, identifying changes, starting small, listening to…

  14. The Metatheoretical Assumptions of Literacy Engagement: A Preliminary Centennial History

    Science.gov (United States)

    Hruby, George G.; Burns, Leslie D.; Botzakis, Stergios; Groenke, Susan L.; Hall, Leigh A.; Laughter, Judson; Allington, Richard L.

    2016-01-01

    In this review of literacy education research in North America over the past century, the authors examined the historical succession of theoretical frameworks on students' active participation in their own literacy learning, and in particular the metatheoretical assumptions that justify those frameworks. The authors used "motivation" and…

  15. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    Science.gov (United States)

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  16. Posttraumatic Growth and Shattered World Assumptions Among Ex-POWs

    DEFF Research Database (Denmark)

    Lahav, Y.; Bellin, Elisheva S.; Solomon, Z.

    2016-01-01

    world assumptions (WAs) and that the co-occurrence of high PTG and negative WAs among trauma survivors reflects reconstruction of an integrative belief system. The present study aimed to test these claims by investigating, for the first time, the mediating role of dissociation in the relation between...

  17. Deep Borehole Field Test Requirements and Controlled Assumptions.

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion.

  18. 7 CFR 1980.476 - Transfer and assumptions.

    Science.gov (United States)

    2010-01-01

    ...) PROGRAM REGULATIONS (CONTINUED) GENERAL Business and Industrial Loan Program § 1980.476 Transfer and... give to secure the debt, will be adequate to secure the balance of the total guaranteed loan owed, plus... assumption provisions if the guaranteed loan debt balance is within his/her individual loan approval...

  19. Bion, basic assumptions, and violence: a corrective reappraisal.

    Science.gov (United States)

    Roth, Bennett

    2013-10-01

    Group psychoanalytic theory rests on many of the same psychoanalytic assumptions as individual psychoanalytic theory but has been slow in developing its own language and unique understanding of conflict within the group, as many group phenomena are not the same as individual psychic events. Regressive fantasies and alliances within and to the group are determined by group composition and the interaction of fantasies among members and leader. Bion's useful but incomplete early abstract formulation of psychic regression in groups was the initial attempt to move beyond Freud's largely sociological view. This paper explores some of the origins of Bion's neglect of murderous violence in groups as a result of his own experiences in the first European war. In the following, I present evidence for the existence of a violent basic assumption and offer evidence as to Bion's avoidance of murderous and violent acts.

  20. The sufficiency assumption of the reasoned approach to action

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2015-12-01

    Full Text Available The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much variance the traditional variables account for), to see whether they are important, in general or with respect to specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what really is at issue. Based on the variance law, I question this assumption.

  1. Data-driven smooth tests of the proportional hazards assumption

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2007-01-01

    Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007

  2. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
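    The measurement-error logic behind instrumental variable estimators can be shown in a toy cross-sectional version (the authors' longitudinal adaptation is more involved): regressing the outcome on one error-prone biomarker attenuates the effect, while instrumenting it with a second, independently erring biomarker removes the attenuation. All values below are simulated.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2000
# Latent true exposure (e.g., prenatal lead) and two error-prone biomarkers.
u = rng.normal(size=n)
x1 = u + rng.normal(0, 0.8, n)        # biomarker used as the regressor
x2 = u + rng.normal(0, 0.8, n)        # second biomarker used as the instrument
y = 0.5 * u + rng.normal(0, 1.0, n)   # health outcome; true effect is 0.5

# Naive OLS of y on x1 is attenuated by measurement error; the IV (Wald)
# estimator is consistent because the two biomarkers' errors are independent.
ols = np.cov(y, x1)[0, 1] / np.var(x1)
iv = np.cov(y, x2)[0, 1] / np.cov(x1, x2)[0, 1]
print(f"OLS (attenuated): {ols:.2f}, IV: {iv:.2f}")
```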

  3. About tests of the "simplifying" assumption for conditional copulas

    OpenAIRE

    Derumigny, Alexis; Fermanian, Jean-David

    2016-01-01

    We discuss the so-called “simplifying assumption” of conditional copulas in a general framework. We introduce several tests of the latter assumption for non- and semiparametric copula models. Some related test procedures based on conditioning subsets instead of point-wise events are proposed. The limiting distributions of such test statistics under the null are approximated by several bootstrap schemes, most of them being new. We prove the validity of a particular semiparametric bootstrap sch...

  4. Assumptions behind size-based ecosystem models are realistic

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.

    2016-01-01

    A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Froese et al. ... that there is indeed a constructive role for a wide suite of ecosystem models to evaluate fishing strategies in an ecosystem context...

  5. Bank stress testing under different balance sheet assumptions

    OpenAIRE

    Busch, Ramona; Drescher, Christian; Memmel, Christoph

    2017-01-01

    Using unique supervisory survey data on the impact of a hypothetical interest rate shock on German banks, we analyse price and quantity effects on banks' net interest margin components under different balance sheet assumptions. In the first year, the cross-sectional variation of banks' simulated price effect is nearly eight times as large as the one of the simulated quantity effect. After five years, however, the importance of both effects converges. Large banks adjust their balance sheets mo...

  6. Two-point versus multiple-point geostatistics: the ability of geostatistical methods to capture complex geobodies and their facies associations—an application to a channelized carbonate reservoir, southwest Iran

    Science.gov (United States)

    Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Khoshdel, Hossein

    2014-12-01

    Facies models try to explain facies architectures, which exert a primary control on subsurface heterogeneities and the fluid-flow characteristics of a given reservoir. In the process of facies modeling, geostatistical methods are implemented to integrate different sources of data into a consistent model. Facies models should describe facies interactions and the shape and geometry of the geobodies as they occur in reality. Two distinct categories of geostatistical techniques are two-point statistics and multiple-point (geo)statistics (MPS). In this study, both of the aforementioned categories were applied to generate facies models. Sequential indicator simulation (SIS) and truncated Gaussian simulation (TGS) represented the two-point geostatistical methods, and single normal equation simulation (SNESIM) was selected as the MPS representative. A dataset from an extremely channelized carbonate reservoir located in southwest Iran was applied to these algorithms to analyze their performance in reproducing complex curvilinear geobodies. The SNESIM algorithm needs consistent training images (TI) in which all possible facies architectures present in the area are included. The TI model was founded on data acquired from modern occurrences. These analogies delivered vital information about the possible channel geometries and facies classes typically present in such environments. The MPS results were conditioned to both soft and hard data. Soft facies probabilities were acquired from a neural network workflow in which seismic-derived attributes were implemented as the input data. Furthermore, MPS realizations were conditioned to hard data to guarantee the exact positioning and continuity of the channel bodies. A geobody extraction workflow was implemented to extract the most certain parts of the channel bodies from the seismic data. These extracted parts of the channel bodies were applied to the simulation workflow as hard data. This
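    The core MPS idea behind SNESIM can be conveyed by a deliberately naive sketch: scan the training image for replicates of the local data event and draw a facies from the resulting conditional frequencies. The real algorithm uses search trees and multigrid templates; everything below, including the toy channel TI, is illustrative.

```python
import numpy as np

def mps_conditional(ti, offsets, values, rng):
    """Scan a 2D training image for every pixel whose neighbours at the
    given offsets match the conditioning values, then draw a facies from
    the resulting conditional frequencies (the SNESIM idea, stripped of
    its search-tree and multigrid machinery)."""
    counts = {}
    nrow, ncol = ti.shape
    for i in range(nrow):
        for j in range(ncol):
            match = all(
                0 <= i + di < nrow and 0 <= j + dj < ncol
                and ti[i + di, j + dj] == v
                for (di, dj), v in zip(offsets, values)
            )
            if match:
                counts[ti[i, j]] = counts.get(ti[i, j], 0) + 1
    # A real implementation drops conditioning data when no replicate exists.
    facies = list(counts)
    p = np.array([counts[f] for f in facies], dtype=float)
    return rng.choice(facies, p=p / p.sum())

# Toy training image: 1 = channel, 0 = background (purely illustrative).
ti = np.zeros((30, 30), dtype=int)
ti[10:13, :] = 1                      # one horizontal channel body
rng = np.random.default_rng(4)
# Data event: the cell directly above is channel, two cells above is not.
print(mps_conditional(ti, [(-1, 0), (-2, 0)], [1, 0], rng))
```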

  7. Geostatistical investigation into the temporal evolution of spatial structure in a shallow water table

    Directory of Open Access Journals (Sweden)

    S. W. Lyon

    2006-01-01

    Full Text Available Shallow water tables near streams often lead to saturated, overland-flow-generating areas in catchments in humid climates. While these saturated areas are assumed to be principal biogeochemical hot-spots and important for issues such as non-point pollution sources, the spatial and temporal behavior of shallow water tables, and the associated saturated areas, is not completely understood. This study demonstrates how geostatistical methods can be used to characterize the spatial and temporal variation of the shallow water table for the near-stream region. Event-based and seasonal changes in the spatial structure of the shallow water table, which influences the spatial pattern of surface saturation and related runoff generation, can be identified and used in conjunction to characterize the hydrology of an area. This is accomplished through semivariogram analysis and indicator kriging to produce maps combining soft data (i.e., proxy information for the variable of interest) representing general shallow water table patterns with hard data (i.e., actual measurements) that represent variation in the spatial structure of the shallow water table per rainfall event. The area used was a hillslope in the Catskill Mountains region of New York State. The shallow water table was monitored for a 120 m×180 m near-stream region at 44 sampling locations on 15-min intervals. Outflow of the area was measured at the same time interval. These data were analyzed at a short time interval (15 min) and at a long time interval (months) to characterize the changes in the hydrologic behavior of the hillslope. Indicator semivariograms based on binary-transformed ground water table data (i.e., 1 if exceeding the time-variable median depth to water table and 0 if not) were created for both short and long time intervals. For the short time interval, the indicator semivariograms showed a high degree of spatial structure in the shallow water table for the spring, with increased range
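    The binary indicator coding described above is straightforward to reproduce. A sketch follows, assuming synthetic depths for the 44 wells at one 15-min time step; the resulting 0/1 series would then feed an indicator semivariogram in the same way continuous data feed an ordinary one.

```python
import numpy as np

def indicator_transform(wt_depth):
    """Indicator coding from the abstract: 1 where the depth to the
    water table exceeds the time-variable median, 0 otherwise."""
    return (wt_depth > np.median(wt_depth)).astype(int)

rng = np.random.default_rng(5)
# 44 wells at one 15-min time step (synthetic depths in meters).
depths = rng.gamma(shape=2.0, scale=0.4, size=44)
ind = indicator_transform(depths)
print(ind.sum(), "of", ind.size, "wells deeper than the median")
```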

  8. Geostatistical modeling of the spatial distribution of sediment oxygen demand within a Coastal Plain blackwater watershed.

    Science.gov (United States)

    Todd, M Jason; Lowrance, R Richard; Goovaerts, Pierre; Vellidis, George; Pringle, Catherine M

    2010-10-15

    Blackwater streams are found throughout the Coastal Plain of the southeastern United States and are characterized by a series of instream floodplain swamps that play a critical role in determining the water quality of these systems. Within the state of Georgia, many of these streams are listed in violation of the state's dissolved oxygen (DO) standard. Previous work has shown that sediment oxygen demand (SOD) is elevated in instream floodplain swamps and due to these areas of intense oxygen demand, these locations play a major role in determining the oxygen balance of the watershed as a whole. This work also showed SOD rates to be positively correlated with the concentration of total organic carbon. This study builds on previous work by using geostatistics and Sequential Gaussian Simulation to investigate the patchiness and distribution of total organic carbon (TOC) at the reach scale. This was achieved by interpolating TOC observations and simulated SOD rates based on a linear regression. Additionally, this study identifies areas within the stream system prone to high SOD at representative 3rd and 5th order locations. Results show that SOD was spatially correlated with the differences in distribution of TOC at both locations and that these differences in distribution are likely a result of the differing hydrologic regime and watershed position. Mapping of floodplain soils at the watershed scale shows that areas of organic sediment are widespread and become more prevalent in higher order streams. DO dynamics within blackwater systems are a complicated mix of natural and anthropogenic influences, but this paper illustrates the importance of instream swamps in enhancing SOD at the watershed scale. Moreover, our study illustrates the influence of instream swamps on oxygen demand while providing support that many of these systems are naturally low in DO.
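    Sequential Gaussian Simulation, used in the study to generate equally probable fields, visits grid nodes in random order and draws each value from the kriging distribution given the nodes already simulated. A minimal unconditional 1D sketch, assuming an exponential covariance, is shown below.

```python
import numpy as np

def sgs_1d(x_grid, cov, rng):
    """Unconditional sequential Gaussian simulation on a transect: visit
    nodes in random order, compute the simple-kriging mean and variance
    from the already-simulated nodes, then draw from that normal."""
    n = len(x_grid)
    sim = np.full(n, np.nan)
    done = []
    for k in rng.permutation(n):
        if done:
            xs = x_grid[done]
            C = cov(np.abs(xs[:, None] - xs[None, :]))
            c0 = cov(np.abs(xs - x_grid[k]))
            w = np.linalg.solve(C + 1e-9 * np.eye(len(done)), c0)
            mu = w @ sim[done]
            var = max(cov(0.0) - w @ c0, 1e-12)
        else:
            mu, var = 0.0, cov(0.0)
        sim[k] = rng.normal(mu, np.sqrt(var))
        done.append(k)
    return sim

cov = lambda h: np.exp(-np.abs(h) / 50.0)   # assumed exponential covariance
rng = np.random.default_rng(6)
toc = sgs_1d(np.arange(0.0, 500.0, 10.0), cov, rng)
# SOD could then be mapped through the reported TOC-SOD regression,
# e.g. sod = a + b * toc with coefficients fitted to field data.
```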

  9. A connectionist-geostatistical approach for classification of deformation types in ice surfaces

    Science.gov (United States)

    Goetz-Weiss, L. R.; Herzfeld, U. C.; Hale, R. G.; Hunke, E. C.; Bobeck, J.

    2014-12-01

    Deformation is a class of highly non-linear geophysical processes from which one can infer other geophysical variables in a dynamical system. For example, in an ice-dynamic model, deformation is related to velocity, basal sliding, surface elevation changes, and the stress field at the surface as well as internal to a glacier. While many of these variables cannot be observed, deformation state can be an observable variable, because deformation in glaciers (once a viscosity threshold is exceeded) manifests itself in crevasses. Given the amount of information that can be inferred from observing surface deformation, an automated method for classifying surface imagery becomes increasingly desirable. In this paper a Neural Network is used to recognize classes of crevasse types over the Bering Bagley Glacier System (BBGS) during a surge (2011-2013-?). A surge is a spatially and temporally highly variable and rapid acceleration of the glacier. Therefore, many different crevasse types occur in a short time frame and in close proximity, and these crevasse fields hold information on the geophysical processes of the surge. The connectionist-geostatistical approach uses directional experimental (discrete) variograms to parameterize images into a form that the Neural Network can recognize. Recognizing that each surge wave results in different crevasse types and that environmental conditions affect the appearance in imagery, we have developed semi-automated pre-training software to adapt the Neural Net to changing conditions. The method is applied to airborne and satellite imagery to classify surge crevasses from the BBGS surge. This method works well for classifying spatially repetitive images such as the crevasses over Bering Glacier. We expand the network for less repetitive images in order to analyze imagery collected over the Arctic sea ice, to assess the percentage of deformed ice for model calibration.
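    The parameterization step, turning an image into directional experimental variogram values that a network can digest, might look like the sketch below; the feature scheme, lags, and synthetic 'crevasse' image are assumptions.

```python
import numpy as np

def directional_variogram(img, lags, axis):
    """Experimental variogram of a grey-level image along one axis
    (axis=0: vertical, axis=1: horizontal), evaluated at several lags.
    The resulting vector is the kind of feature that could parameterize
    crevasse imagery for a neural-network classifier."""
    gam = []
    for h in lags:
        diff = img[h:, :] - img[:-h, :] if axis == 0 else img[:, h:] - img[:, :-h]
        gam.append(0.5 * np.mean(diff ** 2))
    return np.array(gam)

rng = np.random.default_rng(10)
img = rng.normal(size=(64, 64))
img[:, ::8] += 3.0                      # synthetic periodic "crevasse" bands
feats = np.concatenate([directional_variogram(img, [1, 2, 4, 8], ax)
                        for ax in (0, 1)])
print(feats.round(2))
```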

  10. UNCERT: geostatistics, uncertainty analysis and visualization software applied to groundwater flow and contaminant transport modeling

    International Nuclear Information System (INIS)

    Wingle, W.L.; Poeter, E.P.; McKenna, S.A.

    1999-01-01

    UNCERT is a 2D and 3D geostatistics, uncertainty analysis and visualization software package applied to ground water flow and contaminant transport modeling. It is a collection of modules that provides tools for linear regression, univariate statistics, semivariogram analysis, inverse-distance gridding, trend-surface analysis, simple and ordinary kriging and discrete conditional indicator simulation. Graphical user interfaces for MODFLOW and MT3D, ground water flow and contaminant transport models, are provided for streamlined data input and result analysis. Visualization tools are included for displaying data input and output. These include, but are not limited to, 2D and 3D scatter plots, histograms, box and whisker plots, 2D contour maps, surface renderings of 2D gridded data and 3D views of gridded data. By design, UNCERT's graphical user interface and visualization tools facilitate model design and analysis. There are few built in restrictions on data set sizes and each module (with two exceptions) can be run in either graphical or batch mode. UNCERT is in the public domain and is available from the World Wide Web with complete on-line and printable (PDF) documentation. UNCERT is written in ANSI-C with a small amount of FORTRAN77, for UNIX workstations running X-Windows and Motif (or Lesstif). This article discusses the features of each module and demonstrates how they can be used individually and in combination. The tools are applicable to a wide range of fields and are currently used by researchers in the ground water, mining, mathematics, chemistry and geophysics, to name a few disciplines. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  11. GEOSTATISTICAL BASED SUSCEPTIBILITY MAPPING OF SOIL EROSION AND OPTIMIZATION OF ITS CAUSATIVE FACTORS: A CONCEPTUAL FRAMEWORK

    Directory of Open Access Journals (Sweden)

    ABDULKADIR T. SHOLAGBERU

    2017-11-01

    Full Text Available Soil erosion hazard is the second biggest environmental challenge after population growth, causing land degradation, desertification and water deterioration. Its impacts on watersheds include loss of soil nutrients and reduced reservoir capacity through siltation, which may lead to flood risk, landslides, high water turbidity, etc. These problems become more pronounced in human-altered mountainous areas through intensive agricultural activities, deforestation and increased urbanization, among others. However, due to the challenging nature of soil erosion management, there is great interest in assessing its spatial distribution and susceptibility levels. This study thus intends to review the recent literature and develop a novel framework for soil erosion susceptibility mapping using geostatistically based support vector machine (SVM), remote sensing and GIS techniques. The conceptual framework is to bridge the identified knowledge gaps in the area of causative factors' (CFs) selection. In this research, the RUSLE model, field studies and the existing soil erosion maps for the study area will be integrated for the development of the inventory map. Spatial data such as Landsat 8 imagery, digital soil and geological maps, a digital elevation model and hydrological data shall be processed for the extraction of erosion CFs. GIS-based SVM techniques will be adopted for the establishment of spatial relationships between soil erosion and its CFs, and subsequently for the development of erosion susceptibility maps. The results of this study include evaluation of the predictive capability of GIS-based SVM in soil erosion mapping and identification of the most influential CFs for erosion susceptibility assessment. This study will serve as a guide to watershed planners and help alleviate soil erosion challenges and related hazards.
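    A hedged sketch of the proposed GIS-based SVM step using scikit-learn: each pixel carries causative-factor values as features and an inventory label as target, and the fitted probability of the eroded class becomes its susceptibility score. The features, data, and hyperparameters below are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-pixel causative factors extracted from GIS layers,
# e.g. slope, NDVI, drainage density, land-use code (values simulated).
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4))                 # 500 labelled pixels
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)
# y = 1: eroded pixel from the inventory map, 0: stable pixel

model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", C=1.0, probability=True))
model.fit(X, y)

# Susceptibility = predicted probability of the "eroded" class, which
# would then be mapped back onto the raster grid for the final map.
X_new = rng.normal(size=(3, 4))
print(model.predict_proba(X_new)[:, 1])
```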

  12. A Geostatistical Toolset for Reconstructing Louisiana's Coastal Stratigraphy using Subsurface Boring and Cone Penetrometer Test Data

    Science.gov (United States)

    Li, A.; Tsai, F. T. C.; Jafari, N.; Chen, Q. J.; Bentley, S. J.

    2017-12-01

    A vast area of river deltaic wetlands stretches across the southern Louisiana coast. The wetlands are suffering from a high rate of land loss, which increasingly threatens coastal communities and energy infrastructure. A regional stratigraphic framework of the delta plain is now imperative to answer scientific questions (such as how the delta plain grows and decays) and to provide information to coastal protection and restoration projects (such as marsh creation and construction of levees and floodwalls). Over the years, subsurface investigations in Louisiana have been conducted by state and federal agencies (Louisiana Department of Natural Resources, United States Geological Survey, United States Army Corps of Engineers, etc.), research institutes (Louisiana Geological Survey, LSU Coastal Studies Institute, etc.), engineering firms, and oil-gas companies. This has resulted in the availability of various types of data, including geological, geotechnical, and geophysical data. However, it is challenging to integrate the different types of data and construct three-dimensional stratigraphy models at the regional scale. In this study, a set of geostatistical methods was used to tackle this problem. An ordinary kriging method was used to regionalize continuous data, such as grain size, water content, liquid limit, plasticity index, and cone penetrometer tests (CPTs). Indicator kriging and multiple indicator kriging methods were used to regionalize categorized data, such as soil classification. A compositional kriging method was used to regionalize compositional data, such as soil composition (fractions of sand, silt and clay). Stratigraphy models were constructed for three cases in the coastal zone: (1) Inner Harbor Navigation Canal (IHNC) area: soil classification and soil behavior type (SBT) stratigraphies were constructed using ordinary kriging; (2) Middle Barataria Bay area: a soil classification stratigraphy was constructed using multiple indicator kriging; (3) Lower Barataria
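    One common way to honor the constant-sum constraint when interpolating soil composition is to krige log-ratio transforms of the fractions and back-transform the results; whether the compositional kriging used in this study follows exactly this route is an assumption. A minimal additive log-ratio (alr) transform pair:

```python
import numpy as np

def alr(comp):
    """Additive log-ratio transform of (sand, silt, clay) fractions,
    using clay as the denominator part."""
    comp = np.asarray(comp, dtype=float)
    return np.log(comp[:, :2] / comp[:, 2:3])

def alr_inverse(z):
    """Back-transform interpolated alr values to fractions summing to 1."""
    e = np.exp(np.column_stack([z, np.zeros(len(z))]))
    return e / e.sum(axis=1, keepdims=True)

fractions = np.array([[0.5, 0.3, 0.2],
                      [0.2, 0.5, 0.3]])
z = alr(fractions)            # krige each alr component separately ...
print(alr_inverse(z))         # ... then back-transform; recovers inputs here
```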

  13. Development of A Bayesian Geostatistical Data Assimilation Method and Application to the Hanford 300 Area

    Science.gov (United States)

    Murakami, Haruko

    Probabilistic risk assessment of groundwater contamination requires us to incorporate large and diverse datasets at the site into the stochastic modeling of flow and transport for prediction. In quantifying the uncertainty in our predictions, we must not only combine the best estimates of the parameters based on each dataset, but also integrate the uncertainty associated with each dataset caused by measurement errors and limited number of measurements. This dissertation presents a Bayesian geostatistical data assimilation method that integrates various types of field data for characterizing heterogeneous hydrological properties. It quantifies the parameter uncertainty as a posterior distribution conditioned on all the datasets, which can be directly used in stochastic simulations to compute possible outcomes of flow and transport processes. The goal of this framework is to remove the discontinuity between data analysis and prediction. Such a direct connection between data and prediction also makes it possible to evaluate the worth of each dataset or combined worth of multiple datasets. The synthetic studies described here confirm that the data assimilation method introduced in this dissertation successfully captures the true parameter values and predicted values within the posterior distribution. The shape of the inferred posterior distributions from the method indicates the importance of estimating the entire distribution in fully accounting for parameter uncertainty. The method is then applied to integrate multiple types of datasets at the Hanford 300 Area for characterizing a three-dimensional heterogeneous hydraulic conductivity field. Comparing the results based on the different numbers or combinations of datasets shows that increasing data do not always contribute in a straightforward way to improving the posterior distribution: increasing numbers of the same data type would not necessarily be beneficial above a certain number, and also the combined effect of
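    The flavor of Bayesian updating, and of measuring a dataset's worth by the uncertainty it removes, can be shown with a toy conjugate normal model for a single parameter. The datasets and variances below are invented; the dissertation's actual geostatistical posterior is far richer.

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_var, data, noise_var):
    """Conjugate normal update for one parameter (e.g., log hydraulic
    conductivity of a zone): each dataset tightens the posterior."""
    post_prec = 1.0 / prior_var + len(data) / noise_var
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

mean, var = -5.0, 4.0                      # prior on log10 K (assumed)
for dataset, noise in [(np.array([-4.2, -4.5]), 0.5),        # e.g., slug tests
                       (np.array([-4.0, -4.1, -3.9]), 0.2)]: # e.g., flowmeter
    mean, var = gaussian_posterior(mean, var, dataset, noise)
    # The variance reduction after each step is one simple measure
    # of the "worth" of that dataset.
    print(f"posterior mean={mean:.2f}, var={var:.3f}")
```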

  14. Analysis of vadose zone tritium transport from an underground storage tank release using numerical modeling and geostatistics

    Energy Technology Data Exchange (ETDEWEB)

    Lee, K.H.

    1997-09-01

    Numerical and geostatistical analyses show that the artificial smoothing effect of kriging removes high-permeability flow paths from hydrogeologic data sets, reducing simulated contaminant transport rates in heterogeneous vadose zone systems. Therefore, kriging alone is not recommended for estimating the spatial distribution of soil hydraulic properties for contaminant transport analysis at vadose zone sites. Vadose zone transport is modeled more effectively by combining kriging with stochastic simulation to better represent the high degree of spatial variability usually found in the hydraulic properties of field soils. However, kriging is a viable technique for estimating the initial mass distribution of contaminants in the subsurface.
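    The smoothing effect described above, and the stochastic-simulation remedy, can be demonstrated in a few lines. The sketch below uses synthetic 1-D data and an arbitrarily chosen exponential covariance (plain numpy, nothing from the report): it computes a simple-kriging estimate and one conditional simulation, and the kriged field shows visibly lower variance than the simulated one.

        import numpy as np

        rng = np.random.default_rng(0)
        cov = lambda h: np.exp(-np.abs(h) / 10.0)      # unit-sill exponential covariance

        x_obs = np.array([2.0, 15.0, 33.0, 47.0])      # observation locations
        z_obs = np.array([0.4, -1.1, 0.9, 0.2])        # observed values (zero-mean field)
        x_grid = np.linspace(0.0, 50.0, 101)

        C_oo = cov(x_obs[:, None] - x_obs[None, :])    # obs-obs covariance
        C_go = cov(x_grid[:, None] - x_obs[None, :])   # grid-obs covariance
        C_gg = cov(x_grid[:, None] - x_grid[None, :])  # grid-grid covariance

        z_sk = C_go @ np.linalg.solve(C_oo, z_obs)     # smooth simple-kriging estimate
        C_post = C_gg - C_go @ np.linalg.solve(C_oo, C_go.T)

        # conditional simulation: kriged mean plus a correlated random residual
        L = np.linalg.cholesky(C_post + 1e-8 * np.eye(x_grid.size))
        z_sim = z_sk + L @ rng.standard_normal(x_grid.size)

        print(z_sk.var(), z_sim.var())  # the simulation retains the modeled variability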

  15. Geostatistical Methods For Determination of Roughness, Topography, And Changes of Antarctic Ice Streams From SAR And Radar Altimeter Data

    Science.gov (United States)

    Herzfeld, Ute C.

    2002-01-01

    The central objective of this project has been the development of geostatistical methods for mapping elevation and ice surface characteristics from satellite radar altimeter (RA) and Synthetic Aperture Radar (SAR) data. The main results are an atlas of elevation maps of Antarctica from GEOSAT RA data and an atlas from ERS-1 RA data, together including about 200 maps with 3 km grid resolution. Maps and digital terrain models are applied to monitor and study changes in Antarctic ice streams and glaciers, including Lambert Glacier/Amery Ice Shelf, Mertz and Ninnis Glaciers, Jutulstraumen Glacier, Fimbul Ice Shelf, Slessor Glacier, Williamson Glacier and others.

  16. Accounting for non-stationary variance in geostatistical mapping of soil properties

    NARCIS (Netherlands)

    Wadoux, Alexandre M.J.C.; Brus, Dick J.; Heuvelink, Gerard B.M.

    2018-01-01

    Simple and ordinary kriging assume a constant mean and variance of the soil variable of interest. This assumption is often implausible because the mean and/or variance are linked to terrain attributes, parent material or other soil forming factors. In kriging with external drift (KED)

  17. THE COMPLEX OF ASSUMPTION CATHEDRAL OF THE ASTRAKHAN KREMLIN

    Directory of Open Access Journals (Sweden)

    Savenkova Aleksandra Igorevna

    2016-08-01

    Full Text Available This article is devoted to an architectural and historical analysis of the constructions forming the complex of the Assumption Cathedral of the Astrakhan Kremlin, which has not previously been the subject of special research. Based on archival sources, photographic materials, publications and on-site investigations of the monuments, the article considers the creation history of the complete architectural complex, sustained in a single style of the Muscovite baroque and unique in its composite construction, and offers an interpretation of it in the all-Russian architectural context. Typological features of the individual constructions come to light. The Prechistinsky bell tower has an untypical architectural solution: a hexagonal structure on octagonal and quadrangular structures. The way of connecting the building of the Cathedral and the chambers by a passage was characteristic of monastic constructions and was exceedingly rare in kremlins, farmsteads and ensembles of city cathedrals. The composite scheme of the Assumption Cathedral includes the Lobnoye Mesto ("the Place of Execution") located on an axis from the west, connected with the main building by a quarter-turn stair with a landing. The only prototype of the structure is the Lobnoye Mesto on Red Square in Moscow. The article also considers the version that the Place of Execution emerged on the basis of an earlier construction, a tower called "the Peal", which is repeatedly mentioned in written sources in connection with S. Razin's revolt. The metropolitan Sampson, trying to preserve the standing of the Astrakhan metropolitanate, built the Assumption Cathedral and the Place of Execution in direct appeal to a capital prototype, to emphasize continuity and close connection with Moscow.

  18. HYPROLOG: A New Logic Programming Language with Assumptions and Abduction

    DEFF Research Database (Denmark)

    Christiansen, Henning; Dahl, Veronica

    2005-01-01

    . The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraints solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together...... with the grammar notation provided by the underlying Prolog system. An operational semantics is given which complies with standard declarative semantics for the ``pure'' sublanguages, while for the full HYPROLOG language, it must be taken as definition. The implementation is straightforward and seems to provide...

  19. Radiation hormesis and the linear-no-threshold assumption

    CERN Document Server

    Sanders, Charles L

    2009-01-01

    Current radiation protection standards are based upon the application of the linear no-threshold (LNT) assumption, which considers that even very low doses of ionizing radiation can cause cancer. The radiation hormesis hypothesis, by contrast, proposes that low-dose ionizing radiation is beneficial. In this book, the author examines all facets of radiation hormesis in detail, including the history of the concept and mechanisms, and presents comprehensive, up-to-date reviews for major cancer types. It is explained how low-dose radiation can in fact decrease all-cause and all-cancer mortality an

  20. The extended evolutionary synthesis: its structure, assumptions and predictions.

    Science.gov (United States)

    Laland, Kevin N; Uller, Tobias; Feldman, Marcus W; Sterelny, Kim; Müller, Gerd B; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-08-22

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the 'extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism-environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. © 2015 The Author(s).

  1. Halo-Independent Direct Detection Analyses Without Mass Assumptions

    CERN Document Server

    Anderson, Adam J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the $m_\\chi-\\sigma_n$ plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the $v_{min}-\\tilde{g}$ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from $v_{min}$ to nuclear recoil momentum ($p_R$), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call $\\tilde{h}(p_R)$. The entire family of conventional halo-independent $\\tilde{g}(v_{min})$ plots for all DM masses are directly found from the single $\\tilde{h}(p_R)$ plot through a simple re...

  2. Basic concepts and assumptions behind the new ICRP recommendations

    International Nuclear Information System (INIS)

    Lindell, B.

    1979-01-01

    A review is given of some of the basic concepts and assumptions behind the current recommendations by the International Commission on Radiological Protection in ICRP Publications 26 and 28, which form the basis for the revision of the Basic Safety Standards jointly undertaken by IAEA, ILO, NEA and WHO. Special attention is given to the assumption of a linear, non-threshold dose-response relationship for stochastic radiation effects such as cancer and hereditary harm. The three basic principles of protection are discussed: justification of practice, optimization of protection and individual risk limitation. In the new ICRP recommendations particular emphasis is given to the principle of keeping all radiation doses as low as is reasonably achievable. A consequence of this is that the ICRP dose limits are now given as boundary conditions for the justification and optimization procedures rather than as values that should be used for purposes of planning and design. The fractional increase in total risk at various ages after continuous exposure near the dose limits is given as an illustration. The need for taking other sources, present and future, into account when applying the dose limits leads to the use of the commitment concept. This is briefly discussed as well as the new quantity, the effective dose equivalent, introduced by ICRP. (author)

  3. The contour method cutting assumption: error minimization and correction

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Kastengren, Alan L [ANL

    2010-01-01

    The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented and the important parameters are quantified. Experimental procedures for minimizing these errors are presented, along with an iterative finite element procedure to correct for the errors. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to impose a known profile of residual stresses.

  4. The extended evolutionary synthesis: its structure, assumptions and predictions

    Science.gov (United States)

    Laland, Kevin N.; Uller, Tobias; Feldman, Marcus W.; Sterelny, Kim; Müller, Gerd B.; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-01-01

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the ‘extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism–environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. PMID:26246559

  5. DDH-like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2011-01-01

    We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of F_{q} ``in the exponent'' of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring R_f= \\F...... DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X)= X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups...... and that in fact the problems become harder with increasing d and hence form an infinite hierarchy. We show that hardness of VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption, a strong form of leakage resilience. This can be seen...

  6. DDH-Like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2012-01-01

    and security proof but get better security and moreover, the amortized complexity (e.g, computation per encrypted bit) is the same as when using DDH. We also show that d-DDH, just like DDH, is easy in bilinear groups. We therefore suggest a different type of assumption, the d-vector DDH problems (d......We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of $\\mathbb{F}_{q}$ “in the exponent” of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring $R......-VDDH), which are based on f(X) = Xd, but with a twist to avoid problems with reducible polynomials. We show in the generic group model that d-VDDH is hard in bilinear groups and that the problems become harder with increasing d. We show that hardness of d-VDDH implies CCA-secure encryption, efficient Naor...

  7. Estimation of cold extremes and the identical distribution assumption

    Science.gov (United States)

    Parey, Sylvie

    2016-04-01

    Extreme, generally unobserved, values of meteorological (or other) hazards are estimated from observed time series by applying statistical extreme value theory. This theory is based on the essential assumption that the events are independent and identically distributed. This assumption is generally not verified for meteorological hazards, firstly because these phenomena are seasonal, and secondly because climate change may induce temporal trends. These issues can be dealt with by selecting the season of occurrence or by handling trends in the extreme distribution parameters, for example. When recently updating extreme cold temperatures, we faced several rather new difficulties: the threshold choice when applying the Peak Over Threshold (POT) approach happened to be exceptionally difficult, and when applying block maxima, different block sizes could lead to significantly different return levels. A more detailed analysis of the exceedances of different cold thresholds showed that, as the threshold becomes more extreme, the exceedances are not identically distributed across the years. This behaviour could be related to the preferred phase of the North Atlantic Oscillation (NAO) during each winter, and the return level estimation has then been based on sub-sampling between negative and positive NAO winters. The approach and the return level estimation from the sub-samples will be illustrated with an example.

  8. Economic Growth Assumptions in Climate and Energy Policy

    Directory of Open Access Journals (Sweden)

    Nir Y. Krakauer

    2014-03-01

    Full Text Available The assumption that the economic growth seen in recent decades will continue has dominated the discussion of future greenhouse gas emissions and the mitigation of and adaptation to climate change. Given that long-term economic growth is uncertain, the impacts of a wide range of growth trajectories should be considered. In particular, slower economic growth would imply that future generations will be relatively less able to invest in emissions controls or adapt to the detrimental impacts of climate change. Taking into consideration the possibility of economic slowdown therefore heightens the urgency of reducing greenhouse gas emissions now by moving to renewable energy sources, even if this incurs short-term economic cost. I quantify this counterintuitive impact of economic growth assumptions on present-day policy decisions in a simple global economy-climate model, the Dynamic Integrated model of Climate and the Economy (DICE). In DICE, slow future growth increases the economically optimal present-day carbon tax rate and the utility of taxing carbon emissions, although the magnitude of the increase is sensitive to model parameters, including the rate of social time preference and the elasticity of the marginal utility of consumption. Future scenario development should specifically include low-growth scenarios, and the possibility of low-growth economic trajectories should be taken into account in climate policy analyses.
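    The discounting mechanism behind this result can be illustrated with the Ramsey rule r = ρ + ηg, which links the consumption discount rate to growth. The parameter values below are DICE-like but illustrative, not taken from the article:

        rho, eta = 0.015, 1.45           # time preference and consumption elasticity
        damage = 1.0                     # normalized climate damage 100 years ahead

        for g in (0.025, 0.010, 0.000):  # fast, slow, and zero per-capita growth
            r = rho + eta * g            # Ramsey consumption discount rate
            pv = damage / (1.0 + r) ** 100
            print(f"g={g:.3f}  r={r:.4f}  present value of damage={pv:.3f}")

    Slower growth lowers r, raises the present value of future damages, and hence raises the weight an optimal present-day carbon tax places on them.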

  9. Bogen's Critique of Linear-No-Threshold Default Assumptions.

    Science.gov (United States)

    Crump, Kenny S

    2017-10-01

    In an article recently published in this journal, Bogen (1) concluded that an NRC committee's recommendations that default linear, nonthreshold (LNT) assumptions be applied to dose-response assessment for noncarcinogens and nonlinear mode of action carcinogens are not justified. Bogen criticized two arguments used by the committee for LNT: when any new dose adds to a background dose that explains background levels of risk (additivity to background or AB), or when there is substantial interindividual heterogeneity in susceptibility (SIH) in the exposed human population. Bogen showed by examples that SIH can be false. Herein is outlined a general proof that confirms Bogen's claim. However, it is also noted that SIH leads to a nonthreshold population distribution even if individual distributions all have thresholds, and that small changes to SIH assumptions can result in LNT. Bogen criticizes AB because it only applies when there is additivity to background, but offers no help in deciding when or how often AB holds. Bogen does not contradict the fact that AB can lead to LNT but notes that, even if low-dose linearity results, the response at higher doses may not be useful in predicting the amount of low-dose linearity. Although this is theoretically true, it seems reasonable to assume that generally there is some quantitative relationship between the low-dose slope and the slope suggested at higher doses. Several incorrect or misleading statements by Bogen are noted. © 2016 Society for Risk Analysis.

  10. Assessment of nitrate pollution in the Grand Morin aquifers (France): Combined use of geostatistics and physically based modeling

    Energy Technology Data Exchange (ETDEWEB)

    Flipo, Nicolas [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France)]. E-mail: nicolas.flipo@ensmp.fr; Jeannee, Nicolas [Geovariances, 49 bis, avenue Franklin Roosevelt, F-77212 Avon (France); Poulin, Michel [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France); Even, Stephanie [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France); Ledoux, Emmanuel [Centre de Geosciences, UMR Sisyphe, ENSMP, 35 rue Saint-Honore, F-77305 Fontainebleau (France)

    2007-03-15

    The objective of this work is to combine several approaches to better understand nitrate fate in the Grand Morin aquifers (2700 km²), part of the Seine basin. CAWAQS results from the coupling of the hydrogeological model NEWSAM with the hydrodynamic and biogeochemical river model PROSE. CAWAQS is coupled with the agronomic model STICS in order to simulate nitrate migration in basins. First, kriging provides a satisfactory representation of aquifer nitrate contamination from local observations, to set initial conditions for the physically based model. Then associated confidence intervals, derived from the data using geostatistics, are used to validate CAWAQS results. Results and evaluation obtained from the combination of these approaches are given (period 1977-1988). Then CAWAQS is used to simulate nitrate fate for a 20-year period (1977-1996). The mean nitrate concentration increase in aquifers is 0.09 mgN L⁻¹ yr⁻¹, resulting from an average infiltration flux of 3500 kgN km⁻² yr⁻¹. - Combined use of geostatistics and physically based modeling allows assessment of nitrate concentrations in aquifer systems.

  11. Assessment of nitrate pollution in the Grand Morin aquifers (France): Combined use of geostatistics and physically based modeling

    International Nuclear Information System (INIS)

    Flipo, Nicolas; Jeannee, Nicolas; Poulin, Michel; Even, Stephanie; Ledoux, Emmanuel

    2007-01-01

    The objective of this work is to combine several approaches to better understand nitrate fate in the Grand Morin aquifers (2700 km²), part of the Seine basin. CAWAQS results from the coupling of the hydrogeological model NEWSAM with the hydrodynamic and biogeochemical river model PROSE. CAWAQS is coupled with the agronomic model STICS in order to simulate nitrate migration in basins. First, kriging provides a satisfactory representation of aquifer nitrate contamination from local observations, to set initial conditions for the physically based model. Then associated confidence intervals, derived from the data using geostatistics, are used to validate CAWAQS results. Results and evaluation obtained from the combination of these approaches are given (period 1977-1988). Then CAWAQS is used to simulate nitrate fate for a 20-year period (1977-1996). The mean nitrate concentration increase in aquifers is 0.09 mgN L⁻¹ yr⁻¹, resulting from an average infiltration flux of 3500 kgN km⁻² yr⁻¹. - Combined use of geostatistics and physically based modeling allows assessment of nitrate concentrations in aquifer systems

  12. DEM-based delineation for improving geostatistical interpolation of rainfall in mountainous region of Central Himalayas, India

    Science.gov (United States)

    Kumari, Madhuri; Singh, Chander Kumar; Bakimchandra, Oinam; Basistha, Ashoke

    2017-10-01

    In mountainous regions with heterogeneous topography, geostatistical modeling of rainfall using a global data set may not conform to the intrinsic hypothesis of stationarity. This study focused on improving the precision of interpolated rainfall maps by spatial stratification in complex terrain. Predictions of normal annual rainfall were carried out by ordinary kriging, universal kriging, and co-kriging, using 80 point observations in the Indian Himalayas extending over an area of 53,484 km². A two-step spatial clustering approach is proposed. In the first step, the study area was delineated into two regions, lowland and upland, based on elevation derived from the digital elevation model; the delineation used the natural-break classification method. In the next step, the rainfall data were clustered into two groups based on their spatial location in the lowland or upland. The terrain ruggedness index (TRI) was incorporated as a co-variable in the co-kriging interpolation algorithm. The precision of the kriged and co-kriged maps was assessed by two accuracy measures, root mean square error and Chatfield's percent better. It was observed that the stratification of rainfall data resulted in a 5-20% increase in the performance efficiency of the interpolation methods. Co-kriging outperformed the kriging models at annual and seasonal scales. The result illustrates that stratification of the study area improves the stationarity characteristic of the point data, thus enhancing the precision of interpolated rainfall maps derived using geostatistical methods.
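    The two-step stratification can be sketched as follows: split the gauges by an elevation break and krige each stratum separately. PyKrige is assumed for the interpolation; the fixed 1500 m break stands in for the natural-break classification, and all station data are synthetic.

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        rng = np.random.default_rng(1)
        x, y = rng.uniform(0, 100, 80), rng.uniform(0, 100, 80)  # station coords (km)
        elev = rng.uniform(200, 3000, 80)                        # station elevations (m)
        rain = 800.0 + 0.4 * elev + rng.normal(0, 50, 80)        # synthetic annual rainfall (mm)

        break_m = 1500.0   # stand-in for the DEM-derived natural break
        gx, gy = np.linspace(0, 100, 40), np.linspace(0, 100, 40)
        fields = {}
        for name, mask in (("lowland", elev < break_m), ("upland", elev >= break_m)):
            ok = OrdinaryKriging(x[mask], y[mask], rain[mask], variogram_model="spherical")
            fields[name], _ = ok.execute("grid", gx, gy)  # per-stratum rainfall surface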

  13. Geographical distribution of the annual mean radon concentrations in primary schools of Southern Serbia – application of geostatistical methods

    International Nuclear Information System (INIS)

    Bossew, P.; Žunić, Z.S.; Stojanovska, Z.; Tollefsen, T.; Carpentieri, C.; Veselinović, N.; Komatina, S.; Vaupotič, J.; Simović, R.D.; Antignani, S.; Bochicchio, F.

    2014-01-01

    Between 2008 and 2011 a survey of radon (²²²Rn) was performed in schools of several districts of Southern Serbia. Some results have been published previously (Žunić et al., 2010; Carpentieri et al., 2011; Žunić et al., 2013). This article concentrates on the geographical distribution of the measured Rn concentrations. Applying geostatistical methods, we generate "school radon maps" of expected concentrations and of estimated probabilities that a concentration threshold is exceeded. The resulting maps show a clearly structured spatial pattern which appears related to the geological background. In particular, in areas with volcanic and granitoid rocks, elevated radon concentrations can be expected. The "school radon map" can therefore be considered a proxy to a map of the geogenic radon potential, and allows identification of radon-prone areas, i.e. areas in which higher Rn concentrations can be expected for natural reasons. It must be stressed that the "radon hazard", or potential risk, estimated this way has to be distinguished from the actual radon risk, which is a function of exposure. This in turn may require (depending on the target variable which is supposed to measure risk) considering demographic and sociological reality, i.e. population density, distribution of building styles and living habits. -- Highlights: • A map of Rn concentrations in primary schools of Southern Serbia. • Application of geostatistical methods. • Correlation with geology found. • Can serve as proxy to identify radon-prone areas
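    A minimal indicator-kriging sketch of the exceedance-probability mapping described above: radon values are coded 1 where they exceed a threshold, the indicators are kriged, and the kriged surface is read as P(Rn > threshold). PyKrige is assumed; the 300 Bq/m³ threshold and all data are illustrative, not the survey's.

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        rng = np.random.default_rng(2)
        x, y = rng.uniform(0, 50, 60), rng.uniform(0, 50, 60)  # school locations (km)
        radon = rng.lognormal(mean=4.5, sigma=0.8, size=60)    # synthetic Rn (Bq/m3)

        threshold = 300.0                                      # illustrative reference level
        indicator = (radon > threshold).astype(float)          # indicator transform

        ok = OrdinaryKriging(x, y, indicator, variogram_model="spherical")
        gx, gy = np.linspace(0, 50, 25), np.linspace(0, 50, 25)
        prob, _ = ok.execute("grid", gx, gy)
        prob = np.clip(prob, 0.0, 1.0)  # kriged indicators approximate P(Rn > threshold)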

  14. Geostatistical characterization of the Callovo-Oxfordian clay variability: from conventional and high resolution log data

    International Nuclear Information System (INIS)

    Lefranc, Marie

    2007-01-01

    Andra (the French National Radioactive Waste Management Agency) has conducted studies in its Meuse/Haute-Marne Underground Research Laboratory, located at a depth of about 490 m in a 155-million-year-old argillaceous rock: the Callovo-Oxfordian argillite. The purpose of the present work is to obtain as much information as possible from high-resolution log data and to optimize their analysis to specify and characterize space-time variations of the argillites of the Meuse/Haute-Marne site, and subsequently to predict the evolution of argillite properties over a 250 km² zone around the underground laboratory (the transposition zone). The aim is to outline a methodology to transform depth intervals into geological time intervals and thus to quantify precisely the variation in sedimentation rate and to estimate durations, for example the duration of biostratigraphical units or of hiatuses. The latter point is particularly important because a continuous time recording is often assumed in geological modelling. The spatial variations can be studied on various scales. First, well-to-well correlations are established between seven wells at different scales; relative variations of thickness are observed locally. Second, FMI (Full-bore Formation Micro-Imager, Schlumberger) data are studied in detail to extract as much information as possible. For example, the analysis of FMI images reveals a clear carbonate-clay inter-bedding which displays cycles. Third, geostatistical tools are used to study these cycles. The variographic analysis of conventional log data shows one-metre cycles. With FMI data, smaller periods can be detected. Variogram modelling and factorial kriging analysis suggest that three spatial periods exist. They vary vertically and laterally in the boreholes, but the cycle ratios are stable and similar to orbital-cycle ratios (Milankovitch cycles): the three periods correspond to eccentricity, obliquity and precession. Since the duration of these orbital cycles is known, depth intervals can

  15. Regional-scale geostatistical inverse modeling of North American CO2 fluxes: a synthetic data study

    Directory of Open Access Journals (Sweden)

    A. M. Michalak

    2010-07-01

    Full Text Available A series of synthetic data experiments is performed to investigate the ability of a regional atmospheric inversion to estimate grid-scale CO2 fluxes during the growing season over North America. The inversions are performed within a geostatistical framework without the use of any prior flux estimates or auxiliary variables, in order to focus on the atmospheric constraint provided by the nine towers collecting continuous, calibrated CO2 measurements in 2004. Using synthetic measurements and their associated concentration footprints, flux and model-data mismatch covariance parameters are first optimized, and then fluxes and their uncertainties are estimated at three different temporal resolutions. These temporal resolutions, which include a four-day average, a four-day-average diurnal cycle with 3-hourly increments, and 3-hourly fluxes, are chosen to help assess the impact of temporal aggregation errors on the estimated fluxes and covariance parameters. Estimating fluxes at a temporal resolution that can adjust the diurnal variability is found to be critical both for recovering covariance parameters directly from the atmospheric data, and for inferring accurate ecoregion-scale fluxes. Accounting for both spatial and temporal a priori covariance in the flux distribution is also found to be necessary for recovering accurate a posteriori uncertainty bounds on the estimated fluxes. Overall, the results suggest that even a fairly sparse network of 9 towers collecting continuous CO2 measurements across the continent, used with no auxiliary information or prior estimates of the flux distribution in time or space, can be used to infer relatively accurate monthly ecoregion scale CO2 surface fluxes over North America within estimated uncertainty bounds. Simulated random transport error is shown to decrease the quality of flux estimates in under-constrained areas at the ecoregion scale, although the uncertainty bounds remain realistic. While these synthetic
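    The core estimate of such a geostatistical inversion can be written compactly: minimize (y - Hs)' R⁻¹ (y - Hs) + (s - Xβ)' Q⁻¹ (s - Xβ) over fluxes s and trend coefficients β. The numpy sketch below solves the standard cokriging system for a tiny synthetic problem; H, Q, R and the data are stand-ins, not the study's transport model or tower observations.

        import numpy as np

        rng = np.random.default_rng(3)
        m, n = 12, 5                          # unknown fluxes, tower observations
        H = rng.random((n, m))                # transport (footprint) sensitivities
        idx = np.arange(m)
        Q = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)  # prior flux covariance
        R = 0.1 * np.eye(n)                   # model-data mismatch covariance
        X = np.ones((m, 1))                   # trend: a single unknown mean flux

        s_true = rng.normal(1.0, 0.5, m)
        y = H @ s_true + rng.multivariate_normal(np.zeros(n), R)

        # cokriging system: [HQH'+R  HX; (HX)'  0] [xi; beta] = [y; 0]
        Psi, HX = H @ Q @ H.T + R, H @ X
        A = np.block([[Psi, HX], [HX.T, np.zeros((1, 1))]])
        sol = np.linalg.solve(A, np.concatenate([y, np.zeros(1)]))
        xi, beta = sol[:n], sol[n:]
        s_hat = X @ beta + Q @ H.T @ xi       # posterior mean flux estimate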

  16. Geostatistical modelling of soil-transmitted helminth infection in Cambodia: do socioeconomic factors improve predictions?

    Science.gov (United States)

    Karagiannis-Voules, Dimitrios-Alexios; Odermatt, Peter; Biedermann, Patricia; Khieu, Virak; Schär, Fabian; Muth, Sinuon; Utzinger, Jürg; Vounatsou, Penelope

    2015-01-01

    Soil-transmitted helminth infections are intimately connected with poverty. Yet, there is a paucity of using socioeconomic proxies in spatially explicit risk profiling. We compiled household-level socioeconomic data pertaining to sanitation, drinking-water, education and nutrition from readily available Demographic and Health Surveys, Multiple Indicator Cluster Surveys and World Health Surveys for Cambodia and aggregated the data at village level. We conducted a systematic review to identify parasitological surveys and made every effort possible to extract, georeference and upload the data in the open source Global Neglected Tropical Diseases database. Bayesian geostatistical models were employed to spatially align the village-aggregated socioeconomic predictors with the soil-transmitted helminth infection data. The risk of soil-transmitted helminth infection was predicted on a 1 × 1 km grid covering Cambodia. Additionally, two separate individual-level spatial analyses were carried out, for Takeo and Preah Vihear provinces, to assess and quantify the association between soil-transmitted helminth infection and socioeconomic indicators at an individual level. Overall, we obtained socioeconomic proxies from 1624 locations across the country. Surveys focussing on soil-transmitted helminth infections were extracted from 16 sources reporting data from 238 unique locations. We found that the risk of soil-transmitted helminth infection from 2000 onwards was considerably lower than in surveys conducted earlier. Population-adjusted prevalences for school-aged children from 2000 onwards were 28.7% for hookworm, 1.5% for Ascaris lumbricoides and 0.9% for Trichuris trichiura. Surprisingly, in the country-wide analyses, we did not find any significant association between soil-transmitted helminth infection and village-aggregated socioeconomic proxies. Based also on the individual-level analyses we conclude that socioeconomic proxies might not be good predictors at an

  17. Geochemical mapping in polluted floodplains using handheld XRF, geophysical imaging, and geostatistics

    Science.gov (United States)

    Hošek, Michal; Matys Grygar, Tomáš; Popelka, Jan; Kiss, Timea; Elznicová, Jitka; Faměra, Martin

    2017-04-01

    units. Those findings must, however, be checked by sediment examination and analysis at selected points. We processed the crucial characteristics obtained by geochemical mapping, namely the depth of maximum pollution, the amount of contamination, and lithology (Al/Si and Zr/Rb ratios), using geostatistics. Moreover, some parts of the floodplain were dated by optically stimulated luminescence (OSL), which revealed that recycling of the top decimetres of the fine floodplain fill (silts) at the Boreček site has proceeded relatively recently (within decades to centuries), as compared to the deeper-lying coarser (sandy) strata (millennia). The results of geochemical mapping show the complexity of pollution hotspots and the need for their integrated interpretation. Key words: dipole electromagnetic profiling, electrical resistivity tomography, floodplain contamination, geochemical mapping

  18. Forecasts: uncertain, inaccurate and biased?

    DEFF Research Database (Denmark)

    Nicolaisen, Morten Skou; Ambrasaite, Inga; Salling, Kim Bang

    2012-01-01

    Cost Benefit Analysis (CBA) is the dominating methodology for appraisal of transport infrastructure projects across the globe. In order to adequately assess the costs and benefits of such projects, two types of forecasts are crucial to the validity of the appraisal. First are the forecasts of cons.... It is recommended that more attention is given to monitoring completed projects so future forecasts can benefit from better data availability through systematic ex-post evaluations, and an example of how to utilize such data in practice is presented....

  19. 78 FR 42009 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-07-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... assumptions--for paying plan benefits under terminating single-employer plans covered by title IV of the... assumptions are intended to reflect current conditions in the financial and annuity markets. Assumptions under...

  20. 78 FR 11093 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-02-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... assumptions--for paying plan benefits under terminating single-employer plans covered by title IV of the... assumptions are intended to reflect current conditions in the financial and annuity markets. Assumptions under...

  1. Spatial Variability and Geostatistical Prediction of Some Soil Hydraulic Coefficients of a Calcareous Soil

    Directory of Open Access Journals (Sweden)

    Ali Akbar Moosavi

    2017-02-01

    Full Text Available Introduction: Saturated hydraulic conductivity and the other hydraulic properties of soils are vital soil attributes that play a role in the modeling of hydrological phenomena, the design of irrigation-drainage systems, and the transport of salts and of chemical and biological pollutants within the soil. Measurement of these hydraulic properties requires special instruments and expert technicians, and is time-consuming and expensive; moreover, owing to their high temporal and spatial variability, a large number of measurements are needed. Nowadays, predicting these attributes from readily available soil data using pedotransfer functions, or from limited measurements using geostatistical approaches, has received considerable attention. The study aimed to determine the spatial variability and prediction of saturated (Ks) and near-saturated (Kfs) hydraulic conductivity, the power of the Gardner equation (α), sorptivity (S), hydraulic diffusivity (D) and matric flux potential (Φm) of a calcareous soil. Material and Methods: The study was carried out on the soil series of Daneshkadeh, located in the Bajgah Agricultural Experimental Station of the Agricultural College, Shiraz University, Shiraz, Iran (1852 m above mean sea level). This soil series, covering about 745 ha, is a deep yellowish-brown calcareous soil with textural classes of loam to clay. In the studied soil series, 50 sampling locations with sampling distances of 16, 8, and 4 m were selected in a relatively regular sampling design. The saturated hydraulic conductivity (Ks), near-saturated hydraulic conductivity (Kfs), power of the Gardner equation (α), sorptivity (S), hydraulic diffusivity (D) and matric flux potential (Φm) at the aforementioned sampling locations were determined using the single-ring and droplet methods. After initial statistical processing, including a normality test, trend analysis and stationarity analysis of the data, the semivariograms of each studied hydraulic attribute were
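    Fitting a spherical model to an empirical semivariogram, the step this record ends on, can be done with an ordinary least-squares fit; the lag distances and semivariances below are made-up stand-ins for the measured hydraulic attributes.

        import numpy as np
        from scipy.optimize import curve_fit

        def spherical(h, nugget, sill, a):
            """Spherical variogram: rises to nugget+sill at range a, flat beyond."""
            h = np.asarray(h, dtype=float)
            rising = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
            return np.where(h < a, rising, nugget + sill)

        lags = np.array([4.0, 8.0, 12.0, 16.0, 24.0, 32.0, 40.0])     # lag distance (m)
        gamma = np.array([0.15, 0.28, 0.37, 0.44, 0.50, 0.52, 0.51])  # semivariance

        (nugget, sill, a), _ = curve_fit(spherical, lags, gamma,
                                         p0=[0.1, 0.4, 20.0], bounds=(0.0, np.inf))
        print(f"nugget={nugget:.3f}  sill={sill:.3f}  range={a:.1f} m")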

  2. New media in strategy – mapping assumptions in the field

    DEFF Research Database (Denmark)

    Gulbrandsen, Ib Tunby; Plesner, Ursula; Raviola, Elena

    2018-01-01

    There is plenty of empirical evidence for claiming that new media make a difference for how strategy is conceived and executed. Furthermore, there is a rapidly growing body of literature that engages with this theme and offers recommendations regarding the appropriate strategic actions in relation...... to new media. By contrast, there is relatively little attention to the assumptions behind strategic thinking in relation to new media. This article reviews the most influential strategy journals, asking how new media are conceptualized. It is shown that strategy scholars have a tendency to place themselves in either a deterministic or a voluntaristic camp with regard to technology. Strategy is portrayed either as determined by new media or as a matter of rationally using them. Additionally, most articles portray the organization as a neatly delineated entity, where new media are relevant either

  3. Commentary: profiling by appearance and assumption: beyond race and ethnicity.

    Science.gov (United States)

    Sapién, Robert E

    2010-04-01

    In this issue, Acquaviva and Mintz highlight issues regarding racial profiling in medicine and how it is perpetuated through medical education: physicians are taught to make subjective determinations of race and/or ethnicity in case presentations, and such assumptions may affect patient care. The author of this commentary believes that the discussion should be broadened to include profiling on the basis of general appearance. The author reports personal experiences as someone who has profiled and been profiled by appearance, sometimes by skin color, sometimes by other physical attributes. In the two cases detailed here, patient care could have been affected had the author not become aware of his practices in such situations. The author advocates raising awareness of profiling in the broader sense through training.

  4. HYPROLOG: A New Logic Programming Language with Assumptions and Abduction

    DEFF Research Database (Denmark)

    Christiansen, Henning; Dahl, Veronica

    2005-01-01

    . The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraints solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together...... with the grammar notation provided by the underlying Prolog system. An operational semantics is given which complies with standard declarative semantics for the ``pure'' sublanguages, while for the full HYPROLOG language, it must be taken as definition. The implementation is straightforward and seems to provide...... for abduction, the most efficient of known implementations; the price, however, is a limited use of negations. The main difference wrt. previous implementations of abduction is that we avoid any level of metainterpretation by having Prolog execute the deductive steps directly and by treating abducibles (and

  5. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  6. Deconstructing Community for Conservation: Why Simple Assumptions are Not Sufficient.

    Science.gov (United States)

    Waylen, Kerry Ann; Fischer, Anke; McGowan, Philip J K; Milner-Gulland, E J

    2013-01-01

    Many conservation policies advocate engagement with local people, but conservation practice has sometimes been criticised for a simplistic understanding of communities and social context. To counter this, this paper explores social structuring and its influences on conservation-related behaviours at the site of a conservation intervention near Pipar forest, within the Seti Khola valley, Nepal. Qualitative and quantitative data from questionnaires and Rapid Rural Appraisal demonstrate how links between groups directly and indirectly influence behaviours of conservation relevance (including existing and potential resource-use and proconservation activities). For low-status groups the harvesting of resources can be driven by others' preference for wild foods, whilst perceptions of elite benefit-capture may cause reluctance to engage with future conservation interventions. The findings reiterate the need to avoid relying on simple assumptions about 'community' in conservation, and particularly the relevance of understanding relationships between groups, in order to understand natural resource use and implications for conservation.

  7. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
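    A generic levelized-cost calculation over these parameters looks like the sketch below; the capital-recovery-factor annualization is standard, while every input number is purely illustrative and not drawn from the report's data sets.

        def lcoe(capex, fixed_om, var_om, fuel, heat_rate, cap_factor, life_yr, rate):
            """$/MWh from $/kW capex, $/kW-yr fixed O&M, $/MWh variable O&M,
            $/MMBtu fuel, Btu/kWh heat rate, capacity factor, lifetime (yr), discount rate."""
            crf = rate * (1 + rate) ** life_yr / ((1 + rate) ** life_yr - 1)
            mwh_per_kw = 8760 * cap_factor / 1000.0   # annual MWh per kW installed
            return (capex * crf + fixed_om) / mwh_per_kw + var_om + fuel * heat_rate / 1000.0

        # e.g. a combined-cycle-like plant (illustrative inputs only), in $/MWh
        print(lcoe(capex=1000, fixed_om=15, var_om=3.5, fuel=5.0,
                   heat_rate=7000, cap_factor=0.85, life_yr=30, rate=0.07))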

  8. Nonlinear dynamics in work groups with Bion's basic assumptions.

    Science.gov (United States)

    Dal Forno, Arianna; Merlone, Ugo

    2013-04-01

    According to several authors, Bion's contribution has been a landmark in the thought and conceptualization of the unconscious functioning of human beings in groups. We provide a mathematical model of group behavior in which heterogeneous members may behave as if they shared, to different degrees, what in Bion's theory is a common basic assumption. Our formalization combines both individual characteristics and group dynamics. With this formalization we analyze the group dynamics as the result of the individual dynamics of the members and prove that, under some conditions, each individual reproduces the group dynamics at a different scale. In particular, we provide an example in which the chaotic behavior of the group is reflected in each member.

  9. Unconditionally Secure and Universally Composable Commitments from Physical Assumptions

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Scafuro, Alessandra

    2013-01-01

    the usefulness of our compiler by providing two (constant-round) instantiations of ideal straight-line extractable commitment based on (malicious) PUFs [36] and stateless tamper-proof hardware tokens [26], therefore achieving the first unconditionally UC-secure commitment with malicious PUFs and stateless tokens......We present a constant-round unconditional black-box compiler that transforms any ideal (i.e., statistically-hiding and statistically-binding) straight-line extractable commitment scheme into an extractable and equivocal commitment scheme, therefore yielding UC-security [9]. We exemplify......, respectively. Our constructions are secure for adversaries creating arbitrarily malicious stateful PUFs/tokens. Previous results with malicious PUFs used either computational assumptions to achieve UC-secure commitments or were unconditionally secure but only in the indistinguishability sense [36]. Similarly

  10. Assumptions of Customer Knowledge Enablement in the Open Innovation Process

    Directory of Open Access Journals (Sweden)

    Jokubauskienė Raminta

    2017-08-01

    Full Text Available In the scientific literature, open innovation is one of the most effective means to innovate and gain a competitive advantage. In practice, there is a variety of open innovation activities, but customers nevertheless stand as the cornerstone in this area, since customers' knowledge is one of the most important sources of new knowledge and ideas. In evaluating the context in which open innovation and customer knowledge enablement interact, it is necessary to take into account the importance of customer knowledge management. It is increasingly highlighted that customer knowledge management facilitates the creation of innovations. However, other factors that influence open innovation, and at the same time customer knowledge management, should also be examined. This article presents a theoretical model which reveals the assumptions of the open innovation process and their impact on the firm's performance.

  11. Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

    International Nuclear Information System (INIS)

    Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David

    2002-01-01

    Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model
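    For readers unfamiliar with the underlying primitive, a toy two-party Diffie-Hellman exchange is sketched below. The parameters are deliberately small and there is no authentication, so this illustrates only the algebra, not the group protocol or the security model the paper analyzes.

        import secrets

        p, g = 2**127 - 1, 3                 # toy prime and base; far too small for real use

        a = secrets.randbelow(p - 2) + 2     # Alice's secret exponent
        b = secrets.randbelow(p - 2) + 2     # Bob's secret exponent
        A = pow(g, a, p)                     # Alice publishes g^a mod p
        B = pow(g, b, p)                     # Bob publishes g^b mod p

        # both sides derive the same shared secret g^(ab) mod p
        assert pow(B, a, p) == pow(A, b, p)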

  12. Breakdown of Hydrostatic Assumption in Tidal Channel with Scour Holes

    Directory of Open Access Journals (Sweden)

    Chunyan Li

    2016-10-01

    Full Text Available Hydrostatic condition is a common assumption in tidal and subtidal motions in oceans and estuaries. Theories with this assumption have been largely successful. However, there are no definite criteria separating the hydrostatic from the non-hydrostatic regimes in real applications, because real problems often have multiple scales. With the increased refinement of high-resolution numerical models encompassing smaller and smaller spatial scales, the need for non-hydrostatic models is increasing. To evaluate the vertical motion over bathymetric changes in tidal channels and assess the validity of the hydrostatic approximation, we conducted observations using a vessel-based acoustic Doppler current profiler (ADCP). Observations were made along a straight channel 18 times over two scour holes 25 m deep, separated by 330 m, in and out of an otherwise flat 8 m deep tidal pass leading to Lake Pontchartrain, over a time period of 8 hours covering part of the diurnal tidal cycle. Out of the 18 passages over the scour holes, 11 showed strong upwelling and downwelling which resulted in the breakdown of the hydrostatic condition. The maximum observed vertical velocity was ~0.35 m/s, a high value in a tidal channel, and the estimated vertical acceleration reached a high value of 1.76×10⁻² m/s². Analysis demonstrated that the barotropic non-hydrostatic acceleration was dominant. The cause of the non-hydrostatic flow was the flow over the steep slopes of the scour holes. This demonstrates that in such a system the bathymetric variation can lead to the breakdown of hydrostatic conditions. Models with hydrostatic restrictions will not be able to correctly capture the dynamics in such a system with significant bathymetric variations, particularly during strong tidal currents.

  13. Halo-independent direct detection analyses without mass assumptions

    International Nuclear Information System (INIS)

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2015-01-01

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ-σ_n plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min-g̃ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h̃(p_R). The entire family of conventional halo-independent g̃(v_min) plots for all DM masses are directly found from the single h̃(p_R) plot through a simple rescaling of axes. By considering results in h̃(p_R) space, one can determine if two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g̃(v_min) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity
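    The kinematics behind the change of variables: for elastic scattering, v_min = p_R / (2μ) with p_R = sqrt(2 m_N E_R) and μ the DM-nucleus reduced mass, so p_R is fixed by the measured recoil energy alone while v_min also requires a DM mass. A small sketch, with an approximate xenon nucleus mass and illustrative numbers:

        import numpy as np

        def p_R(E_R_keV, m_N_GeV):
            """Nuclear recoil momentum (MeV) from recoil energy (keV) and nucleus mass (GeV)."""
            return np.sqrt(2.0 * (m_N_GeV * 1e3) * (E_R_keV * 1e-3))

        def v_min(E_R_keV, m_N_GeV, m_chi_GeV):
            """Minimum DM speed (units of c); note the explicit DM-mass dependence."""
            mu = m_N_GeV * m_chi_GeV / (m_N_GeV + m_chi_GeV)   # reduced mass (GeV)
            return p_R(E_R_keV, m_N_GeV) * 1e-3 / (2.0 * mu)

        m_Xe = 122.0                          # approximate xenon nucleus mass (GeV)
        for m_chi in (10.0, 50.0, 1000.0):    # p_R is the same, v_min is not
            print(m_chi, p_R(10.0, m_Xe), v_min(10.0, m_Xe, m_chi))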

  14. Application of Geostatistical Modelling to Study the Exploration Adequacy of Uniaxial Compressive Strength of Intact Rock alongthe Behesht-Abad Tunnel Route

    Directory of Open Access Journals (Sweden)

    Mohammad Doustmohammadi

    2014-12-01

    Full Text Available Uniaxial compressive strength (UCS) is one of the most significant factors in the stability of underground excavation projects. Most of the time, this factor can be obtained by evaluating exploratory boreholes. Due to the large distance between exploratory boreholes in the majority of geotechnical projects, the application of geostatistical methods as estimators of rock mass properties has increased. The present paper ties the estimation of UCS values of intact rock to the distance between boreholes of the Behesht-Abad tunnel in central Iran, using the SGEMS geostatistical program. Variography showed that UCS estimation of intact rock using geostatistical methods is reasonable. The model was established and then validated to assess its trustworthiness; cross-validation proved the high accuracy (98%) and reliability of the model for estimating uniaxial compressive strength. The UCS values were then estimated along the tunnel axis. Moreover, geostatistical estimation led to better identification of the pros and cons of the geotechnical explorations at each location along the tunnel route.

  15. Spatial distribution of Munida intermedia and M. sarsi (crustacea: Anomura) on the Galician continental shelf (NW Spain): Application of geostatistical analysis

    Science.gov (United States)

    Freire, J.; González-Gurriarán, E.; Olaso, I.

    1992-12-01

    Geostatistical methodology was used to analyse the spatial structure and distribution of the epibenthic crustaceans Munida intermedia and M. sarsi within sets of data collected during three survey cruises carried out on the Galician continental shelf (1983 and 1984). This study investigates the feasibility of using geostatistics for data collected according to traditional methods and of enhancing such methodology. The experimental variograms were calculated (pooled variance minus spatial covariance between samples taken one pair at a time, vs. distance) and fitted to a 'spherical' model. The spatial structure model was used to estimate the abundance and distribution of the populations studied using the technique of kriging. The species display spatial structures which are well marked during high-density periods and in some areas (especially the northern shelf). Geostatistical analysis allows identification of density gradients in space, as well as a patch grain along the continental shelf of 16-25 km diameter for M. intermedia and 12-20 km for M. sarsi. Patches of both species have a consistent location throughout the different cruises. As in other geographical areas, M. intermedia and M. sarsi usually appear at depths ranging from 200 to 500 m, with the highest densities in the continental shelf area located between Fisterra and Estaca de Bares. Although sampling was not originally designed specifically for geostatistics, this analysis provides a measurement of spatial covariance, and shows variograms with variable structure depending on population density and geographical area. These ideas are useful in improving the design of future sampling cruises.

  16. Robust spatialization of soil water content at the scale of an agricultural field using geophysical and geostatistical methods

    Science.gov (United States)

    Henine, Hocine; Tournebize, Julien; Gourdol, Laurent; Hissler, Christophe; Cournede, Paul-Henry; Clement, Remi

    2017-04-01

    Research on the Critical Zone (CZ) is a prerequisite for addressing issues related to the ecosystem services that human societies rely on (nutrient cycles, water supply and quality). However, while the upper part of the CZ (vegetation, soil, surface water) is readily accessible, knowledge of the subsurface remains limited, due to the point-scale character of conventional direct observations. While the potential for geophysical methods to overcome this limitation is recognized, the translation of the geophysical information into physical properties or states of interest remains a challenge (e.g. the translation of soil electrical resistivity into soil water content). In this study, we propose a geostatistical framework using the Bayesian Maximum Entropy (BME) approach to assimilate geophysical and point-scale data. We especially focus on the prediction of the spatial distribution of soil water content using (1) TDR point-scale measurements of soil water content, which are considered accurate data, and (2) soil water content data derived from electrical resistivity measurements, which are uncertain but spatially dense. We used a synthetic dataset obtained on a vertical 2D domain to evaluate the performance of this geostatistical approach. Spatio-temporal simulations of soil water content were carried out with the Hydrus software for different scenarios: homogeneous or heterogeneous hydraulic conductivity distribution, and continuous or punctual infiltration pattern. From the simulations of soil water content, conceptual soil resistivity models were built using a forward modeling approach, and point sampling of water content values over a vertical range was carried out. These two datasets are analogous to the field measurements of soil electrical resistivity (using electrical resistivity tomography, ERT) and soil water content (using TDR probes) obtained at the Boissy-le-Chatel site, in the Orgeval catchment (east of Paris, France). We then integrated them into the spatialization framework.
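
    Full BME is beyond a short example, but the underlying idea of combining accurate 'hard' TDR data with uncertain but spatially dense 'soft' resistivity-derived data can be illustrated by inverse-variance weighting at a collocated point. This is a deliberate simplification under assumed Gaussian errors, not the BME estimator itself, and all numbers are invented.

```python
# Precision-weighted fusion of a hard and a soft estimate of soil water content
# (a toy stand-in for the hard/soft data treatment, assuming Gaussian errors).
def fuse(theta_hard, var_hard, theta_soft, var_soft):
    w = (1.0 / var_hard) / (1.0 / var_hard + 1.0 / var_soft)
    theta = w * theta_hard + (1.0 - w) * theta_soft
    var = 1.0 / (1.0 / var_hard + 1.0 / var_soft)
    return theta, var

# TDR reading (precise) vs. resistivity-derived value (uncertain), invented:
print(fuse(0.32, 0.0004, 0.27, 0.01))   # fused ~0.318, variance ~3.8e-4
```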

  17. Geostatistical evaluation of integrated marsh management impact on mosquito vectors using before-after-control-impact (BACI) design

    Directory of Open Access Journals (Sweden)

    Dempsey Mary E

    2009-06-01

    Full Text Available Abstract Background In many parts of the world, salt marshes play a key ecological role as the interface between the marine and the terrestrial environments. Salt marshes are also exceedingly important for public health as larval habitat for mosquitoes that are vectors of disease and significant biting pests. Although grid ditching and pesticides have been effective in salt marsh mosquito control, marsh degradation and other environmental considerations compel a different approach. Targeted habitat modification and biological control methods known as Open Marsh Water Management (OMWM) had been proposed as a viable alternative to marsh-wide physical alterations and chemical control. However, traditional larval sampling techniques may not adequately assess the impacts of marsh management on mosquito larvae. To assess the effectiveness of integrated OMWM and marsh restoration techniques for mosquito control, we analyzed the results of a 5-year OMWM/marsh restoration project to determine changes in mosquito larval production using GIS and geostatistical methods. Methods The following parameters were evaluated using the "Before-After-Control-Impact" (BACI) design: frequency and geographic extent of larval production, intensity of larval production, changes in larval habitat, and number of larvicide applications. The analyses were performed using Moran's I, Getis-Ord, and Spatial Scan statistics on aggregated before and after data as well as data collected over time. This allowed comparison of control and treatment areas to identify changes attributable to the OMWM/marsh restoration modifications. Results The frequency of finding mosquito larvae in the treatment areas was reduced by 70%, resulting in a loss of spatial larval clusters compared to those found in the control areas. This effect was observed directly following OMWM treatment and remained significant throughout the study period. The greatly reduced frequency of finding larvae in the treatment areas led to a significant decrease (approximately 44%) in larvicide applications.

  18. Geostatistical evaluation of integrated marsh management impact on mosquito vectors using before-after-control-impact (BACI) design.

    Science.gov (United States)

    Rochlin, Ilia; Iwanejko, Tom; Dempsey, Mary E; Ninivaggi, Dominick V

    2009-06-23

    In many parts of the world, salt marshes play a key ecological role as the interface between the marine and the terrestrial environments. Salt marshes are also exceedingly important for public health as larval habitat for mosquitoes that are vectors of disease and significant biting pests. Although grid ditching and pesticides have been effective in salt marsh mosquito control, marsh degradation and other environmental considerations compel a different approach. Targeted habitat modification and biological control methods known as Open Marsh Water Management (OMWM) had been proposed as a viable alternative to marsh-wide physical alterations and chemical control. However, traditional larval sampling techniques may not adequately assess the impacts of marsh management on mosquito larvae. To assess the effectiveness of integrated OMWM and marsh restoration techniques for mosquito control, we analyzed the results of a 5-year OMWM/marsh restoration project to determine changes in mosquito larval production using GIS and geostatistical methods. The following parameters were evaluated using the "Before-After-Control-Impact" (BACI) design: frequency and geographic extent of larval production, intensity of larval production, changes in larval habitat, and number of larvicide applications. The analyses were performed using Moran's I, Getis-Ord, and Spatial Scan statistics on aggregated before and after data as well as data collected over time. This allowed comparison of control and treatment areas to identify changes attributable to the OMWM/marsh restoration modifications. The frequency of finding mosquito larvae in the treatment areas was reduced by 70%, resulting in a loss of spatial larval clusters compared to those found in the control areas. This effect was observed directly following OMWM treatment and remained significant throughout the study period. The greatly reduced frequency of finding larvae in the treatment areas led to a significant decrease (approximately 44%) in larvicide applications.
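
    Of the spatial statistics named above, global Moran's I is the most compact to state in code. A minimal sketch with NumPy follows, using an invented transect of larval counts and a rook-adjacency weight matrix.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: (n / sum(W)) * (z' W z) / (z' z), zero-diagonal W."""
    z = values - values.mean()
    n = values.size
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

counts = np.array([10.0, 8.0, 3.0, 2.0])      # larval dip counts along a transect
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # rook adjacency between sites
print(round(morans_i(counts, W), 3))           # ~0.41: similar values cluster
```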

  19. Geostatistical uncertainty of assessing air quality using high-spatial-resolution lichen data: A health study in the urban area of Sines, Portugal.

    Science.gov (United States)

    Ribeiro, Manuel C; Pinho, P; Branquinho, C; Llop, Esteve; Pereira, Maria J

    2016-08-15

    In most studies correlating health outcomes with air pollution, personal exposure assignments are based on measurements collected at air-quality monitoring stations not coinciding with health data locations. In such cases, interpolators are needed to predict air quality in unsampled locations and to assign personal exposures. Moreover, a measure of the spatial uncertainty of exposures should be incorporated, especially in urban areas where concentrations vary at short distances due to changes in land use and pollution intensity. These studies are limited by the lack of literature comparing exposure uncertainty derived from distinct spatial interpolators. Here, we addressed these issues with two interpolation methods: regression kriging (RK) and ordinary kriging (OK). These methods were used to generate air-quality simulations with a geostatistical algorithm. For each method, the geostatistical uncertainty was drawn from generalized linear model (GLM) analysis. We analyzed the association between air quality and birth weight. Personal health data (n=227) and exposure data were collected in Sines (Portugal) during 2007-2010. Because air-quality monitoring stations in the city do not offer high-spatial-resolution measurements (n=1), we used lichen data as an ecological indicator of air quality (n=83). We found no significant difference in the fit of GLMs with any of the geostatistical methods. With RK, however, the models tended to fit better more often and worse less often. Moreover, the geostatistical uncertainty results showed a marginally higher mean and precision with RK. Combined with lichen data and land-use data of high spatial resolution, RK is a more effective geostatistical method for relating health outcomes with air quality in urban areas. This is particularly important in small cities, which generally do not have expensive air-quality monitoring stations with high spatial resolution. Further, alternative ways of linking human activities with their

  20. Regional soil erosion assessment based on a sample survey and geostatistics

    Science.gov (United States)

    Yin, Shuiqing; Zhu, Zhengyuan; Wang, Li; Liu, Baoyuan; Xie, Yun; Wang, Guannan; Li, Yishan

    2018-03-01

    Soil erosion is one of the most significant environmental problems in China. From 2010 to 2012, the fourth national census for soil erosion sampled 32 364 PSUs (Primary Sampling Units, small watersheds) with areas of 0.2-3 km². Land use and soil erosion controlling factors, including rainfall erosivity, soil erodibility, slope length, slope steepness, biological practice, engineering practice, and tillage practice, were surveyed for the PSUs, and the soil loss rate for each land use in the PSUs was estimated using an empirical model, the Chinese Soil Loss Equation (CSLE). Though the information collected from the sample units can be aggregated to estimate soil erosion conditions on a large scale, the problem of estimating soil erosion conditions on a regional scale has not been addressed well. The aim of this study is to introduce a new model-based regional soil erosion assessment method combining a sample survey and geostatistics. We compared seven spatial interpolation models based on the bivariate penalized spline over triangulation (BPST) method to generate a regional soil erosion assessment from the PSUs. Shaanxi Province (3116 PSUs) in China was selected for the comparison and assessment, as it is one of the areas with the most serious erosion problems. Ten-fold cross-validation based on the PSU data showed that the model assisted by the land use, rainfall erosivity factor (R), soil erodibility factor (K), slope steepness factor (S), and slope length factor (L) derived from a 1:10 000 topographic map is the best one, with a model efficiency coefficient (ME) of 0.75 and an MSE 55.8% of that for the model assisted by land use alone. Among the four erosion factors used as covariates, the S factor contributed the most information, followed by the K and L factors, while the R factor made almost no contribution to the spatial estimation of soil loss. The LS factor derived from 30 or 90 m Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) data
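
    The CSLE referred to above is a multiplicative model, A = R · K · L · S · B · E · T, in the spirit of the USLE family. A one-line evaluation is sketched below; the factor values are illustrative assumptions, not census data.

```python
# Chinese Soil Loss Equation: A = R * K * L * S * B * E * T
# (R: rainfall erosivity, K: soil erodibility, L/S: slope length/steepness,
#  B/E/T: biological / engineering / tillage practice factors)
R, K, L, S, B, E, T = 4500.0, 0.0032, 1.6, 4.2, 0.35, 0.8, 0.9   # illustrative
A = R * K * L * S * B * E * T
print(f"estimated soil loss A = {A:.1f} (t ha^-1 a^-1, example units)")
```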

  1. Detection of Local Anomalies in High Resolution Hyperspectral Imagery Using Geostatistical Filtering and Local Spatial Statistics

    Science.gov (United States)

    Goovaerts, P.; Jacquez, G. M.; Marcus, A. W.

    2004-12-01

    Spatial data are periodically collected and processed to monitor, analyze and interpret developments in our changing environment. Remote sensing is a modern way of collecting data and has seen enormous growth since the launch of modern satellites and the development of airborne sensors. In particular, the recent availability of high spatial resolution hyperspectral imagery (spatial resolution of less than 5 meters, with data collected in 64 or more bands of electromagnetic radiation for each pixel) offers great potential to significantly enhance environmental mapping and our ability to model spatial systems. High spatial resolution imagery contains a remarkable quantity of information that could be used to analyze spatial breaks (boundaries), areas of similarity (clusters), and spatial autocorrelation (associations) across the landscape. This paper addresses the specific issue of soil disturbance detection, which could indicate the presence of land mines or recent movements of troops and heavy equipment. A challenge presented by soil disturbance detection is to retain the measurement of fine-scale features (i.e. mineral soil changes, organic content changes, vegetation disturbance related changes, aspect changes) while still covering proportionally large spatial areas. An additional difficulty is that no ground data might be available for the calibration of spectral signatures, and little might be known about the size of the patches of disturbed soil to be detected. This paper describes a new technique for automatic target detection which capitalizes on correlation both in space and across spectral bands, does not require any a priori information on the target spectral signature, but does not allow discrimination between targets. This approach involves, successively, a multivariate statistical analysis (principal component analysis) of all spectral bands, a geostatistical filtering of noise and regional background in the first principal components using factorial kriging, and the computation of local spatial statistics.

  2. Assessing Landscape-Scale Soil Moisture Distribution Using Auxiliary Sensing Technologies and Multivariate Geostatistics

    Science.gov (United States)

    Landrum, C.; Castrignanò, A.; Mueller, T.; Zourarakis, D.; Zhu, J.

    2013-12-01

    It is important to assess soil moisture in order to develop strategies to better manage its availability and use. At the landscape scale, soil moisture distribution derives from an integration of hydrologic, pedologic and geomorphic processes that cause soil moisture variability (SMV) to be time-, space-, and scale-dependent. Traditional methods to assess SMV at this scale are often costly, labor intensive, and invasive, which can lead to inadequate sampling density and spatial coverage. Fusing traditional sampling techniques with georeferenced auxiliary sensing technologies, such as geoelectric sensing and LiDAR, provides an alternative approach. Because geoelectric and LiDAR measurements are sensitive to soil properties and terrain features that affect soil moisture variation, they are often employed as auxiliary measures to support less dense direct sampling. Georeferenced proximal sensing acquires rapid, real-time, high resolution data over large spatial extents, enriched with spatial, temporal and scale-dependent information. Data fusion becomes important when proximal sensing is used in tandem with more sparse direct sampling. Multicollocated factorial cokriging (MFC) is a technique of multivariate geostatistics for fusing multiple data sources collected at different sampling scales to study the spatial characteristics of environmental properties. With MFC, sparse soil observations are supported by more densely sampled auxiliary attributes to produce more consistent spatial descriptions of scale-dependent parameters affecting SMV. This study uses high resolution geoelectric and LiDAR data as auxiliary measures to support direct soil sampling (n=127) over a 40 hectare Central Kentucky (USA) landscape. Shallow and deep apparent electrical resistivity (ERa) were measured using a Veris 3100 in tandem with soil moisture sampling on three separate dates with ascending soil moisture contents ranging from plant wilting point to field capacity. Terrain features were produced

  3. Weak convergence of Jacobian determinants under asymmetric assumptions

    Directory of Open Access Journals (Sweden)

    Teresa Alberico

    2012-05-01

    Full Text Available Let $\Om$ be a bounded open set in $\R^2$, sufficiently smooth, and let $f_k=(u_k,v_k)$ and $f=(u,v)$ be mappings in the Sobolev space $W^{1,2}(\Om,\R^2)$. We prove that if the sequence of Jacobians $J_{f_k}$ converges to a measure $\mu$ in the sense of measures, and if one allows different assumptions on the two components of $f_k$ and $f$, e.g. $$u_k \rightharpoonup u \;\;\mbox{weakly in} \;\; W^{1,2}(\Om), \qquad v_k \rightharpoonup v \;\;\mbox{weakly in} \;\; W^{1,q}(\Om)$$ for some $q\in(1,2)$, then \begin{equation}\label{0} d\mu=J_f\,dz. \end{equation} Moreover, we show that this result is optimal in the sense that the conclusion fails for $q=1$. On the other hand, we prove that \eqref{0} remains valid also in the case $q=1$, but it is necessary to require that $u_k$ converges weakly to $u$ in a Zygmund-Sobolev space with a slightly higher degree of regularity than $W^{1,2}(\Om)$, and precisely $$u_k \rightharpoonup u \;\;\mbox{weakly in} \;\; W^{1,L^2 \log^\alpha L}(\Om)$$ for some $\alpha >1$.

  4. On some unwarranted tacit assumptions in cognitive neuroscience.

    Science.gov (United States)

    Mausfeld, Rainer

    2012-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input-output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings.

  5. On Some Unwarranted Tacit Assumptions in Cognitive Neuroscience†

    Science.gov (United States)

    Mausfeld, Rainer

    2011-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input–output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings. PMID:22435062

  6. Are Gaussian spectra a viable perceptual assumption in color appearance?

    Science.gov (United States)

    Mizokami, Yoko; Webster, Michael A

    2012-02-01

    Natural illuminant and reflectance spectra can be roughly approximated by a linear model with as few as three basis functions, and this has suggested that the visual system might construct a linear representation of the spectra by estimating the weights of these functions. However, such models do not accommodate nonlinearities in color appearance, such as the Abney effect. Previously, we found that these nonlinearities are qualitatively consistent with a perceptual inference that stimulus spectra are instead roughly Gaussian, with the hue tied to the inferred centroid of the spectrum [J. Vision 6(9), 12 (2006)]. Here, we examined to what extent a Gaussian inference provides a sufficient approximation of natural color signals. Reflectance and illuminant spectra from a wide set of databases were analyzed to test how well the curves could be fit by either a simple Gaussian with three parameters (amplitude, peak wavelength, and standard deviation) or the first three principal components of standard linear models. The resulting Gaussian fits were comparable to linear models with the same degrees of freedom, suggesting that the Gaussian model could provide a plausible perceptual assumption about stimulus spectra for a trichromatic visual system. © 2012 Optical Society of America
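
    The comparison described above amounts to fitting a three-parameter Gaussian to each spectrum and judging the fit against a three-component linear model. A sketch of the Gaussian fit with SciPy follows; the synthetic reflectance curve stands in for the database spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amp, peak, width):
    """Three-parameter Gaussian spectrum: amplitude, peak wavelength, std."""
    return amp * np.exp(-0.5 * ((wl - peak) / width) ** 2)

wl = np.arange(400.0, 701.0, 10.0)                 # visible range (nm)
rng = np.random.default_rng(1)
refl = gaussian(wl, 0.6, 550.0, 45.0) + rng.normal(0, 0.01, wl.size)

params, _ = curve_fit(gaussian, wl, refl, p0=[0.5, 500.0, 50.0])
rss = np.sum((refl - gaussian(wl, *params)) ** 2)  # compare with a 3-PC linear fit
print(params, rss)
```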

  7. PKreport: report generation for checking population pharmacokinetic model assumptions

    Directory of Open Access Journals (Sweden)

    Li Jun

    2011-05-01

    Full Text Available Abstract Background Graphics play an important and unique role in population pharmacokinetic (PopPK) model building by exploring hidden structure among data before modeling, evaluating model fit, and validating results after modeling. Results The work described in this paper is about a new R package called PKreport, which is able to generate a collection of plots and statistics for testing model assumptions, visualizing data and diagnosing models. The metric system is utilized as the currency for communicating between data sets and the package to generate special-purpose plots. It provides ways to match output from diverse software such as NONMEM, Monolix, the R nlme package, etc. The package is implemented with an S4 class hierarchy and offers an efficient way to access the output from NONMEM 7. The final reports take advantage of the web browser as the user interface to manage and visualize plots. Conclusions PKreport provides (1) a flexible and efficient R class to store and retrieve NONMEM 7 output; (2) automated plots for users to visualize data and models; (3) automatically generated R scripts that are used to create the plots; (4) an archive-oriented management tool for users to store, retrieve and modify figures; (5) high-quality graphs based on the R packages lattice and ggplot2. The general architecture, running environment and statistical methods can be readily extended with the R class hierarchy. PKreport is free to download at http://cran.r-project.org/web/packages/PKreport/index.html.

  8. Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.

    Science.gov (United States)

    Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A

    2017-04-01

    The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS), which is clearly demonstrated by EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A novel geotechnical/geostatistical approach for exploration and production of natural gas from multiple geologic strata, Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Overbey, W.K. Jr.; Reeves, T.K.; Salamy, S.P.; Locke, C.D.; Johnson, H.R.; Brunk, R.; Hawkins, L. (BDM Engineering Services Co., Morgantown, WV (United States))

    1991-05-01

    This research program has been designed to develop and verify a unique geostatistical approach for finding natural gas resources. The project has been conducted by Beckley College, Inc., and BDM Engineering Services Company (BDMESC) under contract to the US Department of Energy (DOE), Morgantown Energy Technology Center (METC). This section, Volume II, contains a detailed discussion of the methodology used and the geological and production information collected and analyzed for this study. A companion document, Volume I, provides an overview of the program, technique and results of the study. In combination, Volumes I and II cover the completion of the research undertaken under Phase I of this DOE project, which included the identification of five high-potential sites for natural gas production on the Eccles Quadrangle, Raleigh County, West Virginia. Each of these sites was selected for its excellent potential for gas production from both relatively shallow coalbeds and the deeper, conventional reservoir formations.

  10. Modeling spatial variability of sand-lenses in clay till settings using transition probability and multiple-point geostatistics

    DEFF Research Database (Denmark)

    Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik

    2010-01-01

    The construction of detailed geological models for heterogeneous settings such as clay till is important to describe transport processes, particularly with regard to potential contamination pathways. In low-permeability clay matrices transport is controlled by diffusion, but fractures and sand-lenses facilitate local advective flow. In glacial settings these geological features occur at diverse extent, geometry, degree of deformation, and spatial distribution. The high level of heterogeneity requires extensive data collection and detailed geological mapping, in particular when characterising the distribution of sand-lenses in clay till. Sand-lenses mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution has been modeled using two different geostatistical approaches: one method uses a Markov chain model calculating the transition probabilities, the other multiple-point geostatistics.
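
    The Markov chain approach mentioned above rests on transition probabilities between lithofacies estimated from logged sequences. A minimal sketch, assuming a synthetic two-facies borehole log (0 = clay till, 1 = sand lens):

```python
import numpy as np

def transition_matrix(facies_log, n_facies=2):
    """Row-stochastic matrix of upward transition probabilities in a log."""
    T = np.zeros((n_facies, n_facies))
    for a, b in zip(facies_log[:-1], facies_log[1:]):
        T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

log = np.array([0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1])
print(transition_matrix(log))   # e.g. P(sand -> sand) reflects lens persistence
```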

  11. Characterisation of contaminated metals using an advanced statistical toolbox - Geostatistical characterisation of contaminated metals: methodology and illustrations

    International Nuclear Information System (INIS)

    Larsson, Arne; Lidar, Per; Desnoyers, Yvon

    2014-01-01

    Radiological characterisation plays an important role in the process of recycling contaminated or potentially contaminated metals. It is a platform for planning, for identifying the extent and nature of contamination, for assessing potential risk impacts, for cost estimation and radiation protection, for managing material arising from decommissioning, and for the release of the materials as well as the disposal of the generated secondary waste as radioactive waste. Key issues in radiological characterisation are the identification of objectives, the development of a measurement and sampling strategy (probabilistic, judgmental or a combination thereof), knowledge management, traceability, and the recording and processing of the obtained information. By applying an advanced combination of statistical and geostatistical methods in the characterisation concept, better performance can be achieved at a lower cost. This paper describes the benefits of using the available methods in the different stages of the characterisation, treatment and clearance processes, aiming for reliable results in line with the data quality objectives. (authors)

  12. Geostatistical Investigations of Displacements on the Basis of Data from the Geodetic Monitoring of a Hydrotechnical Object

    Science.gov (United States)

    Namysłowska-Wilczyńska, Barbara; Wynalek, Janusz

    2017-12-01

    Geostatistical methods make possible the analysis of spatially distributed measurement data. This article presents problems concerning the use of geostatistics in the spatial analysis of displacements based on geodetic monitoring. Using methods of applied (spatial) statistics, the research deals with interesting and current issues of space-time analysis and the modeling of displacements and deformations, as applied to any large-area objects on which geodetic monitoring is conducted (e.g., water dams, urban areas in the vicinity of deep excavations, areas at a macro-regional scale subject to anthropogenic influences caused by mining, etc.). These problems are crucial, especially for the safety assessment of important hydrotechnical constructions, as well as for modeling and estimating mining damage. Based on the geodetic monitoring data, a substantial body of empirical material was assembled, comprising many years of research results concerning displacements of controlled points situated on the crown and foreland of an exemplary earth dam, used to assess the behaviour and safety of the object during its whole operating period. A research method at a macro-regional scale was applied to investigate phenomena connected with the operation of the analysed large hydrotechnical construction. Applying a semivariogram function enabled the spatial variability analysis of displacements. Isotropic empirical semivariograms were calculated and then the theoretical parameters of the analytical functions approximating the courses of this empirical variability measure were determined. Using ordinary (block) kriging at the nodes of an elementary spatial grid covering the analysed object, the estimated mean values Z* of displacements were calculated, together with the accompanying uncertainty assessment, the standard deviation of estimation σ_k. Raster maps of the distribution of the estimated means Z* and raster maps of the standard deviations of estimation σ_k (in perspective
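
    As a concrete counterpart to the kriging step described above, the sketch below performs ordinary kriging on a grid and reports both the estimate Z* and the estimation standard deviation σ_k. It uses the third-party PyKrige package as a stand-in for the authors' software, synthetic displacement data, and point rather than block kriging for brevity.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # third-party: pip install pykrige

rng = np.random.default_rng(0)
x = rng.uniform(0, 500, 60)                        # controlled-point coords (m)
y = rng.uniform(0, 120, 60)
disp = 2.0 + 0.004 * x + rng.normal(0, 0.3, 60)    # displacements (mm), synthetic

ok = OrdinaryKriging(x, y, disp, variogram_model="spherical")
gx, gy = np.linspace(0, 500, 51), np.linspace(0, 120, 25)
z_star, k_var = ok.execute("grid", gx, gy)         # estimates + kriging variance
sigma_k = np.sqrt(k_var)                           # sigma_k map for uncertainty
print(z_star.shape, float(sigma_k.mean()))
```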

  13. Improve ground-level PM2.5 concentration mapping using a random forests-based geostatistical approach.

    Science.gov (United States)

    Liu, Ying; Cao, Guofeng; Zhao, Naizhuo; Mulligan, Kevin; Ye, Xinyue

    2018-04-01

    Accurate measurements of ground-level PM2.5 (particulate matter with aerodynamic diameters equal to or less than 2.5 μm) concentrations are critically important to human and environmental health studies. In this regard, satellite-derived gridded PM2.5 datasets, particularly those datasets derived from chemical transport models (CTM), have demonstrated unique attractiveness in terms of their geographic and temporal coverage. The CTM-based approaches, however, often yield results with a coarse spatial resolution (typically at 0.1° of spatial resolution) and tend to ignore or simplify the impact of geographic and socioeconomic factors on PM2.5 concentrations. In this study, with a focus on the long-term PM2.5 distribution in the contiguous United States, we adopt a random forests-based geostatistical (regression kriging) approach to improve one of the most commonly used satellite-derived, gridded PM2.5 datasets with a refined spatial resolution (0.01°) and enhanced accuracy. By combining the random forests machine learning method and the kriging family of methods, the geostatistical approach effectively integrates ground-based PM2.5 measurements and related geographic variables while accounting for the non-linear interactions and the complex spatial dependence. The accuracy and advantages of the proposed approach are demonstrated by comparing the results with existing PM2.5 datasets. This manuscript also highlights the effectiveness of the geographical variables in long-term PM2.5 mapping, including brightness of nighttime lights, normalized difference vegetation index and elevation, and discusses the contribution of each of these variables to the spatial distribution of PM2.5 concentrations. Copyright © 2017 Elsevier Ltd. All rights reserved.
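
    A hedged sketch of the random forests-based regression kriging idea: fit a random forest trend on covariates such as nighttime lights, NDVI and elevation, then interpolate the residuals spatially and add the two surfaces. The data and variable names are invented, and the kriging of the residuals is only indicated since it mirrors the earlier kriging sketches.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))            # columns: night lights, NDVI, elevation
pm25 = 12 + 4*X[:, 0] - 2.5*X[:, 1] - 1.0*X[:, 2] + rng.normal(0, 1.5, n)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, pm25)
trend = rf.predict(X)                  # non-linear trend from the covariates
resid = pm25 - trend                   # spatially structured residual
# In regression kriging, `resid` is kriged over station coordinates and the
# kriged residual surface is added to the RF trend on the 0.01-degree grid.
print(f"trend R^2 = {rf.score(X, pm25):.2f}, residual sd = {resid.std():.2f}")
```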

  14. Spatially explicit burden estimates of malaria in Tanzania: bayesian geostatistical modeling of the malaria indicator survey data.

    Directory of Open Access Journals (Sweden)

    Laura Gosoniu

    Full Text Available A national HIV/AIDS and malaria parasitological survey was carried out in Tanzania in 2007-2008. In this study the parasitological data were analyzed: i) to identify climatic/environmental, socio-economic and intervention factors associated with child malaria risk and ii) to produce a contemporary, high spatial resolution parasitaemia risk map of the country. Bayesian geostatistical models were fitted to assess the association between parasitaemia risk and its determinants. Bayesian kriging was employed to predict malaria risk at unsampled locations across Tanzania and to obtain the uncertainty associated with the predictions. Markov chain Monte Carlo (MCMC) simulation methods were employed for model fit and prediction. Parasitaemia risk estimates were linked to population data and the number of infected children at province level was calculated. Model validation indicated a high predictive ability of the geostatistical model, with 60.00% of the test locations within the 95% credible interval. The results indicate that older children are significantly more likely to test positive for malaria compared with younger children, and that living in urban areas and better-off households reduces the risk of infection. However, none of the environmental and climatic proxies or the intervention measures were significantly associated with the risk of parasitaemia. Low levels of malaria prevalence were estimated for Zanzibar island. The population-adjusted prevalence ranges from 0.29% in Kaskazini province (Zanzibar island) to 18.65% in Mtwara region. The pattern of predicted malaria risk is similar to that of previous maps based on historical data, although the estimates are lower. The predicted maps could be used by decision-makers to allocate resources and target interventions in the regions with the highest burden of malaria in order to reduce disease transmission in the country.

  15. Geostatistics and multivariate analysis as a tool to characterize volcaniclastic deposits: Application to Nevado de Toluca volcano, Mexico

    Science.gov (United States)

    Bellotti, F.; Capra, L.; Sarocchi, D.; D'Antonio, M.

    2010-03-01

    Grain size analysis of volcaniclastic deposits is mainly used to study flow transport and depositional processes, in most cases by comparing statistical parameters and how they change with distance from the source. In this work, geospatial and multivariate analyses are presented as a powerful, adaptable geostatistical tool applied to volcaniclastic deposits, in order to provide an effective and relatively simple methodology for texture description, deposit discrimination and the interpretation of depositional processes. We chose the case of Nevado de Toluca volcano (Mexico) because of the existing knowledge of its geological evolution, stratigraphic succession and the spatial distribution of its volcaniclastic units. Grain size analyses and frequency distribution curves were carried out to characterize and compare the 28-ka block-and-ash flow deposit associated with a dome destruction episode, and the El Morral debris avalanche deposit originated by the collapse of the south-eastern sector of the volcano. The geostatistical interpolation of sedimentological data makes it possible to produce two-dimensional maps, draped over the volcano topography, showing the granulometric distribution, sorting and fine-material concentration across the whole deposit with respect to topographic changes. In this way, it is possible to analyze a continuous surface of the grain size distribution of volcaniclastic deposits and better understand flow transport processes. The application of multivariate statistical analysis (discriminant function) indicates that this methodology can be useful in discriminating deposits with different origins, or different depositional lithofacies within the same deposit. The proposed methodology could be an interesting approach to support more classical analyses of volcaniclastic deposits, especially where a clear field classification appears problematic because of the homogeneous texture of the deposits or their scarce and discontinuous outcrops. Our study is an example of the
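
    The discriminant-function step described above can be sketched with scikit-learn's linear discriminant analysis, separating the two deposits on the basis of simple grain-size statistics. The feature values below are synthetic stand-ins, not measurements from Nevado de Toluca.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Features per sample: mean grain size (phi), sorting, fines fraction.
baf = np.column_stack([rng.normal(1.5, 0.3, 40),
                       rng.normal(1.2, 0.2, 40),
                       rng.normal(0.25, 0.05, 40)])   # block-and-ash flow
dad = np.column_stack([rng.normal(0.5, 0.3, 40),
                       rng.normal(2.0, 0.3, 40),
                       rng.normal(0.10, 0.04, 40)])   # debris avalanche
X = np.vstack([baf, dad])
y = np.array([0] * 40 + [1] * 40)                     # deposit labels

lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"resubstitution accuracy: {lda.score(X, y):.2f}")
```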

  16. Interannual Changes in Biomass Affect the Spatial Aggregations of Anchovy and Sardine as Evidenced by Geostatistical and Spatial Indicators.

    Directory of Open Access Journals (Sweden)

    Marco Barra

    Full Text Available Geostatistical techniques were applied and a series of spatial indicators were calculated (occupation, aggregation, location, dispersion, spatial autocorrelation and overlap) to characterize the spatial distributions of European anchovy and sardine during summer. Two ecosystems were compared for this purpose, both located in the Mediterranean Sea: the Strait of Sicily (upwelling area) and the North Aegean Sea (continental shelf area, influenced by freshwater). Although the biomass of anchovy and sardine presented high interannual variability in both areas, the location of the centres of gravity and the main spatial patches of their populations were very similar between years. The size of the patches representing the dominant part of the abundance (80%) was mostly ecosystem- and species-specific. Occupation (area of presence) appears to be shaped by the extent of suitable habitats in each ecosystem whereas aggregation patterns (how the populations are distributed within the area of presence) were species-specific and related to levels of population biomass. In the upwelling area, both species showed consistently higher occupation values compared to the continental shelf area. Certain characteristics of the spatial distribution of sardine (e.g. spreading area, overlapping with anchovy) differed substantially between the two ecosystems. Principal component analysis of geostatistical and spatial indicators revealed that biomass was significantly related to a suite of, rather than single, spatial indicators. At the spatial scale of our study, strong correlations emerged between biomass and the first principal component axis, with highly positive loadings for occupation, aggregation and patchiness, independently of species and ecosystem. Overlapping between anchovy and sardine increased with the increase of sardine biomass but decreased with the increase of anchovy. This contrasting pattern was attributed to the location of the respective major patches.
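
    Two of the indicators used above, the centre of gravity and the inertia (dispersion) of a population, have compact definitions. A minimal sketch with invented haul positions and biomass values:

```python
import numpy as np

def cg_and_inertia(x, y, biomass):
    """Biomass-weighted centre of gravity and inertia of a survey."""
    w = biomass / biomass.sum()
    cgx, cgy = np.sum(w * x), np.sum(w * y)
    inertia = np.sum(w * ((x - cgx) ** 2 + (y - cgy) ** 2))
    return (cgx, cgy), inertia

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 30), rng.uniform(0, 50, 30)   # haul positions (km)
biomass = rng.lognormal(2.0, 1.0, 30)                    # anchovy biomass per haul
print(cg_and_inertia(x, y, biomass))
```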

  17. Spatially explicit burden estimates of malaria in Tanzania: bayesian geostatistical modeling of the malaria indicator survey data.

    Science.gov (United States)

    Gosoniu, Laura; Msengwa, Amina; Lengeler, Christian; Vounatsou, Penelope

    2012-01-01

    A national HIV/AIDS and malaria parasitological survey was carried out in Tanzania in 2007-2008. In this study the parasitological data were analyzed: i) to identify climatic/environmental, socio-economic and intervention factors associated with child malaria risk and ii) to produce a contemporary, high spatial resolution parasitaemia risk map of the country. Bayesian geostatistical models were fitted to assess the association between parasitaemia risk and its determinants. Bayesian kriging was employed to predict malaria risk at unsampled locations across Tanzania and to obtain the uncertainty associated with the predictions. Markov chain Monte Carlo (MCMC) simulation methods were employed for model fit and prediction. Parasitaemia risk estimates were linked to population data and the number of infected children at province level was calculated. Model validation indicated a high predictive ability of the geostatistical model, with 60.00% of the test locations within the 95% credible interval. The results indicate that older children are significantly more likely to test positive for malaria compared with younger children, and that living in urban areas and better-off households reduces the risk of infection. However, none of the environmental and climatic proxies or the intervention measures were significantly associated with the risk of parasitaemia. Low levels of malaria prevalence were estimated for Zanzibar island. The population-adjusted prevalence ranges from 0.29% in Kaskazini province (Zanzibar island) to 18.65% in Mtwara region. The pattern of predicted malaria risk is similar to that of previous maps based on historical data, although the estimates are lower. The predicted maps could be used by decision-makers to allocate resources and target interventions in the regions with the highest burden of malaria in order to reduce disease transmission in the country.
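
    Linking prevalence predictions to population counts, as done above for province-level burden, is an elementwise product. The sketch below uses the two prevalence extremes reported in the abstract but invented population figures.

```python
import numpy as np

prev = np.array([0.0029, 0.1865])     # Kaskazini and Mtwara prevalence (from text)
pop = np.array([150_000, 700_000])    # children per region: invented figures
infected = prev * pop                 # expected number of infected children
print(infected.astype(int), int(infected.sum()))
```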

  18. Systematic evaluation of sequential geostatistical resampling within MCMC for posterior sampling of near-surface geophysical inverse problems

    Science.gov (United States)

    Ruggeri, Paolo; Irving, James; Holliger, Klaus

    2015-08-01

    We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although the use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR proposal mechanism is found to strongly influence MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive, as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
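
    A minimal sketch of SGR as an MCMC proposal: resimulate a randomly chosen fraction of model cells conditionally on the rest, then apply a Metropolis acceptance test. Because a proposal drawn from the geostatistical prior is symmetric with respect to that prior, the acceptance ratio reduces to a likelihood ratio. The `resimulate` callback stands in for the geostatistical engine; the toy demo uses an iid prior purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgr_step(model, log_like, frac, resimulate):
    """One SGR-within-Metropolis step. `resimulate(model, idx)` must redraw
    the cells in `idx` from the geostatistical prior, conditioned on the rest;
    with such a prior-symmetric proposal, acceptance needs only the likelihood."""
    idx = np.flatnonzero(rng.random(model.size) < frac)
    proposal = model.copy()
    proposal[idx] = resimulate(model, idx)
    if np.log(rng.random()) < log_like(proposal) - log_like(model):
        return proposal, True      # accept: the chain moves
    return model, False            # reject: the chain stays put

# Toy demo with an iid standard-normal 'prior' (so conditioning is trivial):
def resim(model, idx):
    return rng.normal(size=idx.size)

def log_like(m):
    return -0.5 * ((m.mean() - 0.8) / 0.05) ** 2   # data favor a field mean of 0.8

chain, accepted = np.zeros(50), 0
for _ in range(500):
    chain, moved = sgr_step(chain, log_like, frac=0.2, resimulate=resim)
    accepted += moved
print(f"acceptance rate: {accepted / 500:.2f}, chain mean: {chain.mean():.2f}")
```

    Small resampled fractions give high acceptance but strongly correlated samples; large fractions give near-independent proposals that are rarely accepted, which is the trade-off examined in the study above.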

  19. Geostatistical validation and cross-validation of magnetometric measurements of soil pollution with Potentially Toxic Elements in problematic areas

    Science.gov (United States)

    Fabijańczyk, Piotr; Zawadzki, Jarosław

    2016-04-01

    Field magnetometry is a fast method that has previously been used effectively to assess potential soil pollution. One of the most popular devices used to measure soil magnetic susceptibility at the soil surface is the Bartington MS2D. A single reading of soil magnetic susceptibility with the MS2D takes little time but is often affected by considerable errors related to the instrument or to environmental and lithogenic factors. Measured values of soil magnetic susceptibility therefore usually have to be validated against more precise, but also much more expensive, chemical measurements. The goal of this study was to analyze methods of validating magnetometric measurements using chemical analyses of element concentrations in soil. Additionally, validation of surface measurements of soil magnetic susceptibility was performed using selected parameters of the distribution of magnetic susceptibility in a soil profile. Validation was performed using selected geostatistical measures of cross-correlation. The geostatistical approach was compared with validation performed using classic statistics. Measurements were performed at selected areas located in the Upper Silesian Industrial Area in Poland and in selected parts of Norway. In these areas soil magnetic susceptibility was measured on the soil surface using an MS2D Bartington device and in the soil profile using an MS2C Bartington device. Additionally, soil samples were taken in order to perform chemical measurements. Acknowledgment: The research leading to these results has received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013.

  20. Interannual Changes in Biomass Affect the Spatial Aggregations of Anchovy and Sardine as Evidenced by Geostatistical and Spatial Indicators.

    Science.gov (United States)

    Barra, Marco; Petitgas, Pierre; Bonanno, Angelo; Somarakis, Stylianos; Woillez, Mathieu; Machias, Athanasios; Mazzola, Salvatore; Basilone, Gualtiero; Giannoulaki, Marianna

    2015-01-01

    Geostatistical techniques were applied and a series of spatial indicators were calculated (occupation, aggregation, location, dispersion, spatial autocorrelation and overlap) to characterize the spatial distributions of European anchovy and sardine during summer. Two ecosystems were compared for this purpose, both located in the Mediterranean Sea: the Strait of Sicily (upwelling area) and the North Aegean Sea (continental shelf area, influenced by freshwater). Although the biomass of anchovy and sardine presented high interannual variability in both areas, the location of the centres of gravity and the main spatial patches of their populations were very similar between years. The size of the patches representing the dominant part of the abundance (80%) was mostly ecosystem- and species-specific. Occupation (area of presence) appears to be shaped by the extent of suitable habitats in each ecosystem whereas aggregation patterns (how the populations are distributed within the area of presence) were species-specific and related to levels of population biomass. In the upwelling area, both species showed consistently higher occupation values compared to the continental shelf area. Certain characteristics of the spatial distribution of sardine (e.g. spreading area, overlapping with anchovy) differed substantially between the two ecosystems. Principal component analysis of geostatistical and spatial indicators revealed that biomass was significantly related to a suite of, rather than single, spatial indicators. At the spatial scale of our study, strong correlations emerged between biomass and the first principal component axis with highly positive loadings for occupation, aggregation and patchiness, independently of species and ecosystem. Overlapping between anchovy and sardine increased with the increase of sardine biomass but decreased with the increase of anchovy. This contrasting pattern was attributed to the location of the respective major patches combined with the

  1. Providing security assurance in line with national DBT assumptions

    Science.gov (United States)

    Bajramovic, Edita; Gupta, Deeksha

    2017-01-01

    As worldwide energy requirements increase simultaneously with climate change and energy security considerations, States are considering building nuclear power plants to fulfill their electricity requirements and decrease their dependence on carbon fuels. New nuclear power plants (NPPs) must have comprehensive cybersecurity measures integrated into their design, structure, and processes. In the absence of effective cybersecurity measures, the impact of nuclear security incidents can be severe. Some of the current nuclear facilities were not specifically designed and constructed to deal with the new threats, including targeted cyberattacks. Thus, newcomer countries must consider the Design Basis Threat (DBT) as one of the security fundamentals during the design of physical and cyber protection systems of nuclear facilities. IAEA NSS 10 describes the DBT as a "comprehensive description of the motivation, intentions and capabilities of potential adversaries against which protection systems are designed and evaluated". Nowadays, many threat actors, including hacktivists, insider threats, cyber criminals, and state and non-state groups (terrorists), pose security risks to nuclear facilities. Threat assumptions are made on a national level. Consequently, threat assessment closely affects the design structures of nuclear facilities. Some of the recent security incidents, e.g. the Stuxnet worm (an advanced persistent threat) and the theft of sensitive information at a South Korean nuclear power plant (an insider threat), have shown that these attacks should be considered among the top threats to nuclear facilities. Therefore, the cybersecurity context is essential for the secure and safe use of nuclear power. In addition, States should include multiple DBT scenarios in order to protect various target materials, types of facilities, and adversary objectives. Development of a comprehensive DBT is a precondition for the establishment and further improvement of domestic state nuclear-related regulations in the

  2. Geostatistical and Stochastic Study of Flow and Tracer Transport in the Unsaturated Zone at Yucca Mountain

    International Nuclear Information System (INIS)

    Ye, Ming; Pan, Feng; Hu, Xiaolong; Zhu, Jianting

    2007-01-01

    Yucca Mountain has been proposed by the U.S. Department of Energy as the nation's long-term, permanent geologic repository for spent nuclear fuel and high-level radioactive waste. The potential repository would be located in Yucca Mountain's unsaturated zone (UZ), which acts as a critical natural barrier delaying the arrival of radionuclides at the water table. Since radionuclide transport in groundwater can pose serious threats to human health and the environment, it is important to understand how much and how fast water and radionuclides travel through the UZ to groundwater. The UZ system consists of multiple hydrogeologic units whose hydraulic and geochemical properties exhibit systematic and random spatial variation, or heterogeneity, at multiple scales. Predictions of radionuclide transport under such complicated conditions are uncertain, and the uncertainty complicates decision making and risk analysis. This project aims at using geostatistical and stochastic methods to assess the uncertainty of unsaturated flow and radionuclide transport in the UZ at Yucca Mountain. The focus of this study is the parameter uncertainty of the hydraulic and transport properties of the UZ. The parametric uncertainty arises because limited parameter measurements cannot deterministically describe the spatial variability of the parameters. In this project, the matrix porosity and permeability of the UZ and the sorption coefficient of the reactive tracer (neptunium) are treated as random variables. The corresponding propagation of parametric uncertainty is quantitatively measured using the mean, variance, and 5th and 95th percentiles of the simulated state variables (e.g., saturation, capillary pressure, percolation flux, and travel time). These statistics are evaluated using a Monte Carlo method, in which a three-dimensional flow and transport model implemented with the TOUGH2 code is executed with multiple realizations of the random model parameters. The project specifically studies uncertainty of unsaturated flow
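
    The Monte Carlo loop described above can be sketched with a cheap surrogate in place of the TOUGH2 flow model: draw parameter realizations, evaluate the model, and summarize a state variable with the same statistics (mean, variance, 5th and 95th percentiles). The travel-time surrogate and the parameter distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def travel_time_years(phi, K, L=300.0, grad=0.05):
    """Piston-flow travel time t = L*phi/(K*grad); a stand-in for TOUGH2."""
    return (L * phi / (K * grad)) / 3.156e7    # seconds -> years

phi = np.clip(rng.normal(0.12, 0.02, 10_000), 0.01, None)   # matrix porosity
K = 10.0 ** rng.normal(-7.0, 0.5, 10_000)                   # conductivity (m/s)

t = travel_time_years(phi, K)
print(f"mean={t.mean():.0f} a, var={t.var():.0f}, "
      f"5th/95th={np.percentile(t, [5, 95]).round(0)}")
```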

  3. Geostatistical Analyses of Soil Organic Carbon Concentrations in Aligodarz Watershed, Lorestan Province

    Directory of Open Access Journals (Sweden)

    Masoud Davari

    2017-01-01

    distribution of SOC were carried out with the geostatistical software GS+ (version 5.1). Maps were generated using ILWIS (version 3.3) GIS software. Results and Discussion: The results revealed that the raw SOC data have a long tail towards higher concentrations, whereas the square-root transformed data can be satisfactorily modelled by a normal distribution. The probability distribution of SOC appeared to be positively skewed and to have a positive kurtosis. The square-root transformed data showed small skewness and kurtosis, and passed the K-S normality test at a significance level higher than 0.05. Therefore, the square-root transformed SOC data were used for the analyses. The SOC concentration varied from 0.08 to 2.39%, with an arithmetic mean of 0.81% and a geometric mean of 0.73%. The coefficient of variation (CV), as an index of the overall variability of SOC, was 44.49%. According to the classification system presented by Nielson and Bouma (1985), a variable is moderately varying if the CV is between 10% and 100%. Therefore, the SOC content in the Aligodarz watershed can be considered moderately variable. The experimental variogram of SOC was fitted by an exponential model. The values of the range, nugget, sill, and nugget/sill ratio of the best-fitted model were 6.80 km, 0.058, 0.133, and 43.6%, respectively. The positive nugget value can be explained by sampling error, short-range variability, and unexplained and inherent variability. The nugget/sill ratio of 43.6% showed a moderate spatial dependence of SOC in the study area. The parameters of the exponential semivariogram model were used in the kriging method to produce a spatial distribution map of SOC in the study area. The interpolated values ranged between 0.30 and 1.40%. The southern and central parts of the study area have the highest SOC concentrations, while the northern parts have the lowest. Kriging results also showed that the major parts of the Aligodarz watershed (about 87%) have
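
    The nugget/sill classification used above is straightforward to reproduce: ratios below 25% are commonly read as strong spatial dependence, 25-75% as moderate, and above 75% as weak (after Cambardella et al.). The inputs below are the fitted exponential-model parameters reported in the abstract.

```python
nugget, sill, range_km = 0.058, 0.133, 6.80    # fitted exponential model (text)
ratio = 100.0 * nugget / sill
cls = "strong" if ratio < 25 else "moderate" if ratio <= 75 else "weak"
print(f"nugget/sill = {ratio:.1f}% -> {cls} spatial dependence")   # 43.6%
```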

  4. 7 CFR 3550.163 - Transfer of security and assumption of indebtedness.

    Science.gov (United States)

    2010-01-01

    § 3550.163 Transfer of security and assumption of indebtedness. (a) General policy. RHS mortgages contain... transferred with an assumption of the indebtedness. If it is in the best interest of the Government, RHS will...

  5. School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey

    Science.gov (United States)

    Sabanci, Ali

    2008-01-01

    This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…

  6. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    Science.gov (United States)

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  7. 75 FR 63380 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2010-10-15

    ...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in November 2010... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  8. 76 FR 2578 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-01-14

    ...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in February 2011... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  9. 77 FR 74353 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-12-14

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  10. 78 FR 2881 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-01-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  11. 77 FR 28477 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-05-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June... title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in the... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  12. 78 FR 62426 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-10-22

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  13. 77 FR 8730 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-02-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...

  14. 77 FR 41270 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-07-13

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...

  15. 76 FR 41689 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-07-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...

  16. 77 FR 68685 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-11-16

    ... regulation for valuation dates in December 2012. The interest assumptions are used for paying benefits under... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  17. 77 FR 22215 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-04-13

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in May... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  18. 78 FR 49682 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-08-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  19. 78 FR 68739 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-11-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in the... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  20. 75 FR 69588 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2010-11-15

    ... interest assumptions under the regulation for valuation dates in December 2010. Interest assumptions are...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...

  1. 77 FR 62433 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-10-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  2. 76 FR 8649 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-02-15

    ...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in March 2011... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  3. 77 FR 48855 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-08-15

    ... to prescribe interest assumptions under the regulation for valuation dates in September 2012. The... interest assumptions are intended to reflect current conditions in the financial and annuity markets... Assets in Single-Employer Plans (29 CFR part 4044) prescribes interest assumptions for valuing benefits...

  4. Exploring the Influence of Ethnicity, Age, and Trauma on Prisoners' World Assumptions

    Science.gov (United States)

    Gibson, Sandy

    2011-01-01

    In this study, the author explores world assumptions of prisoners, how these assumptions vary by ethnicity and age, and whether trauma history affects world assumptions. A random sample of young and old prisoners, matched for prison location, was drawn from the New Jersey Department of Corrections prison population. Age and ethnicity had…

  5. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    Science.gov (United States)

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
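
    The misconception this record highlights is easy to demonstrate: linear regression assumes normality of the errors, not of the raw variables. A short sketch with hypothetical data (statsmodels/scipy assumed available):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.exponential(2.0, 300)                 # the predictor is clearly non-normal
    y = 1.5 + 0.8 * x + rng.normal(0, 1, 300)     # but the errors are normal

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print(stats.shapiro(x).pvalue)           # tiny p: the variable is non-normal (irrelevant)
    print(stats.shapiro(fit.resid).pvalue)   # large p: the residuals are normal (what matters)
    ```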

  6. Study of the permeability up-scaling by direct filtering of geostatistical model; Etude du changement d'echelle des permeabilites par filtrage direct du modele geostatistique

    Energy Technology Data Exchange (ETDEWEB)

    Zargar, G.

    2005-10-15

    In this thesis, we present a new approach, which consists in directly up-scaling the geostatistical permeability distribution rather than the individual realizations. Practically, filtering techniques based on the FFT (Fast Fourier Transform) allow us to generate geostatistical images which sample the up-scaled distributions. In the log-normal case, a hydraulic equivalence criterion is proposed, allowing re-estimation of the geometric mean of the permeabilities. In the anisotropic case, the effective geometric mean becomes a tensor which depends on the level of filtering used, and it can be calculated by a renormalisation method. The method was then generalized to the categorial model. Numerical tests of the method were set up for isotropic, anisotropic and categorial models, and these show good agreement with theory. (author)
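
    An illustrative sketch of the FFT-filtering idea in this record: generate a log-normal permeability field spectrally, then low-pass filter it to sample an "up-scaled" distribution. The covariance model, grid, and cutoff below are assumptions, not the thesis's actual parameters:

    ```python
    import numpy as np

    n, dx, corr_len = 256, 1.0, 8.0
    k = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2)

    # Spectral density of an (assumed) Gaussian covariance for log-permeability
    spectrum = np.exp(-(np.pi * corr_len * kk) ** 2)
    rng = np.random.default_rng(42)
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    logK = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
    logK = (logK - logK.mean()) / logK.std()      # unit-variance log-field

    # Direct filtering: keep only wavenumbers below a cutoff (coarser support)
    cutoff = 0.05
    logK_up = np.real(np.fft.ifft2(np.fft.fft2(logK) * (kk <= cutoff)))

    # Filtering narrows the up-scaled log-permeability distribution
    print("log-K variance before/after filtering:", logK.var(), logK_up.var())
    ```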

  7. [Multivariate geostatistics and GIS-based approach to study the spatial distribution and sources of heavy metals in agricultural soil in the Pearl River Delta, China].

    Science.gov (United States)

    Cai, Li-mei; Ma, Jin; Zhou, Yong-zhang; Huang, Lan-chun; Dou, Lei; Zhang, Cheng-bo; Fu, Shan-ming

    2008-12-01

    One hundred and eighteen surface soil samples were collected from Dongguan City and analyzed for concentrations of Cu, Zn, Ni, Cr, Pb, Cd, As and Hg, as well as pH and OM. The spatial distribution and sources of soil heavy metals were studied using multivariate geostatistical methods and GIS techniques. The results indicated that concentrations of Cu, Zn, Ni, Pb, Cd and Hg exceeded the soil background content in Guangdong province, with Pb, Cd and Hg greatly exceeding it. The results of factor analysis grouped Cu, Zn, Ni, Cr and As in Factor 1, Pb and Hg in Factor 2, and Cd in Factor 3. The spatial maps based on geostatistical analysis show a definite association of Factor 1 with the soil parent material, while Factor 2 was mainly affected by industries. The spatial distribution of Factor 3 was attributed to anthropogenic influence.

  8. Geostatistical Analysis of Mesoscale Spatial Variability and Error in SeaWiFS and MODIS/Aqua Global Ocean Color Data

    Science.gov (United States)

    Glover, David M.; Doney, Scott C.; Oestreich, William K.; Tullo, Alisdair W.

    2018-01-01

    Mesoscale (10-300 km, weeks to months) physical variability strongly modulates the structure and dynamics of planktonic marine ecosystems via both turbulent advection and environmental impacts upon biological rates. Using structure function analysis (geostatistics), we quantify the mesoscale biological signals within global 13 year SeaWiFS (1998-2010) and 8 year MODIS/Aqua (2003-2010) chlorophyll a ocean color data (Level-3, 9 km resolution). We present geographical distributions, seasonality, and interannual variability of key geostatistical parameters: unresolved variability or noise, resolved variability, and spatial range. Resolved variability is nearly identical for both instruments, indicating that geostatistical techniques isolate a robust measure of biophysical mesoscale variability largely independent of measurement platform. In contrast, unresolved variability in MODIS/Aqua is substantially lower than in SeaWiFS, especially in oligotrophic waters where previous analysis identified a problem for the SeaWiFS instrument likely due to sensor noise characteristics. Both records exhibit a statistically significant relationship between resolved mesoscale variability and the low-pass filtered chlorophyll field horizontal gradient magnitude, consistent with physical stirring acting on large-scale gradient as an important factor supporting observed mesoscale variability. Comparable horizontal length scales for variability are found from tracer-based scaling arguments and geostatistical decorrelation. Regional variations between these length scales may reflect scale dependence of biological mechanisms that also create variability directly at the mesoscale, for example, enhanced net phytoplankton growth in coastal and frontal upwelling and convective mixing regions. Global estimates of mesoscale biophysical variability provide an improved basis for evaluating higher resolution, coupled ecosystem-ocean general circulation models, and data assimilation.

  9. Comparing the applicability of some geostatistical methods to predict the spatial distribution of topsoil Calcium Carbonate in part of farmland of Zanjan Province

    Science.gov (United States)

    Sarmadian, Fereydoon; Keshavarzi, Ali

    2010-05-01

    Most soils in Iran are located in arid and semi-arid regions and have high pH (more than 7) and a high amount of calcium carbonate, which leads to their calcification. In calcareous soils, plant growth and production are difficult. Much of this problem relates to the high pH and high concentration of calcium ions, which cause fixation and unavailability of pH-dependent elements, especially phosphorus and some micronutrients such as Fe, Zn, Mn and Cu. Prediction of soil calcium carbonate in non-sampled areas and mapping the calcium carbonate variability are very important for sustainable management of soil fertility. So, this research was done with the aim of evaluating and analyzing the spatial variability of topsoil calcium carbonate as an aspect of soil fertility and plant nutrition, comparing geostatistical methods such as kriging and co-kriging, and mapping topsoil calcium carbonate. For the geostatistical analysis, sampling was done with a stratified random method, and soil samples from 0 to 15 cm depth were collected with an auger at 23 locations. In the co-kriging method, salinity data were used as the auxiliary variable. For comparing and evaluating the geostatistical methods, cross-validation was used with RMSE as the statistical criterion. The results showed that the co-kriging method had the highest correlation coefficient and the lowest RMSE, and was more accurate than kriging for predicting calcium carbonate content in non-sampled areas.
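
    A leave-one-out cross-validation skeleton matching the comparison in this record. Everything here is a hypothetical stand-in: `predict_at` is any interpolator's interface (a kriging or co-kriging predictor would plug in the same way), IDW is used only as one concrete candidate, and the 23 sample points are invented:

    ```python
    import numpy as np

    def loocv_rmse(coords, values, predict_at):
        """Leave each sample out, predict it from the rest, return RMSE."""
        errors = []
        for i in range(len(values)):
            mask = np.arange(len(values)) != i
            pred = predict_at(coords[mask], values[mask], coords[i])
            errors.append(pred - values[i])
        return float(np.sqrt(np.mean(np.square(errors))))

    def idw(train_xy, train_z, target_xy, power=2.0):
        # Simple inverse-distance weighting as one candidate interpolator
        d = np.linalg.norm(train_xy - target_xy, axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        return float(np.sum(w * train_z) / np.sum(w))

    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 10, (23, 2))     # 23 auger locations, as in the study
    z = rng.normal(20, 5, 23)            # hypothetical CaCO3 (%)
    print("IDW LOOCV RMSE:", loocv_rmse(xy, z, idw))
    ```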

  10. Large to intermediate-scale aquifer heterogeneity in fine-grain dominated alluvial fans (Cenozoic As Pontes Basin, northwestern Spain): insight based on three-dimensional geostatistical reconstruction

    Science.gov (United States)

    Falivene, O.; Cabrera, L.; Sáez, A.

    2007-08-01

    Facies reconstructions are used in hydrogeology to improve the interpretation of aquifer permeability distribution. In the absence of sufficient data to define the heterogeneity due to geological processes, uncertainties in the distribution of aquifer hydrofacies and characteristics may appear. Geometric and geostatistical methods are used to understand and model aquifer hydrofacies distribution, providing models to improve comprehension and development of aquifers. However, these models require some input statistical parameters that can be difficult to infer from the study site. A three-dimensional reconstruction of a kilometer scale fine-grain dominated Cenozoic alluvial fan derived from more than 200 continuously cored, closely spaced, and regularly distributed wells is presented. The facies distributions were reconstructed using a genetic stratigraphic subdivision and a deterministic geostatistical algorithm. The reconstruction is only slightly affected by variations in the geostatistical input parameters because of the high-density data set. Analysis of the reconstruction allowed identification in the proximal to medial alluvial fan zones of several laterally extensive sand bodies with relatively higher permeability; these sand bodies were quantified in terms of volume, mean thickness, maximum area, and maximum equivalent diameter. These quantifications provide trends and geological scenarios for input statistical parameters to model aquifer systems in similar alluvial fan depositional settings.

  11. Comparison of geostatistical interpolation and remote sensing techniques for estimating long-term exposure to ambient PM2.5 concentrations across the continental United States.

    Science.gov (United States)

    Lee, Seung-Jae; Serre, Marc L; van Donkelaar, Aaron; Martin, Randall V; Burnett, Richard T; Jerrett, Michael

    2012-12-01

    A better understanding of the adverse health effects of chronic exposure to fine particulate matter (PM2.5) requires accurate estimates of PM2.5 variation at fine spatial scales. Remote sensing has emerged as an important means of estimating PM2.5 exposures, but relatively few studies have compared remote-sensing estimates to those derived from monitor-based data. We evaluated and compared the predictive capabilities of remote sensing and geostatistical interpolation. We developed a space-time geostatistical kriging model to predict PM2.5 over the continental United States and compared resulting predictions to estimates derived from satellite retrievals. The kriging estimate was more accurate for locations that were within about 100 km of a monitoring station, whereas the remote sensing estimate was more accurate for locations that were more than 100 km from a monitoring station. Based on this finding, we developed a hybrid map that combines the kriging and satellite-based PM2.5 estimates. We found that for most of the populated areas of the continental United States, geostatistical interpolation produced more accurate estimates than remote sensing. The differences between the estimates resulting from the two methods, however, were relatively small. In areas with extensive monitoring networks, the interpolation may provide more accurate estimates, but in the many areas of the world without such monitoring, remote sensing can provide useful exposure estimates that perform nearly as well.
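
    The hybrid rule described here reduces to a per-pixel choice. A minimal sketch, assuming both surfaces live on the same grid; the 100 km threshold comes from the abstract, while the arrays are invented:

    ```python
    import numpy as np

    def hybrid_pm25(dist_to_monitor_km, kriged, satellite, threshold_km=100.0):
        """Use the kriged estimate near monitors, the satellite retrieval elsewhere."""
        return np.where(dist_to_monitor_km < threshold_km, kriged, satellite)

    dist = np.array([12.0, 85.0, 140.0, 300.0])
    krig = np.array([8.1, 9.4, 11.0, 13.2])
    sat = np.array([8.5, 9.0, 10.2, 12.1])
    print(hybrid_pm25(dist, krig, sat))   # -> [8.1, 9.4, 10.2, 12.1]
    ```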

  12. Geostatistical analysis of the effects of stage and roughness on reach-scale spatial patterns of velocity and turbulence intensity

    Science.gov (United States)

    Legleiter, Carl J.; Phelps, Tracy L.; Wohl, Ellen E.

    2007-01-01

    Although previous research has documented well-organized interactions between the turbulent flow field and an irregular boundary, the spatial variability of turbulent flow characteristics at the reach-scale remains poorly understood. In this paper, we present detailed field measurements of three-dimensional flow velocities and turbulence intensities in a high-gradient, cobble-bed riffle from three discharges; additional data on sediment grain size and bed topography were used to characterize boundary roughness. An acoustic Doppler velocimeter was used to measure velocities along five cross-sections within a 6 m long reach of the North Fork Cache La Poudre River; vertical profiles were also measured along the channel thalweg. We adopted a spatially explicit stochastic hydraulic approach and focused not on coherent flow structures per se but rather time-averaged, reach-scale variability and spatial pattern. Scaling velocities and turbulence intensities by the reach-averaged friction velocity U* accounted for changes in flow depth and enabled comparisons among the three discharges. We quantified the effects of stage and roughness by assessing differences among probability distributions of hydraulic quantities and by examining geostatistical metrics of spatial variability. We computed semivariograms for streamwise and transverse directions and fit parametric models to summarize the spatial structure of each variable at each discharge. Cross-correlograms were also used to describe the local and lagged effects of boundary roughness on flow characteristics. Although the probability distributions yielded some insight, incorporating spatial information revealed important elements of stage-dependent flow structure. The development of secondary currents and flow convergence at higher stages were clearly documented in maps and semivariograms. In general, the spatial structure of the flow field became smoother and more continuous as stage increased and the effects of boundary

  13. Spatial heterogeneity and risk factors for stunting among children under age five in Ethiopia: A Bayesian geo-statistical model.

    Directory of Open Access Journals (Sweden)

    Seifu Hagos

    Full Text Available Understanding the spatial distribution of stunting and the underlying factors operating at meso-scale is of paramount importance for intervention design and implementation. Yet little is known about the spatial distribution of stunting, and some discrepancies are documented on the relative importance of reported risk factors. Therefore, the present study aims at exploring the spatial distribution of stunting at meso (district) scale, and evaluates the effect of spatial dependency on the identification of risk factors and their relative contribution to the occurrence of stunting and severe stunting in a rural area of Ethiopia. A community-based cross-sectional study was conducted to measure the occurrence of stunting and severe stunting among children aged 0-59 months. Additionally, we collected relevant information on anthropometric measures, dietary habits, and parent- and child-related demographic and socio-economic status. Latitude and longitude of surveyed households were also recorded. Local Anselin Moran's I was calculated to investigate the spatial variation of stunting prevalence and identify potential local pockets (hotspots) of high prevalence. Finally, we employed a Bayesian geostatistical model, which accounted for the spatial dependency structure in the data, to identify potential risk factors for stunting in the study area. Overall, the prevalence of stunting and severe stunting in the district was 43.7% [95% CI: 40.9, 46.4] and 21.3% [95% CI: 19.5, 23.3], respectively. We identified statistically significant clusters of high prevalence of stunting (hotspots) in the eastern part of the district and clusters of low prevalence (cold spots) in the western part. We found that including the spatial structure of the data in the Bayesian model improved the fit of the stunting model. The Bayesian geostatistical model indicated that the risk of stunting increased as the child's age increased (OR 4.74; 95% Bayesian credible interval [BCI]:3
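
    A compact numpy sketch of the local (Anselin) Moran's I statistic used in this record to flag hotspots. The k-nearest-neighbour weights, the coordinates, and the outcome below are assumptions; the study's actual weights specification is not stated here:

    ```python
    import numpy as np

    def local_morans_i(coords, y, k=5):
        """Local Moran's I per site: positive values indicate local clustering."""
        z = y - y.mean()
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        W = np.zeros_like(d)
        nn = np.argsort(d, axis=1)[:, :k]            # k nearest neighbours
        for i, idx in enumerate(nn):
            W[i, idx] = 1.0 / k                      # row-standardised weights
        m2 = (z ** 2).sum() / len(y)
        return z * (W @ z) / m2

    rng = np.random.default_rng(7)
    xy = rng.uniform(0, 1, (100, 2))
    stunted = (xy[:, 0] > 0.6).astype(float) + rng.normal(0, 0.2, 100)  # eastern cluster
    I = local_morans_i(xy, stunted)
    print("share of sites with positive local I:", (I > 0).mean())
    ```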

  14. Hierarchical probabilistic regionalization of volcanism for Sengan region in Japan using multivariate statistical techniques and geostatistical interpolation techniques

    International Nuclear Information System (INIS)

    Park, Jinyong; Balasingham, P.; McKenna, Sean Andrew; Kulatilake, Pinnaduwa H. S. W.

    2004-01-01

    Sandia National Laboratories, under contract to Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption using multivariate statistics and geo-statistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical techniques and geostatistical interpolation techniques on the geologic data provided by NUMO. A workshop report produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) on volcanism lists a set of most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region in Japan from the data provided by NUMO revealed that data are not available at the same locations for all the important geologic variables. In other words, the geologic variable vectors were found to be incomplete spatially. However, it is necessary to have complete geologic variable vectors to perform multivariate statistical analyses. As a first step towards constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid system were selected. The data available for each geologic variable on a geographic coordinate system were transferred to the aforementioned grid system. Also the recorded data on volcanic activity for Sengan region were produced on the same grid system. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables that are most important for volcanism. In the regionalized classification procedure, this step is known as the variable selection step. The following variables were determined as most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater

  15. Spatial Distribution and Mobility Assessment of Carcinogenic Heavy Metals in Soil Profiles Using Geostatistics and Random Forest, Boruta Algorithm

    Directory of Open Access Journals (Sweden)

    Asma Shaheen

    2018-03-01

    Full Text Available In third world countries, industries mainly cause environmental contamination due to lack of environmental policies or oversight during their implementation. The Sheikhupura industrial zone, which includes industries such as tanneries, leather, chemical, textiles, and colour and dyes, contributes massive amounts of untreated effluents that are released directly into drains and used for the irrigation of crops and vegetables. This practice causes not only soil contamination with an excessive amount of heavy metals, but is also considered a source of toxicity in the food chain, i.e., bioaccumulation in plants and ultimately in human body organs. The objective of this research study was to assess the spatial distribution of the heavy metals chromium (Cr), cadmium (Cd), and lead (Pb) at three depths of soil using geostatistics, and the selection of significant contributing variables to soil contamination using the Random Forest (RF) function of the Boruta Algorithm. A total of 60 sampling locations were selected in the study area to collect soil samples (180 samples) at three depths (0–15 cm, 15–30 cm, and 60–90 cm). The soil samples were analysed for their physico-chemical properties, i.e., soil saturation, electrical conductivity (EC), organic matter (OM), pH, phosphorus (P), and potassium (K), and for Cr, Cd, and Pb, using standard laboratory procedures. The data were analysed with comprehensive statistics and geostatistical techniques. The correlation coefficient matrix between the heavy metals and the physico-chemical properties revealed that electrical conductivity (EC) had a significant (p ≤ 0.05) negative correlation with Cr, Cd, and Pb. The RF function of the Boruta Algorithm employed soil depth as a classifier and ranked the significant soil contamination parameters (Cr, Cd, Pb, EC, and P) in relation to depth. The mobility factor indicated the leachate percentage of heavy metals at different vertical depths of soil. The spatial distribution pattern of
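
    The core of the Boruta idea used in this record: compare each feature's random-forest importance against shuffled "shadow" copies, and keep features that beat the best shadow. A simplified sketch with invented soil covariates and response (the real study used the Boruta Algorithm proper, with repeated testing rather than this single pass):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 180                                      # 180 soil samples, as in the study
    X = rng.normal(size=(n, 5))                  # e.g. EC, OM, pH, P, K (hypothetical)
    y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 0.5, n)   # synthetic "Cr" response

    # Shadow features: column-wise shuffles that destroy any relation to y
    shadows = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(np.hstack([X, shadows]), y)

    real_imp = rf.feature_importances_[:5]
    shadow_max = rf.feature_importances_[5:].max()
    print("tentatively confirmed features:", np.where(real_imp > shadow_max)[0])  # expect 0 and 2
    ```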

  16. Geostatistical interpolation of daily rainfall at catchment scale: the use of several variogram models in the Ourthe and Ambleve catchments, Belgium

    Directory of Open Access Journals (Sweden)

    S. Ly

    2011-07-01

    Full Text Available Spatial interpolation of precipitation data is of great importance for hydrological modelling. Geostatistical methods (kriging) are widely applied in spatial interpolation from point measurement to continuous surfaces. The first step in kriging computation is the semi-variogram modelling, which usually used only one variogram model for all-moment data. The objective of this paper was to develop different algorithms of spatial interpolation for daily rainfall on 1 km2 regular grids in the catchment area and to compare the results of geostatistical and deterministic approaches. This study leaned on 30-yr daily rainfall data of 70 raingages in the hilly landscape of the Ourthe and Ambleve catchments in Belgium (2908 km2). This area lies between 35 and 693 m in elevation and consists of river networks, which are tributaries of the Meuse River. For geostatistical algorithms, seven semi-variogram models (logarithmic, power, exponential, Gaussian, rational quadratic, spherical and penta-spherical) were fitted to the daily sample semi-variogram on a daily basis. These seven variogram models were also adopted to avoid negative interpolated rainfall. The elevation, extracted from a digital elevation model, was incorporated into multivariate geostatistics. Seven validation raingages and cross validation were used to compare the interpolation performance of these algorithms applied to different densities of raingages. We found that among the seven variogram models used, the Gaussian model was the most frequently best fit. Using seven variogram models can avoid negative daily rainfall in ordinary kriging. The negative estimates of kriging were observed for convective more than stratiform rain. The performance of the different methods varied slightly according to the density of raingages, particularly between 8 and 70 raingages, but it was much different for interpolation using 4 raingages. Spatial interpolation with the geostatistical and
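
    A sketch of the per-day model selection this record describes: fit several candidate variogram families to the sample semi-variogram and keep the best by sum of squared errors. The lag/semi-variance points are invented (see the SOC sketch earlier for how they can be computed from data), and only three of the paper's seven families are shown:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    models = {
        "exponential": lambda h, c0, c, a: c0 + c * (1 - np.exp(-h / a)),
        "gaussian":    lambda h, c0, c, a: c0 + c * (1 - np.exp(-(h / a) ** 2)),
        "spherical":   lambda h, c0, c, a: c0 + np.where(
            h < a, c * (1.5 * h / a - 0.5 * (h / a) ** 3), c),
    }

    h = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0])    # lag distances (km), hypothetical
    gamma = np.array([1.1, 2.4, 3.8, 4.4, 4.7, 4.8])    # sample semi-variogram values

    best = None
    for name, f in models.items():
        try:
            p, _ = curve_fit(f, h, gamma, p0=[0.5, 4.0, 10.0], bounds=(0, np.inf))
            sse = float(np.sum((f(h, *p) - gamma) ** 2))
            if best is None or sse < best[1]:
                best = (name, sse, p)
        except RuntimeError:
            continue   # a family that fails to converge is simply skipped
    print("best-fitting model:", best[0], "SSE:", round(best[1], 4))
    ```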

  17. Merging parallel tempering with sequential geostatistical resampling for improved posterior exploration of high-dimensional subsurface categorical fields

    Science.gov (United States)

    Laloy, Eric; Linde, Niklas; Jacques, Diederik; Mariethoz, Grégoire

    2016-04-01

    The sequential geostatistical resampling (SGR) algorithm is a Markov chain Monte Carlo (MCMC) scheme for sampling from possibly non-Gaussian, complex spatially-distributed prior models such as geologic facies or categorical fields. In this work, we highlight the limits of standard SGR for posterior inference of high-dimensional categorical fields with realistically complex likelihood landscapes and benchmark a parallel tempering implementation (PT-SGR). Our proposed PT-SGR approach is demonstrated using synthetic (error corrupted) data from steady-state flow and transport experiments in categorical 7575- and 10,000-dimensional 2D conductivity fields. In both case studies, every SGR trial gets trapped in a local optimum while PT-SGR maintains a higher diversity in the sampled model states. The advantage of PT-SGR is most apparent in an inverse transport problem where the posterior distribution is made bimodal by construction. PT-SGR then converges towards the appropriate data misfit much faster than SGR and partly recovers the two modes. In contrast, for the same computational resources SGR does not fit the data to the appropriate error level and hardly produces a locally optimal solution that looks visually similar to one of the two reference modes. Although PT-SGR clearly surpasses SGR in performance, our results also indicate that using a small number (16-24) of temperatures (and thus parallel cores) may not permit complete sampling of the posterior distribution by PT-SGR within a reasonable computational time (less than 1-2 weeks).
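
    The ingredient parallel tempering adds to a sampler like SGR is the periodic state swap between temperature levels. A generic numpy sketch of only that swap-acceptance step (the temperature ladder and energies are invented; SGR itself is not shown):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    betas = 1.0 / np.array([1.0, 1.5, 2.25, 3.4])    # inverse temperatures (assumed ladder)
    energies = np.array([10.2, 11.5, 13.0, 15.8])    # negative log-posteriors of each chain

    def propose_swaps(betas, energies, rng):
        """Attempt Metropolis swaps between adjacent temperature levels."""
        for i in range(len(betas) - 1):
            # Accept with probability min(1, exp((beta_i - beta_j) * (E_i - E_j)))
            log_alpha = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
            if np.log(rng.uniform()) < log_alpha:
                energies[i], energies[i + 1] = energies[i + 1], energies[i]
        return energies

    print(propose_swaps(betas, energies.copy(), rng))
    ```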

  18. Spatial variability of soil pH based on GIS combined with geostatistics in Panzhihua tobacco area

    International Nuclear Information System (INIS)

    Du Wei; Wang Changquan; Li Bing; Li Qiquan; Du Qian; Hu Jianxin; Liu Chaoke

    2012-01-01

    GIS and geostatistics were utilized to study the spatial variability of soil pH in the Panzhihua tobacco area. Results showed that pH values in this area ranged from 4.5 to 8.3, mostly between 5.5 and 6.5, and only in a few areas were lower than 5.0 or higher than 7.0, which can meet the needs of high-quality tobacco production. The best-fitting variogram model was the exponential model, with a nugget/sill ratio of soil pH of 13.61%, indicating strong spatial correlation. The range was 5.40 km and the coefficient of determination was 0.491. The spatial variability of soil pH was mainly caused by structural factors such as cane, topography and soil type. Soil pH in the Panzhihua tobacco area also showed an increasing trend from northwest to southeast. The pH of some areas in Caochang, Gonghe and Yumen was lower, and in Dalongtan it was slightly higher. (authors)

  19. Groundwater levels time series sensitivity to pluviometry and air temperature: a geostatistical approach to Sfax region, Tunisia.

    Science.gov (United States)

    Triki, Ibtissem; Trabelsi, Nadia; Hentati, Imen; Zairi, Moncef

    2014-03-01

    In this paper, the pattern of groundwater level fluctuations is investigated by statistical techniques for 24 monitoring wells located in an unconfined coastal aquifer in Sfax (Tunisia) for the period from 1997 to 2006. Firstly, a geostatistical study is performed to characterize the temporal behavior of the data sets in terms of variograms and to make predictions about the value of the groundwater level at unsampled times. Secondly, multivariate statistical methods, i.e., principal component analysis (PCA) and cluster analysis (CA), of the time series of groundwater levels are used to classify groundwater hydrographs with regard to identical fluctuation patterns. Three groundwater groups (A, B, and C) were identified. In group "A," the water level decreases continuously throughout the study period with rapid annual cyclic variation, whereas in group "B," the water level contains much less high-frequency variation. The wells of group "C" represent a steady and gradual increase of groundwater levels caused by artificial recharge of the aquifer. Furthermore, a cross-correlation analysis is used to investigate the aquifer response to local rainfall and temperature records. The results revealed that temperature affects the variation of the groundwater level of group A wells more than rainfall does. However, the second and third groups are less affected by rainfall or temperature.
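
    A sketch of the lagged cross-correlation used here to relate groundwater levels to climate drivers. The monthly series are synthetic, with the level deliberately lagging rainfall by about three months so the peak lag is recoverable:

    ```python
    import numpy as np

    def cross_corr(x, y, max_lag):
        """Pearson correlation of x(t) with y(t - lag), for lag = 0..max_lag."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        return [float(np.corrcoef(x[lag:], y[:len(y) - lag])[0, 1])
                for lag in range(max_lag + 1)]

    t = np.arange(120)                                   # ten years, monthly
    rng = np.random.default_rng(5)
    rain = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, 120)
    level = np.roll(rain, 3) * 0.02 + 10                 # level lags rainfall by ~3 months
    print(np.argmax(cross_corr(level, rain, 12)))        # expected: 3
    ```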

  20. Survey and Zoning of Soil Physical and Chemical Properties Using Geostatistical Methods in GIS (Case Study: Miankangi Region in Sistan

    Directory of Open Access Journals (Sweden)

    M. Hashemi

    2017-02-01

    Full Text Available Introduction: In order to provide a database, it is essential to have access to accurate information on soil spatial variation for sustainable soil management, such as the proper application of fertilizers. Spatial variations in soil properties are common, and understanding these changes is important, particularly in agricultural lands, for careful planning and land management. Materials and Methods: To this end, in winter 1391, 189 undisturbed soil samples (0-30 cm depth) in a regular lattice with a spacing of 500 m were gathered from the surface of Miankangi land, Sistan plain, and their physical and chemical properties were studied. The land area of the region is about 4,500 hectares; the average elevation of the studied area is 489.2 meters above sea level, with different land uses. Soil texture was measured by the hydrometer method (11); EC, pH (39), calcium carbonate equivalent (37) and the saturation percentage of the soils were also determined. Kriging, Co-Kriging, Inverse Distance Weighting and Local Polynomial Interpolation techniques were evaluated for zoning, to produce maps of the soil characteristics of the study area, and to select the best geostatistical method. Cross-validation techniques and the Root Mean Square Error (RMSE) were used. Results and Discussion: Normality test results showed that all of the soil properties except calcium carbonate and soil clay content had normal distributions. In addition, the results of the correlation test showed that the soil saturation percentage was positively correlated with silt content (r=0.43 and p

  1. Impact of additional small-scale survey data on the geostatistical analyses of demersal fish species in the North Sea

    Directory of Open Access Journals (Sweden)

    Vanessa Stelzenmüller

    2005-12-01

    Full Text Available Geostatistical tools have been used to study the impact of additional small-scale catch data (star survey design) on the spatial analysis of fish, regarding different biological groups of dab, Limanda limanda, and whiting, Merlangius merlangus. A standard survey carried out in January (2001-2003) in a meso-scale area in the German Bight was modified by additional small-scale sampling in 2002 and 2003. Adopting the star survey reduced the small-scale variability for medium-sized and male dab, as indicated by lower values of the nugget effect and an increased resolution of the spatial dependency. For whiting no reduction in the small-scale variability could be detected; a significant difference in the spatial structuring was only found for two different size groups of whiting. Uncertainty of mean catches of dab and whiting was reduced in 2002, while in 2003 the effect of the star survey was less pronounced due to the high local density of the nearby stations. We conclude that the star survey design can be an inexpensive and effective procedure — depending on the species studied and/or the positioning of the nearby stations — when a minimised small-scale variability and a reduction of uncertainty in mean biomass of fish are the focus of interest.

  2. Geostatistics and Geographic Information Systems to Study the Spatial Distribution of Grapholita molesta (Busck) (Lepidoptera: Tortricidae) in Peach Fields.

    Science.gov (United States)

    Duarte, F; Calvo, M V; Borges, A; Scatoni, I B

    2015-08-01

    The oriental fruit moth, Grapholita molesta (Busck), is the most serious pest in peach, and several insecticide applications are required to reduce crop damage to acceptable levels. Geostatistics and Geographic Information Systems (GIS) are employed to measure the range of spatial correlation of G. molesta in order to define the optimum sampling distance for performing spatial analysis and to determine the current distribution of the pest in peach orchards of southern Uruguay. From 2007 to 2010, 135 pheromone traps per season were installed and georeferenced in peach orchards distributed over 50,000 ha. Male adult captures were recorded weekly from September to April. Structural analysis of the captures was performed, yielding 14 semivariograms for the accumulated captures analyzed by generation and growing season. Two sets of maps were constructed to describe the pest distribution. Nine significant models were obtained in the 14 evaluated periods. The range estimated for the correlation was from 908 to 6884 m. Three hot spots of high population level and some areas with comparatively low populations were constant over the 3-year period, while there is a greater variation in the size of the population in different generations and years in other areas.

  3. Detection of terrain indices related to soil salinity and mapping salt-affected soils using remote sensing and geostatistical techniques.

    Science.gov (United States)

    Triki Fourati, Hela; Bouaziz, Moncef; Benzina, Mourad; Bouaziz, Samir

    2017-04-01

    Traditional methods of surveying soil properties over landscapes are dramatically costly and time-consuming. Thus, remote sensing is a proper choice for monitoring environmental problems. This research aims to study the effect of environmental factors on soil salinity and to map the spatial distribution of this salinity over the south-eastern part of Tunisia by means of remote sensing and geostatistical techniques. For this purpose, we used Advanced Spaceborne Thermal Emission and Reflection Radiometer data to depict geomorphological parameters: elevation, slope, plan curvature (PLC), profile curvature (PRC), and aspect. Pearson correlation between these parameters and soil electrical conductivity (ECsoil) showed that mainly slope and elevation affect the concentration of salt in the soil. Moreover, spectral analysis illustrated the high potential of short-wave infrared (SWIR) bands to identify saline soils. To map soil salinity in southern Tunisia, ordinary kriging (OK), minimum distance (MD) classification, and simple regression (SR) were used. The findings showed that the ordinary kriging technique provides the most reliable performance for identifying and classifying saline soils over the study area, with a root mean square error of 1.83 and a mean error of 0.018.

  4. Thin sand modeling based on geostatistic, uncertainty and risk analysis in Zuata Principal field, Orinoco oil belt

    Energy Technology Data Exchange (ETDEWEB)

    Cardona, W.; Aranaga, R.; Siu, P.; Perez, L. [PDVSA Petroleos de Venezuela SA, Caracas (Venezuela, Bolivarian Republic of)

    2009-07-01

    The geological modelling of the Zuata Principal field in Venezuela, particularly Junin Block 2 belonging to the Orinoco oil belt, is a challenge because of the presence of thin sand bodies in an unexploited zone. This paper presented the results obtained from a horizontal well that contacted 96 per cent of pay count sand in the field. Geostatistical modelling and sensitivity analysis were used for planning the well. The model was generated by processing and interpreting information from production and exploratory fishbones. Information provided by nearby wildcat wells suggested that the proposed area was not prospective. However, information provided by several exploratory fishbones offered some possibility of draining additional reserves. From the available information, facies models and uncertainty analysis were made to statistically determine the best option, notably whether to drill additional stratwells to obtain a more accurate characterization or to apply the already obtained model for drilling a production well in the investigated area. The study showed that geological uncertainty does not only depend on how much information is available, but also on how this information can be processed and interpreted. Decision analysis provides a rational basis for dealing with risk and uncertainties. 4 refs., 7 tabs., 7 figs., 1 appendix.

  5. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    Science.gov (United States)

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhances their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
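
    The reason joint simulation matters for the aggregated measures this record discusses: per-pixel (marginal) uncertainty cannot give the uncertainty of a regional mean, but an ensemble of jointly simulated fields can. A synthetic sketch, with the spatially correlated "realisations" faked by a shared field-wide component:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_real, n_pix = 500, 1000

    # Jointly simulated prevalence realisations with strong spatial correlation
    shared = rng.normal(0.3, 0.05, (n_real, 1))          # field-wide component
    joint = np.clip(shared + rng.normal(0, 0.02, (n_real, n_pix)), 0, 1)

    regional_mean = joint.mean(axis=1)                   # one regional mean per realisation
    print("mean prevalence:", regional_mean.mean().round(3))
    print("aggregated 95% interval:",
          np.quantile(regional_mean, [0.025, 0.975]).round(3))

    # Ignoring the joint structure (treating pixels as independent) understates it badly
    naive_se = joint.std(axis=0).mean() / np.sqrt(n_pix)
    print("naive independent-pixel SE:", naive_se.round(4))
    ```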

  6. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    Directory of Open Access Journals (Sweden)

    Peter W Gething

    2010-04-01

    Full Text Available Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhances their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.

  7. The effect of training image and secondary data integration with multiple-point geostatistics in groundwater modelling

    DEFF Research Database (Denmark)

    He, Xin; Sonnenborg, Torben; Jørgensen, F.

    2014-01-01

    Multiple-point geostatistical simulation (MPS) has recently become popular in stochastic hydrogeology, primarily because of its capability to derive multivariate distributions from a training image (TI). However, its application in three-dimensional (3-D) simulations has been constrained by the difficulty of constructing a 3-D TI. The object-based unconditional simulation program TiGenerator may be a useful tool in this regard; yet the applicability of such parametric training images has not been documented in detail. Another issue in MPS is the integration of multiple geophysical data. The proper way to retrieve and incorporate information from high-resolution geophysical data is still under discussion. In this study, MPS simulation was applied to different scenarios regarding the TI and soft conditioning. By comparing their output from simulations of groundwater flow and probabilistic capture..., it is found that soft conditioning is a convenient and efficient way of integrating secondary data such as 3-D airborne electromagnetic data (SkyTEM), but over-conditioning has to be avoided.

  8. Being Explicit about Underlying Values, Assumptions and Views when Designing for Children in the IDC Community

    DEFF Research Database (Denmark)

    Skovbjerg, Helle Marie; Bekker, Tilde; Barendregt, Wolmet

    2016-01-01

    In this full-day workshop we want to discuss how the IDC community can make more explicit the underlying assumptions, values and views regarding children and childhood that shape design decisions. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting......, and the workshop intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design....

  9. Dialogic or Dialectic? The Significance of Ontological Assumptions in Research on Educational Dialogue

    Science.gov (United States)

    Wegerif, Rupert

    2008-01-01

    This article explores the relationship between ontological assumptions and studies of educational dialogue through a focus on Bakhtin's "dialogic". The term dialogic is frequently appropriated to a modernist framework of assumptions, in particular the neo-Vygotskian or sociocultural tradition. However, Vygotsky's theory of education is dialectic,…

  10. Making Sense out of Sex Stereotypes in Advertising: A Feminist Analysis of Assumptions.

    Science.gov (United States)

    Ferrante, Karlene

    Sexism and racism in advertising have been well documented, but feminist research aimed at social change must go beyond existing content analyses to ask how advertising is created. Analysis of the "mirror assumption" (advertising reflects society) and the "gender assumption" (advertising speaks in a male voice to female…

  11. Assumptions about Ecological Scale and Nature Knowing Best Hiding in Environmental Decisions

    Science.gov (United States)

    R. Bruce Hull; David P. Robertson; David Richert; Erin Seekamp; Gregory J. Buhyoff

    2002-01-01

    Assumptions about nature are embedded in people's preferences for environmental policy and management. The people we interviewed justified preservationist policies using four assumptions about nature knowing best: nature is balanced, evolution is progressive, technology is suspect, and the Creation is perfect. They justified interventionist policies using three...

  12. Recognising the Effects of Costing Assumptions in Educational Business Simulation Games

    Science.gov (United States)

    Eckardt, Gordon; Selen, Willem; Wynder, Monte

    2015-01-01

    Business simulations are a powerful way to provide experiential learning that is focussed, controlled, and concentrated. Inherent in any simulation, however, are numerous assumptions that determine feedback, and hence the lessons learnt. In this conceptual paper we describe some common cost assumptions that are implicit in simulation design and…

  13. Food-based dietary guidelines : some assumptions tested for the Netherlands

    NARCIS (Netherlands)

    Löwik, M.R.H.; Hulshof, K.F.A.M.; Brussaard, J.H.

    1999-01-01

    Recently, the concept of food-based dietary guidelines has been introduced by WHO and FAO. For this concept, several assumptions were necessary. The validity and potential consequences of some of these assumptions are discussed in this paper on the basis of the Dutch National Food Consumption

  14. 76 FR 63836 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-10-14

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in...-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The interest... regulation are the same. The interest assumptions are intended to reflect current conditions in the financial...

  15. 77 FR 2015 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-01-13

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... terminating single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974... the financial and annuity markets. Assumptions under the benefit payments regulation are updated...

  16. 78 FR 22192 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-04-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in May... paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...

  17. 78 FR 28490 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-05-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June... paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...

  18. 76 FR 27889 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-05-13

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...

  19. 76 FR 70639 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-11-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The... financial and annuity markets. Assumptions under the benefit payments regulation are updated monthly. This...

  20. 76 FR 50413 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-08-15

    ... Single- Employer Plans to prescribe interest assumptions under the regulation for valuation dates in...-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The interest... regulation are the same. The interest assumptions are intended to reflect current conditions in the financial...

  1. Sensitivity of the OMI ozone profile retrieval (OMO3PR) to a priori assumptions

    NARCIS (Netherlands)

    Mielonen, T.; De Haan, J.F.; Veefkind, J.P.

    2014-01-01

    We have assessed the sensitivity of the operational OMI ozone profile retrieval (OMO3PR) algorithm to a number of a priori assumptions. We studied the effect of stray light correction, surface albedo assumptions and a priori ozone profiles on the retrieved ozone profile. Then, we studied how to

  2. The Role of Policy Assumptions in Validating High-stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    L. Cronbach has made the point that for validity arguments to be convincing to diverse audiences, they need to be based on assumptions that are credible to these audiences. The interpretations and uses of high stakes test scores rely on a number of policy assumptions about what should be taught in schools, and more specifically, about the content…

  3. The Arundel Assumption And Revision Of Some Large-Scale Maps ...

    African Journals Online (AJOL)

    The rather common practice of stating or using the Arundel Assumption without reference to appropriate mapping standards (except mention of its use for graphical plotting) is a major cause of inaccuracies in map revision. This paper describes an investigation to ascertain the applicability of the Assumption to the revision of ...

  4. Implicit Assumptions in Special Education Policy: Promoting Full Inclusion for Students with Learning Disabilities

    Science.gov (United States)

    Kirby, Moira

    2017-01-01

    Introduction: Everyday millions of students in the United States receive special education services. Special education is an institution shaped by societal norms. Inherent in these norms are implicit assumptions regarding disability and the nature of special education services. The two dominant implicit assumptions evident in the American…

  5. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    Science.gov (United States)

    Ryff, Luiz Carlos

    1996-01-01

    A feasible experiment is discussed which allows us to prove Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without introducing additional assumptions related to hidden-variable states. Only assumptions based on direct experimental observation are needed.

  6. 7 CFR 765.402 - Transfer of security and loan assumption on same rates and terms.

    Science.gov (United States)

    2010-01-01

    ... of Security and Assumption of Debt § 765.402 Transfer of security and loan assumption on same rates... comprised solely of family members of the borrower assumes the debt along with the original borrower; (c) An individual with an ownership interest in the borrower entity buys the entire ownership interest of the other...

  7. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Science.gov (United States)

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971

  8. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
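
    A model-based check of this kind can be sketched with standard survival-analysis tooling. The snippet below simulates a deliberate violation (group-dependent Weibull shapes make the hazards cross) and runs a proportional-hazards test; the lifelines package is an assumption of this sketch, not the tool used in the paper, which relied on Monte Carlo simulations.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import proportional_hazard_test

    rng = np.random.default_rng(0)
    n = 500
    x = rng.integers(0, 2, n)                  # binary covariate
    shape = np.where(x == 1, 0.7, 1.3)         # group-dependent Weibull shapes,
    t = 10 * rng.weibull(shape)                # so hazards cross and PH fails
    c = rng.uniform(0, 15, n)                  # independent censoring times
    df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "x": x})

    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    result = proportional_hazard_test(cph, df, time_transform="rank")
    result.print_summary()                     # a small p-value flags a violation
    ```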

  9. Multivariate geostatistical analysis and source identification of heavy metals in the sediment of Poyang Lake in China.

    Science.gov (United States)

    Dai, Lijun; Wang, Lingqing; Li, Lianfang; Liang, Tao; Zhang, Yongyong; Ma, Chuanxin; Xing, Baoshan

    2018-04-15

    Heavy metals in lake sediment have become a great concern because their remobilization frequently occurs under hydrodynamic disturbance in shallow lakes. In this study, heavy metal (Cr, Cu, Cd, Pb, and Zn) concentrations in the surface and core sediments of the largest freshwater lake in China, Poyang Lake, were investigated. Geostatistical prediction maps of heavy metals distribution in the surface sediment were completed as well as further data mining. Based on the prediction maps, the ranges of Cr, Cu, Cd, Pb, and Zn concentrations in the surface sediments of the entire lake were 96.2-175.2, 38.3-127.6, 0.2-2.3, 22.5-77.4, and 72.3-254.4 mg/kg, respectively. A self-organizing map (SOM) was applied to find the inner element relations of heavy metals in the sediment cores. K-means clustering of the self-organizing map was also completed to define the Euclidean distance of heavy metals in the sediment cores. The geoaccumulation index (I_geo) for Poyang Lake indicated a varying degree of heavy metal contamination in the surface sediment, especially for Cu. The heavy metal contamination in the sediment profiles had similar pollution levels to those of the surface sediment, except for Cd. Correlation matrix mapping and principal component analysis (PCA) support the idea that Cr, Pb, and Zn may be derived mainly from both lithogenic sources and human activities, such as atmospheric and river inflow transportation, whereas Cu and Cd may derive mainly from anthropogenic sources, such as mining activities and fertilizer application.
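
    The geoaccumulation index used above is normally Müller's I_geo = log2(C_n / (1.5 B_n)), where C_n is the measured concentration and B_n the geochemical background. The sketch below applies that standard formula with a hypothetical background value, not the study's actual reference data.

    ```python
    import numpy as np

    def igeo(c_measured, c_background):
        """Mueller geoaccumulation index: I_geo = log2(C_n / (1.5 * B_n)).
        I_geo <= 0 is commonly read as unpolluted; higher classes indicate
        increasing contamination."""
        c = np.asarray(c_measured, dtype=float)
        return np.log2(c / (1.5 * c_background))

    # Cu values (mg/kg) spanning the reported surface-sediment range, against a
    # hypothetical background of 22 mg/kg (site-specific values must be used)
    print(igeo([38.3, 127.6], 22.0))   # -> approximately [0.21, 1.95]
    ```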

  10. Comparison of ArcGIS and SAS Geostatistical Analyst to Estimate Population-Weighted Monthly Temperature for US Counties.

    Science.gov (United States)

    Xiaopeng, Qi; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang

    Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of the models was assessed by comparing adjusted R², mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias than cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R² range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both ArcGIS and SAS are reliable for US county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.

  11. Characterizing groundwater quality ranks for drinking purposes in Sylhet district, Bangladesh, using entropy method, spatial autocorrelation index, and geostatistics.

    Science.gov (United States)

    Islam, Abu Reza Md Towfiqul; Ahmed, Nasir; Bodrud-Doza, Md; Chu, Ronghao

    2017-12-01

    Drinking water is susceptible to contamination, which affects human health, so it is essential to investigate the factors affecting groundwater quality and its suitability for drinking uses. In this paper, entropy theory, multivariate statistics, a spatial autocorrelation index, and geostatistics are applied to characterize groundwater quality and its spatial variability in the Sylhet district of Bangladesh. A total of 91 samples were collected from wells (e.g., shallow, intermediate, and deep tube wells at 15-300-m depth) in the study area. The results show that NO₃⁻, followed by SO₄²⁻ and As, are the parameters contributing most to groundwater quality according to entropy theory. Principal component analysis (PCA) and the correlation coefficient also confirm the results of the entropy theory. However, Na⁺ has the highest spatial autocorrelation and the most entropy, thus affecting the groundwater quality. Based on the entropy-weighted water quality index (EWQI) and groundwater quality index (GWQI) classifications, 60.45 and 53.86% of water samples, respectively, are classified as having excellent to good quality, while the remaining samples vary from medium to extremely poor quality for drinking purposes. Furthermore, the EWQI classification provides more reasonable results than the GWQI due to its simplicity, accuracy, and avoidance of artificial weights. A Gaussian semivariogram was chosen as the best-fit model, and the groundwater quality indices have a weak spatial dependence, suggesting that both geogenic and anthropogenic factors play a pivotal role in the spatial heterogeneity of groundwater quality.
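
    Entropy weighting of the kind behind the EWQI is commonly formulated as follows: Shannon entropy is computed per parameter from normalized data, low-entropy (more informative) parameters receive larger weights, and the index aggregates weighted quality ratings. The normalisation and rating scale in this sketch are illustrative assumptions, not the paper's exact recipe.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Shannon-entropy weights for an (n_samples x n_parameters) matrix:
        parameters with lower entropy (more information) get larger weights."""
        X = np.asarray(X, dtype=float)
        Y = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)   # min-max normalise
        P = (Y + 1e-12) / (Y + 1e-12).sum(0)                 # sample shares
        e = -(P * np.log(P)).sum(0) / np.log(len(X))         # entropy in [0, 1]
        return (1 - e) / (1 - e).sum()

    def ewqi(X, standards):
        """EWQI = sum_j w_j * q_j with quality rating q_j = 100 * C_j / S_j."""
        q = 100.0 * np.asarray(X, dtype=float) / np.asarray(standards, dtype=float)
        return q @ entropy_weights(X)

    # usage sketch: index = ewqi(samples_by_parameter, drinking_water_standards)
    ```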

  12. Integration of DAS (distributed acoustic sensing) vertical seismic profile and geostatistically modeled lithology data to characterize an enhanced geothermal system.

    Science.gov (United States)

    Cronin, S. P.; Trainor Guitton, W.; Team, P.; Pare, A.; Jreij, S.; Powers, H.

    2017-12-01

    In March 2016, a 4-week field data acquisition campaign took place at Brady's Natural Lab (BNL), an enhanced geothermal system (EGS) in Fallon, NV. During these 4 weeks, a vibe truck executed 6,633 sweeps, recorded by nodal seismometers, horizontal distributed acoustic sensing (DAS) cable, and 400 meters of vertical DAS cable. DAS provides a lower signal-to-noise ratio than traditional geophones but better spatial resolution. The analysis of the DAS VSP included a Fourier transform and filtering to remove all up-going energy, thus allowing accurate first-arrival picking. We present an example of the Gradual Deformation Method (GDM) using DAS VSP and lithological data to produce a distribution of valid velocity models of BNL. GDM generates continuous perturbations of prior model realizations, seeking the best match to the data (i.e., minimizing the misfit). Prior model realizations honoring the lithological data were created using sequential Gaussian simulation, a commonly used noniterative geostatistical method. Unlike least-squares-based methods of inversion, GDM readily incorporates a priori information, such as a variogram calculated from well-based lithology information. Additionally, by producing a distribution of models, as opposed to one optimal model, GDM allows for uncertainty quantification. This project aims at assessing the integrated technologies' ability to monitor changes in the water table (possibly to one-meter resolution) by exploiting the dependence of seismic wave velocities on the water saturation of the subsurface. This project, which was funded in part by the National Science Foundation, is part of the PoroTomo project, funded by a grant from the U.S. Department of Energy.
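
    The heart of the Gradual Deformation Method is a combination of independent Gaussian realizations that preserves the mean and covariance for every value of the deformation parameter, so a one-dimensional search can reduce the data misfit while staying within the geostatistical model. The snippet below is a toy sketch; the realizations and "observations" are synthetic stand-ins, not PoroTomo data.

    ```python
    import numpy as np

    def gradual_deformation(y1, y2, t):
        """Combine two independent standard-Gaussian realizations; for every t
        the result is Gaussian with the same mean and covariance, so a 1-D
        search over t explores valid models while reducing the data misfit."""
        theta = np.pi * t
        return np.cos(theta) * y1 + np.sin(theta) * y2

    rng = np.random.default_rng(1)
    y1, y2 = rng.standard_normal((2, 400))   # stand-ins for SGS realizations
    obs = 0.6 * y1 + 0.8 * y2                # synthetic "observed" response

    ts = np.linspace(-1.0, 1.0, 401)
    best_t = min(ts, key=lambda t: np.sum((gradual_deformation(y1, y2, t) - obs) ** 2))
    # iterate: keep the best model, draw a fresh y2, and search over t again
    ```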

  13. Verification of the geostatistical inference code INFERENS, Version 1.1, and demonstration using data from Finnsjoen

    International Nuclear Information System (INIS)

    Geier, J.

    1993-06-01

    This report describes preliminary verification and demonstration of the geostatistical inference code, INFERENS Version 1.1. This code performs regularization of packer test conductivities, and iterative generalized least-squares estimation (IGLSE) of nested covariance models and spatial trends for the regularized data. Cross-validation is used to assess the quality of the estimated models in terms of statistics for the kriging errors. The code includes a capability to generate synthetic datasets for a given configuration of packer tests; this capability can be used for verification exercises and numerical experiments to aid in the design of packer testing programs. The report presents the results of a set of verification test cases. The test cases were designed to test the ability of INFERENS 1.1 to estimate the parameters of a variety of covariance models, with or without trends. This was done using synthetic datasets. This report also describes an application of INFERENS 1.1 to the dataset from the Finnsjoen site. The results are roughly similar to those obtained previously by Norman (1992a) using INFERENS 1.0, for the comparable cases. The actual numerical results are different, which may be due to changes in the fitting algorithms, and differences in how the lag pairs are divided into lag classes. The demonstrations confirm the result previously obtained by Norman, that the fitted horizontally isotropic models are less good, in terms of their cross-validation statistics, than the corresponding isotropic models. The use of nested covariance models is demonstrated to give visually improved fits to the sample semivariograms, at both short and long lag distances. However, despite the good match to the semivariograms, the nested models obtained are not better than the simple models, in terms of cross-validation statistics

  14. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, gaps appear in the acquired Landsat 7 imagery, impacting its spatial continuity. Because of the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are an inability to describe continuity features such as meandering streams or roads, or to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of this study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The method was examined across a range of land cover types, including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers, and coastal areas, to demonstrate its capacity to recover gaps accurately. Its prediction accuracy was also compared with that of other gap-filling approaches previously demonstrated to offer satisfactory results, in both homogeneous and heterogeneous areas. The results show that the Direct Sampling method provides sufficiently accurate predictions for a variety of land cover types, and that it exhibits superior performance when filling gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
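
    As a rough illustration of how Direct Sampling operates, the sketch below fills gap pixels by scanning random locations in a training image and copying the value whose 3x3 neighbourhood best matches the informed pixels around each gap. This is a strongly simplified, hypothetical rendition: the operational algorithm uses flexible neighbourhoods, distance thresholds, and multivariate patterns.

    ```python
    import numpy as np

    def direct_sampling_fill(image, mask, train, n_tries=200, thr=0.01, seed=0):
        """Fill masked pixels by conditional resampling of a training image:
        for each gap pixel, scan random training locations and copy the value
        whose 3x3 neighbourhood best matches the informed pixels around it."""
        rng = np.random.default_rng(seed)
        out = image.astype(float).copy()
        known = ~mask                       # pixels whose values are informed
        offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0)]
        H, W = train.shape
        gaps = list(zip(*np.where(mask)))
        rng.shuffle(gaps)                   # random simulation path
        for i, j in gaps:
            neigh = [(di, dj, out[i + di, j + dj]) for di, dj in offsets
                     if 0 <= i + di < out.shape[0] and 0 <= j + dj < out.shape[1]
                     and known[i + di, j + dj]]
            best_val = train[rng.integers(H), rng.integers(W)]
            best_dist = np.inf
            for _ in range(n_tries):
                r, c = rng.integers(1, H - 1), rng.integers(1, W - 1)
                d = (np.mean([(train[r + di, c + dj] - v) ** 2
                              for di, dj, v in neigh]) if neigh else 0.0)
                if d < best_dist:
                    best_val, best_dist = train[r, c], d
                if best_dist <= thr:        # accept the first good-enough match
                    break
            out[i, j] = best_val
            known[i, j] = True              # simulated value conditions later gaps
        return out
    ```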

  15. Spatial analysis and risk mapping of soil-transmitted helminth infections in Brazil, using Bayesian geostatistical models.

    Science.gov (United States)

    Scholte, Ronaldo G C; Schur, Nadine; Bavia, Maria E; Carvalho, Edgar M; Chammartin, Frédérique; Utzinger, Jürg; Vounatsou, Penelope

    2013-11-01

    Soil-transmitted helminths (Ascaris lumbricoides, Trichuris trichiura and hookworm) negatively impact the health and wellbeing of hundreds of millions of people, particularly in tropical and subtropical countries, including Brazil. Reliable maps of the spatial distribution and estimates of the number of infected people are required for the control and eventual elimination of soil-transmitted helminthiasis. We used advanced Bayesian geostatistical modelling, coupled with geographical information systems and remote sensing, to visualize the distribution of the three soil-transmitted helminth species in Brazil. Remotely sensed climatic and environmental data, along with socioeconomic variables from readily available databases, were employed as predictors. Our models provided mean prevalence estimates for A. lumbricoides, T. trichiura and hookworm of 15.6%, 10.1% and 2.5%, respectively. By considering infection risk and population numbers at the unit of the municipality, we estimate that 29.7 million Brazilians are infected with A. lumbricoides, 19.2 million with T. trichiura and 4.7 million with hookworm. Our model-based maps identified important risk factors related to the transmission of soil-transmitted helminths and confirm that environmental variables are closely associated with indices of poverty. Our smoothed risk maps, including uncertainty, highlight areas where soil-transmitted helminthiasis control interventions are most urgently required, namely in the North and along most of the coastal areas of Brazil. We believe that our predictive risk maps are useful for disease control managers for prioritising control interventions and for providing a tool for more efficient surveillance-response mechanisms.

  16. Improved hydrological model parametrization for climate change impact assessment under data scarcity - The potential of field monitoring techniques and geostatistics.

    Science.gov (United States)

    Meyer, Swen; Blaschek, Michael; Duttmann, Rainer; Ludwig, Ralf

    2016-02-01

    According to current climate projections, Mediterranean countries are at high risk of an even more pronounced susceptibility to changes in the hydrological budget and extremes. These changes are expected to have severe direct impacts on the management of water resources, agricultural productivity, and drinking water supply. Current projections of future hydrological change, based on regional climate model results and subsequent hydrological modeling schemes, are very uncertain and poorly validated. The Rio Mannu di San Sperate Basin, located in Sardinia, Italy, is one test site of the CLIMB project. The Water Simulation Model (WaSiM) was set up to model current and future hydrological conditions. The availability of measured meteorological and hydrological data is poor, as is common for many Mediterranean catchments. In this study we conducted a soil sampling campaign in the Rio Mannu catchment. We tested different deterministic and hybrid geostatistical interpolation methods on soil textures and assessed the performance of the applied models. We calculated a new soil texture map based on the best prediction method. The soil model in WaSiM was set up with the improved new soil information. The simulation results were compared to a standard soil parametrization. WaSiM was validated with spatial evapotranspiration rates using the triangle method (Jiang and Islam, 1999). WaSiM was driven with meteorological forcing taken from 4 different ENSEMBLES climate projections for a reference (1971-2000) and a future (2041-2070) time series. The climate change impact was assessed based on differences between the reference and future time series. The simulated results show a reduction of all hydrological quantities in the future spring season. Furthermore, simulation results reveal an earlier onset of dry conditions in the catchment. We show that a solid soil model setup based on short-term field measurements can improve long-term modeling results, which is especially important

  17. Quantifying the exposure of humans and the environment to oil pollution in the Niger Delta using advanced geostatistical techniques.

    Science.gov (United States)

    Obida, Christopher B; Alan Blackburn, G; Duncan Whyatt, J; Semple, Kirk T

    2018-02-01

    The Niger Delta is one of the largest oil producing regions of the world. Large numbers and volumes of oil spills have been reported in this region. What has not been quantified is the putative exposure of humans and/or the environment to this hydrocarbon pollution. In this novel study, advanced geostatistical techniques were applied to an extensive database of oil spill incidents from 2007 to 2015. The aims were to (i) identify and analyse spill hotspots along the oil pipeline network and (ii) estimate the exposure of the human population and the environment to hydrocarbon pollution within the Niger Delta. Over the study period almost 90 million litres of oil were released. Approximately 29% of the human population living in proximity to the pipeline network has been potentially exposed to oil contamination, of which 565,000 people live within high or very high spill intensity sectors. Over 1,000 km² of land has been contaminated by oil pollution, with broadleaved forest, mangroves and agricultural land the most heavily impacted land cover types. Proximity to the coast, roads and cities are the strongest spatial factors contributing to spill occurrence, largely determining the accessibility of sites for pipeline sabotage and oil theft. Overall, the findings demonstrate the high levels of environmental and human exposure to hydrocarbon pollutants in the Niger Delta. These results provide evidence with which to spatially target interventions to reduce future spill incidents and mitigate the impacts of previous spills on human communities and ecosystem health.

  18. A geostatistical analysis of the association between armed conflicts and Plasmodium falciparum malaria in Africa, 1997-2010.

    Science.gov (United States)

    Sedda, Luigi; Qi, Qiuyin; Tatem, Andrew J

    2015-12-16

    The absence of conflict in a country has been cited as a crucial factor affecting the operational feasibility of achieving malaria control and elimination, yet mixed evidence exists on the influence that conflicts have had on malaria transmission. Over the past two decades, Africa has seen substantial numbers of armed conflicts of varying length and scale, creating conditions that can disrupt control efforts and impact malaria transmission. However, very few studies have quantitatively assessed the associations between conflicts and malaria transmission, particularly in a consistent way across multiple countries. In this analysis an explicit geostatistical, autoregressive, mixed model is employed to quantitatively assess the association between conflicts and variations in Plasmodium falciparum parasite prevalence across a 13-year period in sub-Saharan Africa. Analyses of geolocated malaria prevalence surveys against armed conflict data generally showed a geographically wide but short-lived impact of conflict events. The number of countries with decreased P. falciparum parasite prevalence (17) is larger than the number with increased transmission (12), and notably, some of the countries with the highest transmission pre-conflict still showed lower transmission post-conflict. For four countries, there were no significant changes in parasite prevalence. Finally, distance from conflicts, duration of conflicts, violence of conflicts, and number of conflicts were significant components in the model explaining the changes in P. falciparum parasite rate. The results suggest that the maintenance of intervention coverage and provision of healthcare in conflict situations to protect vulnerable populations can maintain gains in even the most difficult of circumstances, and that conflict does not represent a substantial barrier to elimination goals.

  19. Geostatistical mapping of leakance in a regional aquitard, Oak Ridges Moraine area, Ontario, Canada

    Science.gov (United States)

    Desbarats, A. J.; Hinton, M. J.; Logan, C. E.; Sharpe, D. R.

    2001-01-01

    The Newmarket Till forms a regionally extensive aquitard separating two major aquifer systems in the Greater Toronto area, Canada. The till is incised, and sometimes eroded entirely, by a network of sand- and gravel-filled channels forming productive aquifers and, locally, high-conductivity windows between aquifer systems. Leakage through the till may also be substantial in places. This study investigates the spatial variability of aquitard leakance in order to assess the relative importance of recharge processes to the lower aquifers. With a large database derived from water-well records and containing both hard and soft information, the Sequential Indicator Simulation method is used to generate maps of aquitard thickness and window probability. These can be used for targeting channel aquifers and for identifying potential areas of recharge to the lower aquifers. Conductivities are modeled from sparse data assuming that their correlation range is much smaller than the grid spacing. Block-scale leakances are obtained by upscaling nodal values based on simulated conductivity and thickness fields. Under the "aquifer-flow" assumption, upscaling is performed by arithmetic spatial averaging. Histograms and maps of upscaled leakances show that heterogeneities associated with aquitard windows have the largest effect on regional groundwater flow patterns.
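
    Under the aquifer-flow assumption cited above, block-scale leakance is the arithmetic average of nodal leakance values (conductivity divided by thickness). A minimal sketch follows, with illustrative lognormal conductivity and uniform thickness fields standing in for the simulated ones.

    ```python
    import numpy as np

    def block_leakance(K, b, block=10):
        """Upscale nodal leakance K/b (1/time) to coarse blocks by arithmetic
        averaging, the appropriate rule under the 'aquifer-flow' assumption."""
        L = K / b
        ni = (L.shape[0] // block) * block
        nj = (L.shape[1] // block) * block
        L = L[:ni, :nj]
        return L.reshape(ni // block, block, nj // block, block).mean(axis=(1, 3))

    rng = np.random.default_rng(0)
    K = np.exp(rng.normal(-18.0, 1.5, (100, 100)))   # lognormal till conductivity (m/s)
    b = rng.uniform(5.0, 40.0, (100, 100))           # simulated till thickness (m)
    coarse = block_leakance(K, b)                    # 10 x 10 block-scale leakance
    ```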

  20. Evaluating the effect of sampling and spatial correlation on ground-water travel time uncertainty coupling geostatistical, stochastic, and first order, second moment methods

    International Nuclear Information System (INIS)

    Andrews, R.W.; LaVenue, A.M.; McNeish, J.A.

    1989-01-01

    Ground-water travel time predictions at potential high-level waste repositories are subject to a degree of uncertainty due to the scale of averaging incorporated in conceptual models of the ground-water flow regime as well as the lack of data on the spatial variability of the hydrogeologic parameters. The present study describes the effect of limited observations of a spatially correlated permeability field on the predicted ground-water travel time uncertainty. Varying permeability correlation lengths have been used to investigate the importance of this geostatistical property on the tails of the travel time distribution. This study uses both geostatistical and differential analysis techniques. Following the generation of a spatially correlated permeability field which is considered reality, semivariogram analyses are performed upon small random subsets of the generated field to determine the geostatistical properties of the field represented by the observations. Kriging is then employed to generate a kriged permeability field and the corresponding standard deviation of the estimated field conditioned by the limited observations. Using both the real and kriged fields, the ground-water flow regime is simulated and ground-water travel paths and travel times are determined for various starting points. These results are used to define the ground-water travel time uncertainty due to path variability. The variance of the ground-water travel time along particular paths due to the variance of the permeability field estimated using kriging is then calculated using the first order, second moment method. The uncertainties in predicted travel time due to path and parameter uncertainties are then combined into a single distribution
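
    The first order, second moment step linearizes travel time in the uncertain (kriged) log-permeability field, giving Var(T) ≈ gᵀCg with g the sensitivity vector and C the kriging covariance. Below is a schematic sketch with hypothetical sensitivities and a simple correlated covariance, purely to illustrate the propagation step.

    ```python
    import numpy as np

    def fosm_travel_time_variance(sens, cov):
        """First order, second moment propagation: Var(T) ~= g^T C g, where g
        holds the sensitivities of travel time to log-permeability at the
        kriged nodes and C is the kriging (estimation) covariance of those
        nodes."""
        g = np.asarray(sens, dtype=float)
        return float(g @ np.asarray(cov, dtype=float) @ g)

    # hypothetical: three nodes along a path, kriging standard deviations sd
    # and an assumed inter-node correlation of 0.5
    sd = np.array([0.4, 0.6, 0.5])
    C = 0.5 * np.outer(sd, sd) + 0.5 * np.diag(sd ** 2)
    print(fosm_travel_time_variance([-1.2, -0.8, -1.0], C))
    ```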

  1. Spatial distribution of soil organic carbon and total nitrogen based on GIS and geostatistics in a small watershed in a hilly area of northern China.

    Science.gov (United States)

    Peng, Gao; Bing, Wang; Guangpo, Geng; Guangcan, Zhang

    2013-01-01

    The spatial variability of soil organic carbon (SOC) and total nitrogen (STN) levels is important in both global carbon-nitrogen cycle and climate change research. There has been little research on the spatial distribution of SOC and STN at the watershed scale based on geographic information systems (GIS) and geostatistics. Ninety-seven soil samples taken at depths of 0-20 cm were collected during October 2010 and 2011 from the Matiyu small watershed (4.2 km²) of a hilly area in Shandong Province, northern China. The impacts of different land use types, elevation, vegetation coverage and other factors on SOC and STN spatial distributions were examined using GIS and a geostatistical method, regression-kriging. The results show that SOC and STN concentrations in the Matiyu small watershed exhibited moderate variability based on the mean, median, minimum and maximum, and the coefficients of variation (CV). Residual values of SOC and STN had moderate spatial autocorrelations, with nugget/sill ratios of 0.2% and 0.1%, respectively. Regression-kriging maps revealed that both SOC and STN concentrations in the Matiyu watershed decreased from southeast to northwest. This pattern was similar to the watershed DEM trend and significantly correlated with land use type, elevation and aspect. SOC and STN predictions with the regression-kriging method were more accurate than those obtained using ordinary kriging. This research indicates that the geostatistical characteristics of SOC and STN concentrations in the watershed were closely related to both land-use type and spatial topographic structure, and that regression-kriging is suitable for investigating the spatial distributions of SOC and STN in the complex topography of the watershed.
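
    Regression-kriging, as used above, fits a trend on covariates and then kriges the regression residuals. A compact sketch with synthetic stand-ins for the 97 samples follows; scikit-learn and PyKrige are assumptions of this illustration, not the authors' toolchain.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from pykrige.ok import OrdinaryKriging

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0.0, 2000.0, (2, 97))            # sample coordinates (m)
    elev = 150.0 + 0.05 * y + rng.normal(0.0, 10.0, 97) # hypothetical covariate
    soc = 6.0 + 0.02 * elev + rng.normal(0.0, 0.6, 97)  # synthetic SOC (g/kg)

    trend = LinearRegression().fit(elev[:, None], soc)  # 1) regression on covariates
    resid = soc - trend.predict(elev[:, None])
    ok = OrdinaryKriging(x, y, resid, variogram_model="spherical")
    zr, _ = ok.execute("points", x, y)                  # 2) krige the residuals
    soc_rk = trend.predict(elev[:, None]) + zr          # regression-kriging estimate
    ```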

  2. Dose rate estimates and spatial interpolation maps of outdoor gamma dose rate with geostatistical methods; A case study from Artvin, Turkey

    International Nuclear Information System (INIS)

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkin, Halim; Çevik, Uğur

    2015-01-01

    In this study, the performance of geostatistical estimation methods is compared for investigating and mapping natural background radiation using a minimum number of data points. Artvin province, which has quite hilly terrain and a wide variety of soils and is located in the north-east of Turkey, was selected as the study area. The outdoor gamma dose rate (OGDR), an important determinant of environmental radioactivity levels, was measured at 204 stations. The spatial structure of OGDR was determined by anisotropic, isotropic and residual variograms. Ordinary kriging (OK) and universal kriging (UK) interpolation estimates were calculated with the help of model parameters obtained from these variograms. In OK, calculations are based only on the positions of the sampled points, whereas in UK, general soil groups and altitude values directly affecting OGDR are included in the calculations. When the two methods are evaluated based on their performance, the UK model (r = 0.88, p < 0.001) gives considerably better results than the OK model (r = 0.64, p < 0.001). In addition, the maps created at the end of the study illustrate that local changes are better reflected by the UK method than by the OK method, and its error variance is lower. - Highlights: • The spatial dispersion of gamma dose rates in Artvin, which possesses some of the roughest terrain in Turkey, was studied. • The performance of different geostatistical methods (OK and UK) for the dispersion of gamma dose rates was compared. • Estimates were calculated for unsampled points using the geostatistical models, and the results were mapped. • The general radiological structure was determined in much less time and at lower cost compared to experimental methods. • Evaluation of the theoretical methods showed that UK gives more descriptive results than OK.
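
    The OK/UK contrast can be reproduced with generic kriging software. In the sketch below (PyKrige assumed), a linear coordinate drift stands in for the study's soil-group and altitude covariates, which in practice would enter universal kriging as an external drift.

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging
    from pykrige.uk import UniversalKriging

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0.0, 100.0, (2, 204))            # 204 stations, arbitrary grid
    dose = 60.0 + 0.5 * x + rng.normal(0.0, 8.0, 204)   # synthetic OGDR with a trend

    gx = gy = np.linspace(0.0, 100.0, 60)
    ok = OrdinaryKriging(x, y, dose, variogram_model="spherical")
    uk = UniversalKriging(x, y, dose, variogram_model="spherical",
                          drift_terms=["regional_linear"])
    z_ok, var_ok = ok.execute("grid", gx, gy)
    z_uk, var_uk = uk.execute("grid", gx, gy)           # lower error variance where
                                                        # the drift explains the trend
    ```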

  3. Assumptions for well-known statistical techniques: Disturbing explanations for why they are seldom checked

    Directory of Open Access Journals (Sweden)

    Rink Hoekstra

    2012-05-01

    Full Text Available A valid interpretation of most statistical techniques requires that the criteria for one or more assumptions are met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another, more disquieting, explanation would be that violations of assumptions are hardly checked for in the first place. In this article a study is presented on whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. They were asked to analyze the data as they would their own data, for which often used and well-known techniques like the t-procedure, ANOVA and regression were required. It was found that they hardly ever checked for violations of assumptions. Interviews afterwards revealed that mainly lack of knowledge and nonchalance, rather than more rational reasons like being aware of the robustness of a technique or unfamiliarity with an alternative, seem to account for this behavior. These data suggest that merely encouraging people to check for violations of assumptions will not lead them to do so, and that the use of statistics is opportunistic.

  4. The crux of the method: assumptions in ordinary least squares and logistic regression.

    Science.gov (United States)

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
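
    The contrast is easy to demonstrate numerically: fitting ordinary least squares and a logit model to the same binary outcome shows fitted OLS "probabilities" escaping the unit interval, one of the assumption violations discussed above. statsmodels is assumed purely for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.normal(size=300)
    p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))   # true logistic relationship
    ybin = rng.binomial(1, p)                    # binary dependent variable
    X = sm.add_constant(x)

    lpm = sm.OLS(ybin, X).fit()       # linear probability model: errors cannot be
    logit = sm.Logit(ybin, X).fit()   # normal or homoscedastic with a 0/1 outcome

    print((lpm.predict(X) < 0).sum(), (lpm.predict(X) > 1).sum())  # out-of-range fits
    print(logit.predict(X).min(), logit.predict(X).max())          # always in (0, 1)
    ```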

  5. A new scenario framework for climate change research: the concept of shared climate policy assumptions

    NARCIS (Netherlands)

    Kriegler, E.; Edmonds, J.; Hallegatte, S.; Ebi, K.L.; Kram, T.; Riahi, K.; Winkler, J.; van Vuuren, Detlef

    2014-01-01

    The new scenario framework facilitates the coupling of multiple socioeconomic reference pathways with climate model products using the representative concentration pathways. This will allow for improved assessment of climate impacts, adaptation and mitigation. Assumptions about climate policy play a

  6. Washington International Renewable Energy Conference (WIREC) 2008 Pledges. Methodology and Assumptions Summary

    Energy Technology Data Exchange (ETDEWEB)

    Babiuch, Bill [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bilello, Daniel E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cowlin, Shannon C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wise, Alison [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2008-08-01

    This report describes the methodology and assumptions used by NREL in quantifying the potential CO2 reductions resulting from more than 140 governments, international organizations, and private-sector representatives pledging to advance the uptake of renewable energy.

  7. A note on the translation of conceptual data models into description logics: disjointness and covering assumptions

    CSIR Research Space (South Africa)

    Casini, G

    2012-10-01

    Full Text Available In this paper we propose two simple procedures to assist modelers with integrating these assumptions into their models, thereby allowing for a more complete translation into DLs....

  8. Tests of data quality, scaling assumptions, and reliability of the Danish SF-36

    DEFF Research Database (Denmark)

    Bjorner, J B; Damsgaard, M T; Watt, T

    1998-01-01

    We used general population data (n = 4084) to examine data completeness, response consistency, tests of scaling assumptions, and reliability of the Danish SF-36 Health Survey. We compared traditional multitrait scaling analyses to analyses using polychoric correlations and Spearman correlations...

  9. Who needs the assumption of opportunistic behavior? Transaction cost economics does not!

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2000-01-01

    The assumption of opportunistic behavior, familiar from transaction cost economics, has been and remains highly controversial. But opportunistic behavior, albeit undoubtedly an extremely important form of motivation, is not a necessary condition for the contractual problems studied by transaction cost economics.

  10. Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations.

    Science.gov (United States)

    Heckman, James

    1997-01-01

    Considers the use of instrumental variables to estimate effects of treatments on treated and randomly selected groups. Concludes that instrumental variable methods are extremely sensitive to assumptions about how people process information. (SK)
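
    A bare-bones simulation helps make Heckman's point: with a confounded treatment, ordinary least squares is biased while the instrumental-variable (Wald) ratio recovers the effect, provided the behavioral assumptions about who responds to the instrument hold. Everything below is a hypothetical construction, not data from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    z = rng.normal(size=n)                    # instrument (e.g., program offer)
    u = rng.normal(size=n)                    # unobserved confounder
    d = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)   # treatment take-up
    y = 1.0 * d + u + rng.normal(size=n)      # true treatment effect is 1.0

    ols = np.cov(y, d)[0, 1] / d.var(ddof=1)  # biased upward by the confounder u
    iv = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]   # Wald / 2SLS estimate
    print(round(ols, 2), round(iv, 2))
    ```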

  11. Assumptions for Including Organic Food in the Gastronomic Offering of Istrian Agritourism

    Directory of Open Access Journals (Sweden)

    Pavlo Ružić

    2009-01-01

    Full Text Available The authors of this research analyze the assumptions for including organic food in the gastronomic offering of Istrian agritourism. They assume that a gastronomic offering of Istrian agritourism that includes organic food would be more acceptable and competitive on the tourist market. The authors tested their assumptions using surveys conducted in 2007 and 2008 on tourists in Istria, asking whether they prefer organic food, whether organic food matches modern tourist trends, and whether they are willing to pay more for it.

  12. The Causes and Consequences of Differing Pensions Accounting Assumptions in UK Pension Schemes

    OpenAIRE

    Thomas, Gareth

    2006-01-01

    Anecdotal evidence and a number of empirical studies from the US suggest that the providers of corporate pension schemes may manipulate the actuarial assumptions used to estimate the value of the scheme. By manipulating the pension scheme assumptions corporations can reduce their required contribution to the scheme in order to manage their perceived performance. A sample of 92 FTSE 100 companies during the period 2002-2004 was taken and the link between corporate financial constraint and pe...

  13. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    Science.gov (United States)

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater-level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating groundwater levels between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R²) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also varies over time; the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
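
    The cross-validation used to rank interpolators, as above, can be sketched as leave-one-out prediction. PyKrige's ordinary kriging is assumed here in place of the ArcGIS geostatistical module.

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging

    def loo_cv(x, y, z, **kriging_kwargs):
        """Leave-one-out cross-validation for ordinary kriging: returns mean
        absolute error and the coefficient of determination (R^2)."""
        pred = np.empty_like(z, dtype=float)
        for i in range(len(z)):
            keep = np.arange(len(z)) != i
            ok = OrdinaryKriging(x[keep], y[keep], z[keep], **kriging_kwargs)
            zi, _ = ok.execute("points", x[i:i + 1], y[i:i + 1])
            pred[i] = zi[0]
        err = pred - z
        r2 = 1.0 - np.sum(err ** 2) / np.sum((z - z.mean()) ** 2)
        return np.mean(np.abs(err)), r2

    # usage sketch: mae, r2 = loo_cv(x, y, heads, variogram_model="spherical")
    ```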

  14. A comparison of geostatistically-based inverse techniques for use in performance assessment analyses at the WIPP site results from the Test Case No. 1

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Gallegos, D.P.

    1992-01-01

    The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A "Geostatistics Test Problem" is being developed to evaluate a number of inverse techniques that may be used for flow calculations in the WIPP performance assessment (PA). The Test Problem is actually a series of test cases, each being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. This paper describes the results from Test Case No. 1. Of the five techniques compared, those based on the linearized form of the groundwater flow equation exhibited less bias and less spread in their groundwater travel time (GWTT) distribution functions; the semi-analytical method had the least bias. While the results are not sufficient to make generalizations about which techniques may be better suited for the WIPP PA (only one test case has been exercised), analyses of the data from this test case provide some indication of the relative importance of other aspects of the flow modeling (besides inverse method or geostatistical approach) in PA. Ancillary analyses examine the effect of gridding and the effect of boundary conditions on the groundwater travel time estimates.

  15. A geostatistical methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer.

    Science.gov (United States)

    Júnez-Ferreira, H E; Herrera, G S

    2013-04-01

    This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, at each step, selects the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer, with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program, which consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method successfully propagates information in space and time.
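
    The selection step pairs a static Kalman covariance update with a greedy criterion: at each step, measure the candidate point whose update most reduces total variance. Below is a schematic sketch, in which the space-time covariance matrix C and the measurement error variance r are assumed to come from the geostatistical analysis; it is an illustration of the idea, not the authors' implementation.

    ```python
    import numpy as np

    def greedy_monitoring_design(C, r, n_select):
        """Choose n_select space-time points by repeatedly applying the static
        Kalman covariance update and picking the point whose measurement most
        reduces the total (trace) variance. C: prior space-time covariance of
        the candidate points; r: measurement error variance."""
        P = np.array(C, dtype=float)
        chosen = []
        for _ in range(n_select):
            # trace reduction from measuring j is ||P[:, j]||^2 / (P[j, j] + r)
            gains = np.array([P[:, j] @ P[:, j] / (P[j, j] + r)
                              for j in range(P.shape[0])])
            gains[chosen] = -np.inf          # never pick the same point twice
            j = int(np.argmax(gains))
            chosen.append(j)
            P -= np.outer(P[:, j], P[j, :]) / (P[j, j] + r)   # Kalman update
        return chosen

    # usage sketch: order = greedy_monitoring_design(C, r=0.25, n_select=178)
    ```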

  16. Stochastic simulation of time-series models combined with geostatistics to predict water-table scenarios in a Guarani Aquifer System outcrop area, Brazil

    Science.gov (United States)

    Manzione, Rodrigo L.; Wendland, Edson; Tanikawa, Diego H.

    2012-11-01

    Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Monitoring water-level networks can give information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage like the GAS.

  17. Geostatistical parameters estimation using well test data; Determination de parametres geostatistiques par l'utilisation des donnees d'essai de puits

    Energy Technology Data Exchange (ETDEWEB)

    Gautier, Y.; Noetinger, B. [Institut Francais du Petrole (IFP), 92 - Rueil-Malmaison (France)

    2004-07-01

    In this paper we describe a new method to obtain estimates of geostatistical parameters (GPs), such as the correlation length, l_c, and the permeability variance, σ²_ln, from well test data. In practical studies, the GPs are estimated using geological and petrophysical data, but often these data are too scarce to give precise results. The proposed method uses Bayesian inversion theory in conjunction with a fast evaluation of well tests that relies on up-scaling techniques. The method was tested using synthetic well-test data generated on training images, and estimates of the underlying correlation length, l_c, and permeability variance, σ²_ln, were recovered. These estimates give a correct order of magnitude of the actual values but, as noticed in similar methods, the uncertainties are high. Once the GPs are estimated, other well-established techniques can be used to obtain well-test-matched reservoir images consistent with the geostatistical model. We will see that excellent well test data are needed, and that the method could be improved using multiple well test data. (authors)

  18. Improving a spatial rainfall product using multiple-point geostatistical simulations and its effect on a national hydrological model.

    Science.gov (United States)

    Oriani, F.; Stisen, S.

    2016-12-01

    Rainfall amount is one of the most sensitive inputs to distributed hydrological models. Its spatial representation is of primary importance to correctly study the uncertainty of basin recharge and its propagation to the surface and underground circulation. We consider here the 10-km-grid rainfall product provided by the Danish Meteorological Institute as input to the National Water Resources Model of Denmark. Due to a drastic reduction in the rain gauge network in recent years (from approximately 500 stations in the period 1996-2006, to 250 in the period 2007-2014), the grid rainfall product, based on the interpolation of these data, is much less reliable. Consequently, the related hydrological model shows a significantly lower prediction power. To give a better estimation of spatial rainfall at the grid points far from ground measurements, we use the direct sampling technique (DS) [1], belonging to the family of multiple-point geostatistics. DS, already applied to rainfall and spatial variable estimation [2, 3], simulates a grid value by sampling a training data set where a similar data neighborhood occurs. In this way, complex statistical relations are preserved by generating similar spatial patterns to the ones found in the training data set. Using the reliable grid product from the period 1996-2006 as training data set, we first test the technique by simulating part of this data set, then we apply the technique to the grid product of the period 2007-2014, and subsequently analyzing the uncertainty propagation to the hydrological model. We show that DS can improve the reliability of the rainfall product by generating more realistic rainfall patterns, with a significant repercussion on the hydrological model. The reduction of rain gauge networks is a global phenomenon which has huge implications for hydrological model performance and the uncertainty assessment of water resources. Therefore, the presented methodology can potentially be used in many regions where

  19. Groundwater quality assessment of the shallow aquifers west of the Nile Delta (Egypt) using multivariate statistical and geostatistical techniques

    Science.gov (United States)

    Masoud, Alaa A.

    2014-07-01

    Extensive urban, agricultural and industrial expansion on the western fringe of the Nile Delta of Egypt has increased water demand and led to groundwater quality deterioration. Documenting the spatial variation of groundwater quality and its controlling factors is vital to ensure sustainable water management and safe use. A comprehensive dataset of 451 shallow groundwater samples was collected in 2011 and 2012. On-site field measurements of total dissolved solids (TDS), electric conductivity (EC), pH and temperature, as well as lab-based analyses of the major and trace ionic components, were performed. Groundwater types were derived and the suitability for irrigation use was evaluated. Multivariate statistical techniques of factor analysis and K-means clustering were integrated with geostatistical semi-variogram modeling to evaluate the spatial hydrochemical variations and their driving factors, as well as for hydrochemical pattern recognition. Most hydrochemical parameters showed very wide ranges: TDS (201-24,400 mg/l), pH (6.72-8.65), Na+ (28.30-7,774 mg/l), and Cl- (7-12,186 mg/l), suggesting complex hydrochemical processes with multiple sources. TDS violated the limit (1,200 mg/l) of the Egyptian standards for drinking water quality in many localities. Extreme concentrations of Fe2+, Mn2+, Zn2+, Cu2+ and Ni2+ are mostly related to their natural content in the water-bearing sediments and/or to contamination from industrial leakage. Very high nitrate concentrations exceeding the permissible limit (50 mg/l) were greatest toward hydrologic discharge zones and are related to wastewater leakage. Three main water types, NaCl (29%), Na2SO4 (26%), and NaHCO3 (20%), formed 75% of the groundwater and dominated, respectively, in the saline depressions, on the sloping sides of the coastal ridges of the depressions, and in the cultivated/newly reclaimed lands intensely covered by irrigation canals. Water suitability for irrigation use clarified that the

  20. Information for decision making from imperfect national data: tracking major changes in health care use in Kenya using geostatistics

    Directory of Open Access Journals (Sweden)

    Hay Simon I

    2007-12-01

    Full Text Available Abstract Background Most Ministries of Health across Africa invest substantial resources in some form of health management information system (HMIS) to coordinate the routine acquisition and compilation of monthly treatment and attendance records from health facilities nationwide. Despite the expense of these systems, poor data coverage means they are rarely, if ever, used to generate reliable evidence for decision makers. One critical weakness across Africa is the current lack of capacity to effectively monitor patterns of service use through time so that the impacts of changes in policy or service delivery can be evaluated. Here, we present a new approach that, for the first time, allows national changes in health service use during a time of major health policy change to be tracked reliably using imperfect data from a national HMIS. Methods Monthly attendance records were obtained from the Kenyan HMIS for 1,271 government-run and 402 faith-based outpatient facilities nationwide between 1996 and 2004. A space-time geostatistical model was used to compensate for the large proportion of missing records caused by non-reporting health facilities, allowing robust estimation of monthly and annual use of services by outpatients during this period. Results We were able to reconstruct robust time series of mean levels of outpatient utilisation of health facilities at the national level and for all six major provinces in Kenya. These plots revealed reliably for the first time a period of steady nationwide decline in the use of health facilities in Kenya between 1996 and 2002, followed by a dramatic increase from 2003. This pattern was consistent across different causes of attendance and was observed independently in each province. Conclusion The methodological approach presented can compensate for missing records in health information systems to provide robust estimates of national patterns of outpatient service use. This represents the first such use of

  1. Soil Moisture Mapping in an Arid Area Using a Land Unit Area (LUA) Sampling Approach and Geostatistical Interpolation Techniques

    Directory of Open Access Journals (Sweden)

    Saeid Gharechelou

    2016-03-01

    Full Text Available Soil moisture (SM) plays a key role in many environmental processes and has high spatial and temporal variability. Collecting sample SM data through field surveys (e.g., for validation of remote sensing-derived products) can be very expensive and time consuming if a study area is large, and producing accurate SM maps from the sample point data is a difficult task as well. In this study, geospatial processing techniques are used to combine several geo-environmental layers relevant to SM (soil, geology, rainfall, land cover, etc.) into a land unit area (LUA) map, which delineates regions with relatively homogeneous geological/geomorphological, land use/land cover, and climate characteristics. This LUA map is used to guide the collection of sample SM data in the field, and the field data are finally spatially interpolated to create a wall-to-wall map of SM in the study area (Garmsar, Iran). The main goal of this research is to create a SM map in an arid area, using a land unit area (LUA) approach to obtain the most appropriate sample locations for collecting SM field data. Several environmental GIS layers, which have an impact on SM, were combined to generate a LUA map, and then field surveying was done in each class of the LUA map. A SM map was produced based on the LUA, remote sensing data indexes, and spatial interpolation of the field survey sample data. Several interpolation methods (inverse distance weighting, kriging, and co-kriging) were evaluated for generating SM maps from the sample data. The produced maps were compared to each other and validated using ground truth data. The results show that the LUA approach is a reasonable method to create a homogeneous field to introduce a representative sample for field soil surveying. The geostatistical SM map achieved adequate accuracy; however, trend analysis and the distribution of the soil sample point locations within the LUA types should be further investigated to achieve even better results.

  2. Fine-Resolution Precipitation Mapping in a Mountainous Watershed: Geostatistical Downscaling of TRMM Products Based on Environmental Variables

    Directory of Open Access Journals (Sweden)

    Yueyuan Zhang

    2018-01-01

    Full Text Available Accurate precipitation data at a high spatial resolution are essential for hydrological, meteorological, and ecological research at regional scales. This study presented a geostatistical downscaling-calibration procedure to derive high spatial resolution maps of precipitation over a mountainous watershed affected by a monsoon climate. Based on the relationships between precipitation and other environmental variables, such as the Normalized Difference Vegetation Index (NDVI) and a digital elevation model (DEM), a regression model with a residual correction method was applied to downscale the Tropical Rainfall Measuring Mission (TRMM) 3B43 product from coarse resolution (25 km) to fine resolution (1 km). Two methods, geographical difference analysis (GDA) and geographical ratio analysis (GRA), were used to calibrate the downscaled TRMM precipitation data. Monthly 1 km precipitation data were obtained by disaggregating 1 km annual downscaled and calibrated precipitation data using monthly fractions derived from original TRMM data. The downscaled precipitation datasets were validated against ground observations measured by rain gauges. According to the comparison of different regression models and residual interpolation methods, a geographically-weighted regression kriging (GWRK) method was accepted to conduct the downscaling of TRMM data. The downscaled TRMM precipitation data obtained using GWRK described the spatial patterns of precipitation reasonably well at a spatial resolution of 1 km with more detailed information when compared with the original TRMM precipitation. The results of validation indicated that the GRA method provided results with higher accuracy than that of the GDA method. The final annual and monthly downscaled precipitation not only had significant improvement in spatial resolution, but also agreed well with data from the validation rain gauge stations (i.e., R2 = 0.72, RMSE = 161.0 mm, MAE = 127.5 mm, and Bias = 0.050 for annual
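
    A minimal sketch of the downscaling logic under simplifying assumptions: an ordinary least-squares fit stands in for the geographically-weighted regression actually used, the residual interpolation is left as a stub, and all arrays are synthetic placeholders rather than TRMM, NDVI, or DEM data.

    ```python
    # Sketch: regression-based downscaling with a residual correction step.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    n_coarse = 400
    X_coarse = np.column_stack([rng.uniform(0.1, 0.8, n_coarse),    # NDVI
                                rng.uniform(200, 3000, n_coarse)])  # DEM (m)
    p_coarse = (600 + 900 * X_coarse[:, 0] + 0.2 * X_coarse[:, 1]
                + rng.normal(0, 50, n_coarse))                      # coarse precip (mm)

    model = LinearRegression().fit(X_coarse, p_coarse)
    residual = p_coarse - model.predict(X_coarse)   # to be kriged to 1 km

    # Fine-resolution trend from the 1 km covariates; in the full procedure
    # the interpolated (e.g., kriged) residual surface is added back.
    X_fine = np.column_stack([rng.uniform(0.1, 0.8, 10_000),
                              rng.uniform(200, 3000, 10_000)])
    p_fine = model.predict(X_fine)  # + kriged_residual_at_fine
    ```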

  3. Risk mapping of clonorchiasis in the People's Republic of China: A systematic review and Bayesian geostatistical analysis.

    Directory of Open Access Journals (Sweden)

    Ying-Si Lai

    2017-03-01

    Full Text Available Clonorchiasis, one of the most important food-borne trematodiases, affects more than 12 million people in the People's Republic of China (P.R. China). Spatially explicit risk estimates of Clonorchis sinensis infection are needed in order to target control interventions. Georeferenced survey data pertaining to infection prevalence of C. sinensis in P.R. China from 2000 onwards were obtained via a systematic review in PubMed, ISI Web of Science, Chinese National Knowledge Internet, and Wanfang Data from January 1, 2000 until January 10, 2016, with no restriction of language or study design. Additional disease data were provided by the National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention in Shanghai. Environmental and socioeconomic proxies were extracted from remote-sensing and other data sources. Bayesian variable selection was carried out to identify the most important predictors of C. sinensis risk. Geostatistical models were applied to quantify the association between infection risk and the predictors of the disease, and to predict the risk of infection across P.R. China at high spatial resolution (over a grid with a cell size of 5×5 km). We obtained clonorchiasis survey data at 633 unique locations in P.R. China. We observed that the risk of C. sinensis infection increased over time, particularly from 2005 onwards. We estimate that around 14.8 million people (95% Bayesian credible interval: 13.8-15.8 million) in P.R. China were infected with C. sinensis in 2010. Highly endemic areas (≥20% prevalence) were concentrated in southern and northeastern parts of the country. The provinces with the highest risk of infection and the largest number of infected people were Guangdong, Guangxi, and Heilongjiang. Our results provide spatially relevant information for guiding clonorchiasis control interventions in P.R. China. The trend toward higher risk of C. sinensis infection in the recent past urges the Chinese government to
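
    A minimal sketch of the core binomial prevalence model, omitting the spatial random field and the Bayesian variable selection that the study adds on top. It assumes the pymc package (v5 API); the survey counts and covariates are synthetic placeholders, not the Chinese survey data.

    ```python
    # Sketch only: non-spatial Bayesian binomial prevalence regression.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(3)
    n_loc = 120
    X = rng.normal(size=(n_loc, 3))             # standardised covariates
    n_examined = rng.integers(50, 400, n_loc)   # people examined per location
    positives = rng.binomial(n_examined, 0.08)  # C. sinensis positives

    with pm.Model():
        alpha = pm.Normal("alpha", 0.0, 2.0)
        beta = pm.Normal("beta", 0.0, 1.0, shape=3)
        p = pm.Deterministic("p", pm.math.sigmoid(alpha + pm.math.dot(X, beta)))
        pm.Binomial("y", n=n_examined, p=p, observed=positives)
        idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)
    ```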

  4. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    Directory of Open Access Journals (Sweden)

    Hazim Adnan Hashim

    2016-09-01

    Full Text Available The present study addresses the effects of traumatic events, such as the September 11 attacks, on victims’ fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth. Thus, they ground people’s sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were deeply traumatic for Americans because they fundamentally changed people’s understanding of many aspects of life. The attacks led many individuals to build new kinds of beliefs and assumptions about themselves and the world. Many writers have written about the human ordeals that followed this incident. Don DeLillo’s Falling Man reflects the traumatic repercussions of this disaster on Americans’ fundamental assumptions. The objective of this study is to examine the novel from the perspective of the trauma that afflicted the victims’ fundamental understandings of the world and the self. Individuals’ fundamental understandings can be changed or modified by exposure to certain types of events, such as war, terrorism, political violence, or even a sense of alienation. The Assumptive World theory of Ronnie Janoff-Bulman is used as a framework to study the traumatic experience of the characters in Falling Man. The significance of the study lies in providing a new perception to the field of trauma that can help trauma victims adopt alternative assumptions or reshape their previous ones to heal from traumatic effects.

  5. Post-traumatic stress and world assumptions: the effects of religious coping.

    Science.gov (United States)

    Zukerman, Gil; Korn, Liat

    2014-12-01

    Religiosity has been shown to moderate the negative effects of traumatic event experiences. The current study was designed to examine the relationships between post-traumatic stress (PTS) following traumatic event exposure; world assumptions, defined as basic cognitive schemas regarding the world and the self; and religious coping, conceptualized as drawing on religious beliefs and practices for understanding and dealing with life stressors. This study examined 777 Israeli undergraduate students who completed several questionnaires which sampled individual world assumptions and religious coping, in addition to measuring PTS as manifested by the PTSD checklist. Results indicate that positive religious coping was significantly associated with more positive world assumptions, while negative religious coping was significantly associated with more negative world assumptions. Additionally, negative world assumptions were significantly associated with more avoidance symptoms, while reporting higher rates of traumatic event exposure was significantly associated with more hyper-arousal. These findings suggest that religious-related cognitive schemas directly affect world assumptions by creating protective shields that may prevent the negative effects of confronting an extreme negative experience.

  6. Assessing framing assumptions in quantitative health impact assessments: a housing intervention example.

    Science.gov (United States)

    Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M

    2013-09-01

    Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage. Copyright © 2013 Elsevier Ltd. All rights reserved.
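
    A minimal sketch of FCM inference under common simplifying assumptions (logistic squashing, an update that retains each concept's own activation): activations are propagated through a signed weight matrix until they stabilise. The concept names echo the case-study, but the weights are illustrative placeholders, not the values elicited from the literature.

    ```python
    # Sketch: fuzzy cognitive map (FCM) inference by iterated propagation.
    import numpy as np

    concepts = ["air-tightness", "ventilation", "indoor air quality",
                "mould/humidity", "health"]
    W = np.array([  # W[i, j]: influence of concept i on concept j
        [0.0, -0.6,  0.0,  0.5,  0.0],
        [0.0,  0.0,  0.7, -0.4,  0.0],
        [0.0,  0.0,  0.0,  0.0,  0.6],
        [0.0,  0.0, -0.5,  0.0, -0.5],
        [0.0,  0.0,  0.0,  0.0,  0.0],
    ])

    def fcm_run(W, a0, n_iter=50):
        a = a0.copy()
        for _ in range(n_iter):
            a = 1.0 / (1.0 + np.exp(-(a @ W + a)))  # logistic squashing
        return a

    a0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # activate "air-tightness"
    print(dict(zip(concepts, fcm_run(W, a0).round(2))))
    ```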

  7. Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions

    Science.gov (United States)

    Wang, XiaoCong; Liu, YiMin; Bao, Qing

    2016-01-01

    Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provided cloud geometry as well as cloud micro- and macro-properties. Large-scale forcing data to drive the CRM are from the TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns, during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one recently addressed by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed by a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under maximum (MO) and maximum-random (MRO) overlap assumptions, whereas remarkably overestimated under the random overlap (RO) assumption in comparison with that using CRM inherent cloud geometry. These biases can reach as high as 100 W m-2 for SWCF and 60 W m-2 for LWCF. By its very nature, the general overlap (GenO) assumption exhibits an encouraging performance on both SWCF and LWCF simulations, with the biases reduced almost threefold compared with traditional overlap assumptions. The superiority of the GenO assumption is also manifested in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under traditional overlap assumptions. The study also pointed out the deficiency of assuming a constant Lcf in GenO. Further examinations indicate that the CRM diagnostic Lcf varies among different cloud types and tends to be stratified in the vertical. The new parameterization that takes into account variation of Lcf in the vertical well reproduces such a relationship and
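
    A minimal sketch of the GenO idea for a single pair of layers, following the linear blend of maximum and random overlap described above; the layer fractions, layer separation, and Lcf values are illustrative.

    ```python
    # Sketch: combined cloud cover for two layers under general overlap,
    # blending maximum and random overlap with weight exp(-dz / Lcf).
    import numpy as np

    def combined_cover(c1, c2, dz, lcf):
        alpha = np.exp(-dz / lcf)                  # overlap decorrelation weight
        c_max = np.maximum(c1, c2)                 # maximum overlap
        c_ran = c1 + c2 - c1 * c2                  # random overlap
        return alpha * c_max + (1.0 - alpha) * c_ran

    print(combined_cover(0.4, 0.3, dz=1.0, lcf=2.0))   # GenO, Lcf = 2 km
    print(combined_cover(0.4, 0.3, dz=1.0, lcf=1e9))   # limit: maximum overlap
    print(combined_cover(0.4, 0.3, dz=1.0, lcf=1e-9))  # limit: random overlap
    ```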

  8. Detecting and accounting for violations of the constancy assumption in non-inferiority clinical trials.

    Science.gov (United States)

    Koopmeiners, Joseph S; Hobbs, Brian P

    2018-05-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
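
    A minimal simulation sketch of the problem described above: when the fixed margin is derived from a historical control effect that no longer holds, a truly inferior treatment is declared non-inferior far above the nominal 2.5% rate. All trial parameters are illustrative, not taken from the raltegravir example.

    ```python
    # Sketch: false non-inferiority conclusions under a constancy violation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n, n_sim = 300, 5000
    margin = 0.30          # fixed margin derived from historical trials
    true_diff = -0.15      # experimental arm is truly worse than control

    declared = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, 1.0, n)                # active comparator
        y = rng.normal(true_diff, 1.0, n)          # experimental treatment
        se = np.sqrt(x.var(ddof=1) / n + y.var(ddof=1) / n)
        z = (y.mean() - x.mean() + margin) / se    # test H0: diff <= -margin
        declared += z > stats.norm.ppf(0.975)
    print(f"non-inferiority declared in {declared / n_sim:.1%} of trials")
    ```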

  9. How geostatistics can help you find lead and galvanized water service lines: The case of Flint, MI.

    Science.gov (United States)

    Goovaerts, Pierre

    2017-12-01

    In the aftermath of the Flint drinking water crisis, most US cities have been scrambling to locate all lead service lines (LSLs) in their water supply systems. This information, which is most often inaccurate or lacking, is critical to assess compliance with the Lead and Copper Rule and to plan the replacement of lead and galvanized service lines (GSLs) as currently under way in Flint. This paper presents the first geospatial approach to predict the likelihood that a home has an LSL or GSL based on neighboring field data (i.e., house inspection) and secondary information (i.e., construction year and city records). The methodology is applied to the City of Flint, where 3254 homes have been inspected by the Michigan Department of Environmental Quality to identify service line material. GSLs and LSLs were mostly observed in houses built prior to 1934 and during World War II, respectively. City records led to the over-identification of LSLs, likely because old records were not updated as these lines were being replaced. Indicator semivariograms indicated that both types of service line are spatially clustered, with a range of 1.4 km for LSLs and 2.8 km for GSLs. This spatial autocorrelation was integrated with secondary data using residual indicator kriging to predict the probability of finding each type of material at the tax parcel level. Cross-validation analysis using Receiver Operating Characteristic (ROC) curves demonstrated the greater accuracy of the kriging model relative to the current approach targeting houses built in the forties, in particular as more field data become available. Anticipated rates of false positives and percentages of detection were computed for different sampling strategies. This approach is flexible enough to accommodate additional sources of information, such as local code and regulatory changes, historical permit records, maintenance and operation records, or customer self-reporting. Copyright © 2017 Elsevier B.V. All rights reserved.
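
    A minimal sketch of the validation step, assuming scikit-learn: kriged probabilities of a lead line are compared against binary inspection outcomes via ROC analysis. The probabilities and labels are synthetic placeholders, not the Flint inspection data.

    ```python
    # Sketch: ROC validation of predicted lead-service-line probabilities.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(5)
    has_lsl = rng.random(500) < 0.3                   # inspected outcome
    p_kriged = np.clip(0.3 + 0.4 * has_lsl + rng.normal(0, 0.2, 500), 0, 1)

    auc = roc_auc_score(has_lsl, p_kriged)
    fpr, tpr, thresholds = roc_curve(has_lsl, p_kriged)
    print(f"AUC = {auc:.2f}")                         # 0.5 = random, 1.0 = perfect
    ```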

  10. Geostatistics for Large Datasets

    KAUST Repository

    Sun, Ying

    2011-10-31


  11. World assumptions, posttraumatic stress and quality of life after a natural disaster: A longitudinal study

    Science.gov (United States)

    2012-01-01

    Background Changes in world assumptions are a fundamental concept within theories that explain posttraumatic stress disorder. The objective of the present study was to gain a greater understanding of how changes in world assumptions are related to quality of life and posttraumatic stress symptoms after a natural disaster. Methods A longitudinal study of 574 Norwegian adults who survived the Southeast Asian tsunami in 2004 was undertaken. Multilevel analyses were used to identify which factors at six months post-tsunami predicted quality of life and posttraumatic stress symptoms two years post-tsunami. Results Good quality of life and posttraumatic stress symptoms were negatively related. However, major differences in the predictors of these outcomes were found. Females reported significantly higher quality of life and more posttraumatic stress than men. The association between level of exposure to the tsunami and quality of life seemed to be mediated by posttraumatic stress. Negative perceived changes in the assumption “the world is just” were related to adverse outcome in both quality of life and posttraumatic stress. Positive perceived changes in the assumptions “life is meaningful” and “feeling that I am a valuable human” were associated with higher levels of quality of life but not with posttraumatic stress. Conclusions Quality of life and posttraumatic stress symptoms demonstrate differences in their etiology. World assumptions may be less specifically related to posttraumatic stress than has been postulated in some cognitive theories. PMID:22742447

  12. Why 1D electrical resistivity techniques can result in inaccurate siting of boreholes in hard rock aquifers and why electrical resistivity tomography must be preferred: the example of Benin, West Africa

    Science.gov (United States)

    Alle, Iboukoun Christian; Descloitres, Marc; Vouillamoz, Jean-Michel; Yalo, Nicaise; Lawson, Fabrice Messan Amen; Adihou, Akonfa Consolas

    2018-03-01

    Hard rock aquifers are of particular importance for supplying people with drinking water in Africa and in the world. Despite the common use of one-dimensional (1D) electrical resistivity techniques to locate drilling sites, the failure rate of boreholes is usually high. For instance, about 40% of boreholes drilled in hard rock aquifers in Benin are unsuccessful. This study investigates why the current use of 1D techniques (e.g., electrical profiling and electrical sounding) can result in inaccurate siting of boreholes, and assesses the benefits and limitations of two-dimensional (2D) Electrical Resistivity Tomography (ERT). Geophysical numerical modeling and comprehensive 1D and 2D resistivity surveys were carried out in hard rock aquifers in Benin. The experiments carried out at 7 sites located in different hard rock groups confirmed the results of the numerical modeling: the current use of 1D techniques can frequently lead to inaccurate siting, whereas ERT better reveals hydrogeological targets such as a thick weathered zone (e.g., a stratiform fractured layer or preferential weathering associated with a subvertical fractured zone). Moreover, a cost analysis demonstrates that the use of ERT can save money at the scale of a drilling programme if ERT improves the success rate by only 5% compared with the success rate obtained with 1D techniques. Finally, this study demonstrates, using the example of Benin, that electrical resistivity profiling and sounding for siting boreholes in weathered hard rocks of western Africa should be abandoned in favour of the more efficient ERT technique.
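
    A minimal sketch of the break-even logic behind the cost analysis: the expected cost per successful borehole falls when a dearer survey raises the success rate enough. The costs and success rates below are illustrative placeholders, not figures from the Benin programme.

    ```python
    # Sketch: expected cost per successful borehole for two survey strategies.
    def cost_per_success(survey_cost, drilling_cost, success_rate):
        """Expected total cost divided by the expected number of successes."""
        return (survey_cost + drilling_cost) / success_rate

    c_1d = cost_per_success(survey_cost=500, drilling_cost=15_000, success_rate=0.60)
    c_ert = cost_per_success(survey_cost=1_500, drilling_cost=15_000, success_rate=0.65)
    print(f"1D siting:  {c_1d:,.0f} per successful borehole")
    print(f"ERT siting: {c_ert:,.0f} per successful borehole")  # cheaper overall
    ```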

  13. Prediction of spatial soil property information from ancillary sensor data using ordinary linear regression: Model derivations, residual assumptions and model validation tests

    Science.gov (United States)

    Geospatial measurements of ancillary sensor data, such as bulk soil electrical conductivity or remotely sensed imagery data, are commonly used to characterize spatial variation in soil or crop properties. Geostatistical techniques like kriging with external drift or regression kriging are often use...

  14. Some Finite Sample Properties and Assumptions of Methods for Determining Treatment Effects

    DEFF Research Database (Denmark)

    Petrovski, Erik

    2016-01-01

    There is a growing interest in determining the exact effects of policies, programs, and other social interventions within the social sciences. In order to do so, researchers have a variety of econometric techniques at their disposal. However, the choice between them may be obscure. In this paper, I will compare assumptions and properties of select methods for determining treatment effects with Monte Carlo simulation. The comparison will highlight the pros and cons of using one method over another and the assumptions that researchers need to make for the method they choose. To limit the scope of this paper, three popular methods for determining treatment effects were chosen: ordinary least squares regression, propensity score matching, and inverse probability weighting. The assumptions and properties tested across these methods are: unconfoundedness, differences in average treatment effects...
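
    A minimal sketch of one of the three methods, inverse probability weighting, assuming scikit-learn for the propensity model; the confounders, treatment assignment, and outcome are synthetic placeholders.

    ```python
    # Sketch: inverse probability weighting (IPW) estimate of a treatment effect.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)
    n = 2000
    x = rng.normal(size=(n, 2))                       # observed confounders
    p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
    t = rng.random(n) < p_treat                       # confounded assignment
    y = 2.0 * t + x[:, 0] + rng.normal(size=n)        # true effect = 2.0

    ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    w = np.where(t, 1 / ps, 1 / (1 - ps))             # inverse probability weights
    ate = np.average(y[t], weights=w[t]) - np.average(y[~t], weights=w[~t])
    print(f"IPW estimate of the average treatment effect: {ate:.2f}")
    ```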

  15. Potential of deterministic and geostatistical rainfall interpolation under high rainfall variability and dry spells: case of Kenya's Central Highlands

    Science.gov (United States)

    Kisaka, M. Oscar; Mucheru-Muna, M.; Ngetich, F. K.; Mugwe, J.; Mugendi, D.; Mairura, F.; Shisanya, C.; Makokha, G. L.

    2016-04-01

    digital elevation model in ArcGIS environment. Validation of the selected interpolation methods was based on the goodness of fit between gauged (observed) and generated rainfall, derived from residual error statistics, the coefficient of determination (R2), mean absolute error (MAE) and root mean square error (RMSE) statistics. Analyses showed a 90% chance of below-cropping-threshold rainfall (500 mm) exceeding 258.1 mm during the short rains in Embu for a 1-year return period. Rainfall variability was found to be high in seasonal amounts (e.g., coefficient of variation (CV) = 0.56, 0.47, 0.59) and in the number of rainy days (e.g., CV = 0.88, 0.53) in Machang'a and Kiritiri, respectively. Monthly rainfall variability was found to be equally high during April and November (e.g., CV = 0.48, 0.49 and 0.76), with high probabilities (0.67) of droughts exceeding 15 days in Machang'a. Dry spell probabilities within growing months were high, e.g., 81% and 60% in Machang'a and Embu, respectively. The kriging method emerged as the most appropriate geostatistical interpolation technique for generating spatial rainfall maps of the study region.
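
    A minimal sketch of the goodness-of-fit statistics used for validation (R2, MAE, and RMSE between gauged and generated rainfall); the two arrays are synthetic placeholders, not the Kenyan gauge data.

    ```python
    # Sketch: validation statistics for gauged vs. interpolated rainfall.
    import numpy as np

    gauged = np.array([512.0, 430.5, 610.2, 388.9, 455.0])
    generated = np.array([498.3, 441.0, 575.8, 402.1, 470.6])

    residual = gauged - generated
    mae = np.abs(residual).mean()
    rmse = np.sqrt((residual ** 2).mean())
    r2 = 1 - (residual ** 2).sum() / ((gauged - gauged.mean()) ** 2).sum()
    print(f"R2 = {r2:.2f}, MAE = {mae:.1f} mm, RMSE = {rmse:.1f} mm")
    ```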

  16. Modelagem geoestatística da infecção por Ascaris lumbricoides [Geostatistical modeling of Ascaris lumbricoides infection]

    Directory of Open Access Journals (Sweden)

    Bruno de Paula Menezes Drumond Fortes

    2004-06-01

    Full Text Available The study aims to model the spatial distribution of ascariasis through the use of geoprocessing and geostatistical analysis, producing risk maps. The database was taken from the PAISQUA project, which included a coproparasitologic and domiciliary survey conducted in 19 selected census tracts of Rio de Janeiro State, Brazil; a group of 1,550 children aged 1 to 9 years was randomly sampled and georeferenced at the centroids of their respective domiciles. Risk maps of Ascaris lumbricoides were generated by indicator kriging. Based on cross-validation, estimated values were compared with observed ones using a ROC curve. An isotropic spherical semivariogram model with a range of 30 m and a nugget effect of 50% was employed in ordinary indicator kriging to construct a map of the probability of infection with A. lumbricoides. Overall accuracy, measured as the area under the ROC curve, was significant. Ordinary indicator kriging allowed risk maps to be modeled from a sample of an indicator variable, and the spatial statistical techniques proved adequate for predicting the occurrence of the phenomenon without being restricted to the political-administrative boundaries of the region.
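
    A minimal sketch of the fitted variogram: an isotropic spherical model with a 30 m range and a 50% relative nugget, as reported above; the unit sill is an assumed normalisation.

    ```python
    # Sketch: spherical semivariogram with nugget, range 30 m, unit sill.
    import numpy as np

    def spherical_semivariogram(h, nugget=0.5, sill=1.0, range_=30.0):
        """Semivariance at lag distance h (same units as range_)."""
        h = np.asarray(h, dtype=float)
        s = nugget + (sill - nugget) * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)
        return np.where(h < range_, s, sill)    # flat at the sill beyond the range

    lags = np.array([0.1, 5, 10, 20, 30, 50])
    print(spherical_semivariogram(lags).round(3))
    ```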

  17. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    Science.gov (United States)

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that the rates of decline underlying economic depreciation may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  18. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    Science.gov (United States)

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
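
    A minimal sketch of one of the five designs, Difference-in-Differences, assuming statsmodels and pandas: the treatment-by-period interaction identifies the effect under the parallel-trends assumption. The data are synthetic placeholders.

    ```python
    # Sketch: Difference-in-Differences via a two-way interaction regression.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 1000
    df = pd.DataFrame({"treated": rng.integers(0, 2, n),
                       "post": rng.integers(0, 2, n)})
    df["y"] = (1.0 + 0.5 * df.treated + 0.3 * df.post
               + 2.0 * df.treated * df.post        # true DiD effect = 2.0
               + rng.normal(size=n))

    fit = smf.ols("y ~ treated * post", data=df).fit()
    print(fit.params["treated:post"])              # estimated effect
    ```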

  19. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    Science.gov (United States)

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

    The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on the key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data regarding renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics, and system operation assumptions are described in this article.

  20. Clarification of assumptions in the relationship between the Bayes Decision Rule and the whitened cosine similarity measure.

    Science.gov (United States)

    Liu, Chengjun

    2008-06-01

    This paper first clarifies Assumption 3 (which misses a constant) and Assumption 4 (where the whitened pattern vectors refer to the whitened means) in the paper "The Bayes Decision Rule Induced Similarity Measures" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1086-1090, 2007), and then provides examples to show that the assumptions after the clarification are consistent.
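
    A minimal numpy sketch of a whitened cosine similarity: vectors are whitened with an inverse Cholesky factor of the covariance (equivalent, up to rotation, to whitening with the inverse square root of the covariance) before taking the cosine. The covariance and vectors are illustrative placeholders.

    ```python
    # Sketch: cosine similarity between whitened pattern vector and class mean.
    import numpy as np

    def whitened_cosine(x, mu, cov):
        w = np.linalg.inv(np.linalg.cholesky(cov))   # whitening transform
        xw, muw = w @ x, w @ mu                      # whitened vectors
        return xw @ muw / (np.linalg.norm(xw) * np.linalg.norm(muw))

    cov = np.array([[2.0, 0.5], [0.5, 1.0]])
    print(whitened_cosine(np.array([1.0, 2.0]), np.array([0.8, 1.5]), cov))
    ```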

  1. A novel geotechnical/geostatistical approach for exploration and production of natural gas from multiple geologic strata, Phase 1. Volume 2, Geology and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Overbey, W.K. Jr.; Reeves, T.K.; Salamy, S.P.; Locke, C.D.; Johnson, H.R.; Brunk, R.; Hawkins, L. [BDM Engineering Services Co., Morgantown, WV (United States)

    1991-05-01

    This research program has been designed to develop and verify a unique geostatistical approach for finding natural gas resources. The project has been conducted by Beckley College, Inc., and BDM Engineering Services Company (BDMESC) under contract to the US Department of Energy (DOE), Morgantown Energy Technology Center (METC). This section, Volume II, contains a detailed discussion of the methodology used and the geological and production information collected and analyzed for this study. A companion document, Volume I, provides an overview of the program, technique and results of the study. In combination, Volumes I and II cover the completion of the research undertaken under Phase I of this DOE project, which included the identification of five high-potential sites for natural gas production on the Eccles Quadrangle, Raleigh County, West Virginia. Each of these sites was selected for its excellent potential for gas production from both relatively shallow coalbeds and the deeper, conventional reservoir formations.

  2. Identifying and closing gaps in environmental monitoring by means of metadata, ecological regionalization and geostatistics using the UNESCO biosphere reserve Rhoen (Germany) as an example.

    Science.gov (United States)

    Schröder, Winfried; Pesch, Roland; Schmidt, Gunther

    2006-03-01

    In Germany, environmental monitoring is intended to provide a holistic view of the environmental condition. To this end the monitoring operated by the federal states must use harmonized or standardized methods. In addition, the monitoring sites should cover the ecoregions without any geographical gaps, the monitoring design should have no gaps in terms of ecologically relevant measurement parameters, and the sample data should be spatially without any gaps. This article outlines the extent to which the Rhoen Biosphere Reserve, occupying parts of the German federal states of Bavaria, Hesse and Thuringia, fulfills the listed requirements. The investigation considered collection, data banking and analysis of monitoring data and metadata, ecological regionalization and geostatistics. Metadata on the monitoring networks were collected by questionnaires and provided a complete inventory and description of the monitoring activities in the reserve and its surroundings. The analysis of these metadata reveals that most of the monitoring methods are harmonized across the boundaries of the three federal states the Rhoen is part of. The monitoring networks that measure precipitation, surface water levels, and groundwater quality are particularly overrepresented in the central ecoregions of the biosphere reserve. Soil monitoring sites are more equally distributed within the ecoregions of the Rhoen. The number of sites for the monitoring of air pollutants is not sufficient to draw spatially valid conclusions. To fill these spatial gaps, additional data on the annual average values of the concentrations of air pollutants from monitoring sites outside of the biosphere reserve were therefore subjected to geostatistical analysis and estimation. This yields valid information on the spatial patterns and temporal trends of air quality. The approach illustrated is applicable to similar cases, as, for example, the harmonization of international monitoring networks.

  3. A comparison of geostatistically-based inverse techniques for use in performance assessment analysis at the WIPP site results from test case No. 1

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Gallegos, D.P.

    1993-01-01

    The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A 'Geostatistics Test Problem' is being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. The Test Problem analysis approach is to use a comparison of the probabilistic groundwater travel time (GWTT) estimates produced by each technique as the basis for the evaluation. Participants are given observations of head and transmissivity and are asked to develop stochastic models of groundwater flow for the synthetic system. Cumulative distribution functions (CDFs) of groundwater travel time (computed via particle tracking) are constructed using the head and transmissivity data generated through the application of each technique; one semi-analytical method generates the CDFs of groundwater travel time directly. This paper describes the results from Test Case No. 1. Of the five techniques compared, those based on the linearized form of the groundwater flow equation exhibited less bias and less spread in their GWTT distribution functions; the semi-analytical method had the least bias. While the results are not sufficient to make generalizations about which techniques may be better suited for the WIPP PA (only one test case has been exercised), analysis of the data from this test case provides some indication of the relative importance of other aspects of the flow modeling (besides the inverse method or geostatistical approach) in PA. These ancillary analyses examine the effect of gridding and the effect of boundary conditions on the groundwater travel time estimates
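
    A minimal sketch of the comparison metric: an empirical CDF of groundwater travel times built from particle-tracking realisations; the travel-time samples are synthetic placeholders, not WIPP results.

    ```python
    # Sketch: empirical CDF of Monte Carlo groundwater travel times (GWTT).
    import numpy as np

    gwtt = np.sort(np.random.default_rng(8).lognormal(mean=9.0, sigma=0.5, size=1000))
    cdf = np.arange(1, gwtt.size + 1) / gwtt.size     # P(T <= t) at each sample

    for q in (0.05, 0.50, 0.95):
        print(f"{q:.0%} quantile: {np.interp(q, cdf, gwtt):,.0f} years")
    ```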

  4. Applications of stochastic models and geostatistical analyses to study sources and spatial patterns of soil heavy metals in a metalliferous industrial district of China

    Energy Technology Data Exchange (ETDEWEB)

    Zhong, Buqing; Liang, Tao, E-mail: liangt@igsnrr.ac.cn; Wang, Lingqing; Li, Kexin

    2014-08-15

    An extensive soil survey was conducted to study pollution sources and delineate contamination of heavy metals in one of the metalliferous industrial bases in the karst areas of southwest China. A total of 597 topsoil samples were collected and the concentrations of five heavy metals, namely Cd, As (metalloid), Pb, Hg and Cr, were analyzed. Stochastic models including a conditional inference tree (CIT) and a finite mixture distribution model (FMDM) were applied to identify the sources and partition the contributions from natural and anthropogenic sources for heavy metals in topsoils of the study area. Regression trees for Cd, As, Pb and Hg were shown to depend mostly on indicators of anthropogenic activities such as industrial type and distance from urban area, while the regression tree for Cr was found to be mainly influenced by geogenic characteristics. The FMDM analysis showed that the geometric means of modeled background values for Cd, As, Pb, Hg and Cr were close to their background values previously reported in the study area, while contamination by Cd and Hg was widespread in the study area, imposing potentially detrimental effects on organisms through the food chain. Finally, the probabilities of single and multiple heavy metals exceeding the threshold values derived from the FMDM were estimated using indicator kriging (IK) and multivariate indicator kriging (MVIK). The high probabilities of exceeding the thresholds were associated with metalliferous production and atmospheric deposition of heavy metals transported from the urban and industrial areas. Geostatistics coupled with stochastic models provides an effective way to delineate multiple heavy metal pollution and facilitate improved environmental management. - Highlights: • Conditional inference tree can identify variables controlling metal distribution. • Finite mixture distribution model can partition natural and anthropogenic sources. • Geostatistics with stochastic models

  5. Estimating the number of cases of podoconiosis in Ethiopia using geostatistical methods [version 2; referees: 3 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Kebede Deribe

    2017-12-01

    Full Text Available Background: In 2011, the World Health Organization recognized podoconiosis as one of the neglected tropical diseases. Nonetheless, the magnitude of podoconiosis and the geographical distribution of the disease are poorly understood. Based on a nationwide mapping survey and geostatistical modelling, we predict the prevalence of podoconiosis and estimate the number of cases across Ethiopia. Methods: We used nationwide data collected in Ethiopia between 2008 and 2013. Data were available for 141,238 individuals from 1,442 communities in 775 districts from all nine regional states and two city administrations. We developed a geostatistical model of podoconiosis prevalence among adults (individuals aged 15 years or above), combining environmental factors. The number of people with podoconiosis was then estimated using a gridded map of adult population density for 2015. Results: Podoconiosis is endemic in 345 districts in Ethiopia: 144 in Oromia, 128 in Southern Nations, Nationalities and People’s [SNNP], 64 in Amhara, 4 in Benishangul Gumuz, 4 in Tigray and 1 in Somali Regional State. Nationally, our estimates suggest that 1,537,963 adults (95% confidence interval: 290,923-4,577,031) were living with podoconiosis in 2015. Three regions (SNNP, Oromia and Amhara) contributed 99% of the cases. The highest proportion of individuals with podoconiosis resided in SNNP (39%), while 32% and 29% of people with podoconiosis resided in Oromia and Amhara Regional States, respectively. Tigray and Benishangul Gumuz Regional States bore lower burdens, and in the remaining regions, podoconiosis was almost non-existent. Conclusions: The estimates of podoconiosis cases presented here, based upon the combination of currently available epidemiological data and a robust modelling approach, clearly show that podoconiosis is highly endemic in Ethiopia. Given the presence of low-cost prevention, and morbidity management and disability prevention services, it is
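
    A minimal sketch of the case-count step: multiply the predicted prevalence surface by the gridded adult population and sum over cells. Both grids are synthetic placeholders, not the Ethiopian surfaces.

    ```python
    # Sketch: national case estimate from prevalence and population grids.
    import numpy as np

    rng = np.random.default_rng(9)
    prevalence = np.clip(rng.normal(0.04, 0.03, (100, 100)), 0, 1)  # predicted grid
    adult_pop = rng.gamma(2.0, 2500.0, (100, 100))                  # persons per cell

    cases = prevalence * adult_pop
    print(f"estimated national cases: {cases.sum():,.0f}")
    ```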

  6. Evaluation of Contaminations and Sources of Heavy Metals in Sediments at Samrak Delta of Nakdong River in Busan, Korea Using Geostatistical Methods

    Science.gov (United States)

    Chung, Sang Yong; Senapathi, Venkatramanan; Khakimov, Elyor; Selvam, Sekar; Oh, Yun Yeong

    2016-04-01

    This research used several geostatistical methods to assess heavy metal contamination and its sources in sediments at the Samrak Delta of the Nakdong River in Busan, Korea. The mean concentrations of heavy metals in sediments were Fe (16.42%), Al (15.56%), Mn (0.31%), Zn, Pb, Cr (0.03%), Ni (0.02%) and Cu (0.008%), which were attributed mainly to intense industrial and irrigation activities as well as geogenic sources. Groundwater in the sediments also contains high concentrations of heavy metals such as Fe and Mn. Canonical correlation analysis (CCA) exhibited a significant relationship between physicochemical parameters (sand, silt, clay, TOC, CaCO3) and heavy metals (Fe, Al, Mn, Zn, Pb, Cr, Ni, Cu), and showed the importance of physicochemical parameters in regulating the amount of heavy metals in sediments. An artificial neural network (ANN) showed good correlation and model efficiency for simulated outputs except for Fe, Pb and Zn. Silt, clay, TOC and CaCO3 controlled the concentrations of heavy metals in sediments. Principal component analysis (PCA) produced two factor loadings: PCA 1 comprising Fe, Mn, Pb, TOC, Cr, silt and Al (75.4% of variance), and PCA 2 comprising Cu, Ni, Zn and CaCO3 (24.6% of variance), suggesting that heavy metals originated from geogenic sources and effluents from industries. Cluster analysis (CA) was helpful for classifying the contamination sources of heavy metals. This study suggests that geostatistical techniques are essential for the effective management of heavy metal contamination and for policy decision-making processes to reduce the contamination level of heavy metals in the deltaic region.
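
    A minimal sketch of the PCA step, assuming scikit-learn: standardise the concentrations, extract two components and inspect the variance each explains. The data matrix is a synthetic placeholder, not the Samrak Delta measurements.

    ```python
    # Sketch: PCA of standardised heavy metal concentrations.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(10)
    metals = ["Fe", "Al", "Mn", "Zn", "Pb", "Cr", "Ni", "Cu"]
    X = rng.lognormal(mean=0.0, sigma=0.5, size=(597, len(metals)))

    pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
    print(pca.explained_variance_ratio_)   # share of variance per component
    ```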


  8. What Mathematics Education Can Learn from Art: The Assumptions, Values, and Vision of Mathematics Education

    Science.gov (United States)

    Dietiker, Leslie

    2015-01-01

    Elliot Eisner proposed that educational challenges can be met by applying an artful lens. This article draws from Eisner's proposal to consider the assumptions, values, and vision of mathematics education by theorizing mathematics curriculum as an art form. By conceptualizing mathematics curriculum (both in written and enacted forms) as stories…

  9. Letters: Milk and Mortality : Study used wrong assumption about galactose content of fermented dairy products

    NARCIS (Netherlands)

    Hettinga, K.A.

    2014-01-01

    Michaëlsson and colleagues’ proposed mechanism for the effect of milk intake on the risk of mortality and fractures is based on the assumption that fermented dairy products (which had the opposite effects to those of non-fermented milk) are free of galactose.1 For most fermented dairy products,

  10. 76 FR 29675 - Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian Country

    Science.gov (United States)

    2011-05-23

    ... Part 50 RIN 1105-AB38 Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian... State criminal jurisdiction under Public Law 280 (18 U.S.C. 1162(a)) to request that the United States accept concurrent criminal jurisdiction within the tribe's Indian country, and for the Attorney General...

  11. Mutual assumptions and facts about nondisclosure among clinical supervisors and students in group supervision

    DEFF Research Database (Denmark)

    Nielsen, Geir Høstmark; Skjerve, Jan; Jacobsen, Claus Haugaard

    2009-01-01

    In the two preceding papers of this issue of Nordic Psychology the authors report findings from a study of nondisclosure among student therapists and clinical supervisors. The findings were reported separately for each group. In this article, the two sets of findings are held together and compared, so as to draw a picture of mutual assumptions and facts about nondisclosure among students and supervisors.

  12. Sensitivity Analysis and Bounding of Causal Effects with Alternative Identifying Assumptions

    Science.gov (United States)

    Jo, Booil; Vinokur, Amiram D.

    2011-01-01

    When identification of causal effects relies on untestable assumptions regarding nonidentified parameters, sensitivity of causal effect estimates is often questioned. For proper interpretation of causal effect estimates in this situation, deriving bounds on causal parameters or exploring the sensitivity of estimates to scientifically plausible…

  13. Metaphorical Mirror: Reflecting on Our Personal Pursuits to Discover and Challenge Our Teaching Practice Assumptions

    Science.gov (United States)

    Wagenheim, Gary; Clark, Robert; Crispo, Alexander W.

    2009-01-01

    The goal of this paper is to examine how our personal pursuits--hobbies, activities, interests, and sports--can serve as a metaphor to reflect who we are in our teaching practice. This paper explores the notion that our favorite personal pursuits serve as metaphorical mirrors to reveal deeper assumptions we hold about the skills, values, and…

  14. 76 FR 21252 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-04-15

    ... covered by title IV of the Employee Retirement Income Security Act of 1974. DATES: Effective May 1, 2011...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in Appendix B to Part 4022 to...

  15. 76 FR 81966 - Agency Information Collection Activities; Proposed Collection; Comments Requested; Assumption of...

    Science.gov (United States)

    2011-12-29

    ... Indian country is subject to State criminal jurisdiction under Public Law 280 (18 U.S.C. 1162(a)) to... Collection; Comments Requested; Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian Country ACTION: 60-Day notice of information collection under review. The Department of Justice...

  16. 77 FR 75549 - Allocation of Assets in Single-Employer Plans; Interest Assumptions for Valuing Benefits

    Science.gov (United States)

    2012-12-21

    ... Plans to prescribe interest assumptions for valuation dates in the first quarter of 2013. The interest... plan benefits under terminating single-employer plans covered by title IV of the Employee Retirement... regulation are updated quarterly and are intended to reflect current conditions in the financial and annuity...

  17. Common-Sense Chemistry: The Use of Assumptions and Heuristics in Problem Solving

    Science.gov (United States)

    Maeyer, Jenine Rachel

    2013-01-01

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build…

  18. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and are likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  19. A computational model to investigate assumptions in the headturn preference procedure

    NARCIS (Netherlands)

    Bergmann, C.; Bosch, L.F.M. ten; Fikkert, J.P.M.; Boves, L.W.J.

    2013-01-01

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2)

  20. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid