WorldWideScience

Sample records for stratigraphy predictive model

  1. Predicting snowpack stratigraphy in forested environments

    Science.gov (United States)

    Andreadis, K. M.; Lettenmaier, D. P.

    2009-04-01

    The interaction of forest canopies with snow accumulation and ablation processes is critical to the hydrology of many mid- and high-latitude areas. The layered character of snowpacks increases the complexity of representing these processes and of deconvolving the return signal from remote sensors. However, it offers the opportunity to infer the metamorphic signature of the snowpack and to extract additional information by combining multiple frequencies (visible and passive/active microwave). Implementation of this approach requires knowledge of the stratigraphy of snowpack microphysical properties (temperature, density, and grain size), which, as a practical matter, can only be produced by predictive models. A mass and energy balance model for snow accumulation and ablation processes in forested environments was developed using extensive measurements of snow interception and release at a maritime mountainous site in Oregon. A multiple-layer component was added to the model to account for snowpack stratigraphy resulting from snow densification, vapor transport, and grain growth. The model was evaluated using two years of weighing lysimeter data and was able to reproduce the snow water equivalent (SWE) evolution throughout both winters, both beneath the canopy and in a nearby clearing. The model was also evaluated against measurements from a BOReal Ecosystem-Atmosphere Study (BOREAS) field site in Canada to test the robustness of the canopy snow interception algorithm in a much different climate. Simulated SWE was relatively close to the observations for the forested sites, with discrepancies evident in some cases. Although the model formulation appeared robust for both types of climate, sensitivity to parameters such as snow roughness length, maximum interception capacity, and number of layers suggested the magnitude of improvement in SWE simulations that might be achieved by calibration. Finally, the model's ability to replicate large-scale snowpack layer features and their
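
    The layered densification and grain-growth processes described above can be caricatured in a few lines of code. The sketch below advances a stack of snowpack layers through one compaction and grain-growth step; the rate constants, the 550 kg/m³ settling limit, and the 1/r growth law are illustrative assumptions only, not the formulation of the Andreadis and Lettenmaier model.

```python
# Hypothetical sketch of one time step in a multilayer snowpack model.
# All rate laws and constants below are illustrative assumptions.

def step_layers(layers, dt_hours=1.0, compaction_rate=0.01, growth_rate=5e-5):
    """Advance each layer's density (kg/m^3) and grain radius (m) one step."""
    out = []
    for density, grain_r in layers:
        # simple exponential approach toward an assumed settled density of 550
        density = density + compaction_rate * (550.0 - density) * dt_hours / 24.0
        # grain growth slows as grains coarsen (assumed 1/r law)
        grain_r = grain_r + growth_rate * dt_hours / 24.0 / (grain_r * 1e3)
        out.append((density, grain_r))
    return out

# two layers: (density, grain radius), e.g. fresh snow over an older layer
layers = [(120.0, 0.5e-3), (250.0, 1.0e-3)]
for _ in range(24):  # one day of hourly steps
    layers = step_layers(layers)
```

    Stepping the stack hourly nudges each layer toward the settled density while preserving the initial density ordering, which is the qualitative behaviour a layered SWE model needs before any calibration.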

  2. Integrating sequence stratigraphy and rock-physics to interpret seismic amplitudes and predict reservoir quality

    Science.gov (United States)

    Dutta, Tanima

    This dissertation focuses on the link between seismic amplitudes and reservoir properties. Prediction of reservoir properties, such as sorting, sand/shale ratio, and cement volume, from seismic amplitudes improves by integrating knowledge from multiple disciplines. The key contribution of this dissertation is to improve the prediction of reservoir properties by integrating sequence stratigraphy and rock physics. Sequence stratigraphy has been used successfully for qualitative interpretation of seismic amplitudes to predict reservoir properties, while rock physics modeling allows quantitative interpretation of seismic amplitudes. However, there is often uncertainty about selecting a geologically appropriate rock physics model and its input parameters away from the wells. In the present dissertation, we exploit the predictive power of sequence stratigraphy to extract the spatial trends of the sedimentological parameters that control seismic amplitudes. These spatial trends can serve as valuable constraints in rock physics modeling, especially away from the wells. Consequently, rock physics modeling, integrated with the trends from sequence stratigraphy, becomes useful for interpreting observed seismic amplitudes away from the wells in terms of the underlying sedimentological parameters. We illustrate this methodology using a comprehensive dataset from channelized turbidite systems deposited in minibasin settings offshore Equatorial Guinea, West Africa. First, we present a practical recipe for using closed-form expressions of effective medium models to predict seismic velocities in unconsolidated sandstones. We use an effective medium model that combines perfectly rough and smooth grains (the extended Walton model), and use that model to derive coordination number-porosity-pressure relations for P- and S-wave velocities from experimental data. Our recipe provides reasonable fits to other experimental and borehole data, and specifically

  3. Application of sequence stratigraphy to carbonate reservoir prediction, Early Palaeozoic eastern Warburton basin, South Australia

    Energy Technology Data Exchange (ETDEWEB)

    Xiaowen S.; Stuart, W.J.

    1996-12-31

    The Early Palaeozoic Warburton Basin underlies the gas- and oil-producing Cooper and Eromanga Basins. Postdepositional tectonism created high potential fracture porosities, complicating the stratigraphy and making reservoir prediction difficult. Sequence stratigraphy integrating core, cuttings, well-log, seismic and biostratigraphic data has recognized a carbonate-dominated to mixed carbonate/siliciclastic supersequence comprising several depositional sequences. Biostratigraphy based on trilobites and conodonts ensures reliable well and seismic correlations across structurally complex areas. Lithofacies interpretation indicates sedimentary environments ranging from carbonate inner shelf, peritidal, shelf edge, deep outer shelf and slope to basin. Log facies show gradually upward-shallowing trends or abrupt changes indicating possible sequence boundaries. With essential depositional models and sequence analysis from well data, seismic facies suggest general reflection configurations including parallel-continuous layered patterns indicating a uniform neritic shelf, and mounded structures suggesting carbonate build-ups and pre-existing volcanic relief. Seismic stratigraphy also reveals inclined slope and onlapping margins of a possibly isolated platform geometry. The potential reservoirs are dolomitized carbonates containing oomoldic, vuggy, intercrystalline and fracture porosities in lowstand systems tracts, either on carbonate mounds and shelf crests or below the shelf edge. The source rock is a deep basinal argillaceous mudstone, and the seal is fine-grained siltstone/shale of the transgressive systems tract.

  5. Volcanic stratigraphy: A review

    Science.gov (United States)

    Martí, Joan; Groppelli, Gianluca; Brum da Silveira, Antonio

    2018-05-01

    Volcanic stratigraphy is a fundamental component of geological mapping in volcanic areas, as it yields the basic criteria and essential data for identifying the spatial and temporal relationships between volcanic products and intra/inter-eruptive processes (earth-surface, tectonic and climatic), which in turn provides greater understanding of the geological evolution of a region. Establishing precise stratigraphic relationships in volcanic successions is not only essential for understanding the past behaviour of volcanoes and for predicting how they might behave in the future, but is also critical for establishing guidelines for exploring economic and energy resources associated with volcanic systems, and for reconstructing the evolution of sedimentary basins in which volcanism has played a significant role. Like classical stratigraphy, volcanic stratigraphy should be defined using a systematic methodology that can provide an organised and comprehensive description of the temporal and spatial evolution of volcanic terrain. This review explores the different methods employed in studies of volcanic stratigraphy, examines four case studies that use differing stratigraphic approaches, and recommends methods for systematic volcanic stratigraphy based on the concepts of traditional stratigraphy, adapted to the needs of volcanic environments.

  6. Modelling of cyclical stratigraphy using Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Kulatilake, P.H.S.W.

    1987-07-01

    The state of the art in modelling cyclical stratigraphy using first-order Markov chains is reviewed, and shortcomings of the presently available procedures are identified. A procedure that eliminates all of the identified shortcomings is presented, and the statistical tests required to perform this modelling are given in detail. An example (the Oficina Formation in eastern Venezuela) illustrates the presented procedure. 12 refs., 3 tabs., 1 fig.
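
    As a minimal sketch of the first-order Markov chain machinery such studies rest on, the following counts upward lithology transitions in a measured section and row-normalizes them into a transition-probability matrix. The four lithology states and the short example column are invented for illustration, and the statistical tests the paper requires (e.g. a chi-square test against an independent-events model) are not reproduced here.

```python
import numpy as np

# States and the example stratigraphic column (listed bottom to top)
# are invented for illustration.
states = ["sand", "silt", "shale", "coal"]
column = ["sand", "silt", "shale", "coal", "shale", "silt", "sand",
          "silt", "shale", "coal", "shale", "silt", "sand"]

idx = {s: i for i, s in enumerate(states)}
counts = np.zeros((len(states), len(states)))
for lower, upper in zip(column, column[1:]):
    counts[idx[lower], idx[upper]] += 1  # tally each upward transition

# Row-normalize: P[i, j] = probability that state j directly overlies state i.
P = counts / counts.sum(axis=1, keepdims=True)
```

    In this toy column every coal bed is overlain by shale, so the coal row of P puts all its mass on shale; cyclicity shows up as strongly off-diagonal structure in the matrix.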

  7. Correlation between high resolution sequence stratigraphy and mechanical stratigraphy for enhanced fracture characteristic prediction

    Science.gov (United States)

    Al Kharusi, Laiyyan M.

    Sequence stratigraphy relates changes in vertical and lateral facies distribution to relative changes in sea level. In carbonates, these relative changes affect early diagenesis, types of pores, cementation, and dissolution patterns. As a result, relative changes in sea level significantly impact the lithology, porosity, diagenesis, and bed and bounding surfaces, all factors that control fracture patterns. This study explores these relationships by integrating stratigraphy with fracture analysis and petrophysical properties. A special focus is given to the relationship between mechanical boundaries and sequence stratigraphic boundaries in three different settings: (1) Mississippian strata in Sheep Mountain Anticline, Wyoming; (2) Mississippian limestones in St. Louis, Missouri; and (3) Pennsylvanian limestones intermixed with clastics in the Paradox Basin, Utah. The analysis of these sections demonstrates that a fracture hierarchy exists in relation to the sequence stratigraphic hierarchy. The majority of fractures (80%) terminate at genetic unit boundaries or the internal flooding surface that separates the transgressive from the regressive hemicycle. The fractures (20%) that do not terminate at genetic unit boundaries or their internal flooding surface terminate at lower-order sequence stratigraphic boundaries or their internal flooding surfaces. Secondly, fracture spacing relates well to bed thickness in mechanical units no greater than 0.5 m thick, but with increasing bed thickness a scatter from the linear trend is observed. In the Paradox Basin, the influence of strain on fracture density is illustrated by two sections measured in different strain regimes: the folded strata at Raplee Anticline have higher fracture densities than the flat-lying beds at the Honaker Trail. Cemented low-porosity rocks in the Paradox Basin do not show a correlation between fracture pattern and porosity. However, velocity and rock stiffness moduli display a slight

  8. High resolution sequence stratigraphy in China

    International Nuclear Information System (INIS)

    Zhang Shangfeng; Zhang Changmin; Yin Yanshi; Yin Taiju

    2008-01-01

    Since high resolution sequence stratigraphy was introduced into China by DENG Hong-wen in 1995, it has passed through two development stages: an initial stage of theoretical research, followed by a stage in which the theory was developed and applied; it is now entering a stage of theoretical maturity and widespread application. Practice has shown that high resolution sequence stratigraphy plays an increasingly important role in the exploration and development of oil and gas in Chinese continental oil-bearing basins, and the research field has spread to the exploration of coal, uranium, and other stratabound deposits. However, the theory of high resolution sequence stratigraphy still has shortcomings and should be improved in many respects. The authors point out that high resolution sequence stratigraphy should be characterized quantitatively and modelled using computer techniques. (authors)

  9. Mars Stratigraphy Mission

    Science.gov (United States)

    Budney, C. J.; Miller, S. L.; Cutts, J. A.

    2000-01-01

    The Mars Stratigraphy Mission lands a rover on the surface of Mars, which then descends a cliff in Valles Marineris to study the stratigraphy. The rover carries a unique complement of instruments to analyze and age-date materials encountered during its descent past 2 km of strata. The science objective of the Mars Stratigraphy Mission is to identify the geologic history of the layered deposits in the Valles Marineris region of Mars. This includes constraining the time interval for formation of these deposits by measuring the ages of various layers, and determining the origin of the deposits (volcanic or sedimentary) by measuring their composition and imaging their morphology.

  10. Three-dimensional model of reference thermal/mechanical and hydrological stratigraphy at Yucca Mountain, southern Nevada

    International Nuclear Information System (INIS)

    Ortiz, T.S.; Williams, R.L.; Nimick, F.B.; Whittet, B.C.; South, D.L.

    1985-10-01

    The Nevada Nuclear Waste Storage Investigations (NNWSI) project is currently examining the feasibility of constructing a nuclear waste repository in the tuffs beneath Yucca Mountain. A three-dimensional model of the thermal/mechanical and hydrological reference stratigraphy at Yucca Mountain has been developed for use in performance assessment and repository design studies involving material properties data. The reference stratigraphy defines units with distinct thermal, physical, mechanical, and hydrological properties. The model is a collection of surface representations, each surface representing the base of a particular unit. The reliability of the model was evaluated by comparing the generated surfaces with existing geologic maps and cross sections, drill hole data, and geologic interpolation. Interpolation of surfaces between drill holes by the model closely matches the existing information. The top of a zone containing prevalent zeolite is defined and superimposed on the reference stratigraphy. Interpretation of the geometric relations between the zeolitic and the thermal/mechanical and hydrological surfaces indicates that the zeolitic zone was established before the major portion of local fault displacement took place; however, faulting and zeolitization may have been partly concurrent. The thickness of the proposed repository host rock, the devitrified, relatively lithophysal-poor, moderately to densely welded portion of the Topopah Spring Member of the Paintbrush Tuff, was evaluated and found to vary from 400 to 800 ft in the repository area. The distance from the repository to the groundwater level was estimated to vary from 700 to 1400 ft. 13 figs., 1 tab.

  11. Modeling of the Sedimentary Interbedded Basalt Stratigraphy for the Idaho National Laboratory Probabilistic Seismic Hazard Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Suzette Payne

    2006-04-01

    This report summarizes how the effects of the sedimentary interbedded basalt stratigraphy were modeled in the probabilistic seismic hazard analysis (PSHA) of the Idaho National Laboratory (INL). Drill holes indicate that the bedrock beneath INL facilities is composed of about 1.1 km of alternating layers of basalt rock and loosely consolidated sediments. Alternating layers of hard rock and “soft” loose sediments tend to attenuate seismic energy more strongly than uniform rock does, owing to scattering and damping. The INL PSHA incorporated the effects of the sedimentary interbedded basalt stratigraphy by developing site-specific shear (S) wave velocity profiles. The profiles were used in the PSHA to model the near-surface site response by developing site-specific stochastic attenuation relationships.
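
    One simple quantity that shows why such a layered profile matters is the travel-time-averaged shear-wave velocity used in site-response work (the Vs30-style harmonic mean). The sketch below computes it for an alternating sediment/basalt stack; the layer thicknesses and velocities are invented stand-ins, not the actual INL profile.

```python
# Time-averaged S-wave velocity of a layered profile: total thickness
# divided by total vertical travel time (a harmonic-style mean).

def time_averaged_vs(thicknesses_m, velocities_ms):
    """Travel-time-averaged shear-wave velocity over the given layers."""
    travel_time = sum(h / v for h, v in zip(thicknesses_m, velocities_ms))
    return sum(thicknesses_m) / travel_time

# invented alternating "soft" sediment and basalt layers
profile_h = [5.0, 10.0, 5.0, 10.0]          # thicknesses, m
profile_v = [300.0, 1500.0, 350.0, 1600.0]  # S-wave velocities, m/s
vs_avg = time_averaged_vs(profile_h, profile_v)
```

    Because the slow sediment layers dominate the travel time, the averaged velocity (roughly 680 m/s here) sits far below the arithmetic mean of the layer velocities; this is the amplification-relevant effect that site-specific velocity profiles capture.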

  13. Stratigraphy of the crater Copernicus

    Science.gov (United States)

    Paquette, R.

    1984-01-01

    The stratigraphy of Copernicus, based on its olivine absorption bands, is presented. Earth-based spectral data are used to develop models that also employ cratering mechanics to devise theories for Copernican geomorphology. General geologic information, spectral information, upper and lower stratigraphic units, and a chart for model comparison are included in the stratigraphic analysis.

  14. Global stratigraphy [of planet Mars]

    Science.gov (United States)

    Tanaka, Kenneth L.; Scott, David H.; Greeley, Ronald

    1992-01-01

    Attention is given to recent major advances in the definition and documentation of Martian stratigraphy and geology. Mariner 9 provided the images for the first global geologic mapping program, resulting in the recognition of the major geologic processes that have operated on the planet, and in the definition of the three major chronostratigraphic divisions: the Noachian, Hesperian, and Amazonian Systems. Viking Orbiter images permitted the recognition of additional geologic units and the formal naming of many formations. Epochs are assigned absolute ages based on the densities of superposed craters and crater-flux models. Recommendations are made with regard to future areas of study, namely, crustal stratigraphy and structure, the highland-lowland boundary, the Tharsis Rise, Valles Marineris, channels and valley networks, and possible Martian oceans, lakes, and ponds.

  15. 3D mechanical stratigraphy of a deformed multi-layer: Linking sedimentary architecture and strain partitioning

    Science.gov (United States)

    Cawood, Adam J.; Bond, Clare E.

    2018-01-01

    Stratigraphic influence on structural style and strain distribution in deformed sedimentary sequences is well established in 2D models of mechanical stratigraphy. In this study we attempt to refine existing models of stratigraphy-structure interaction by examining outcrop-scale 3D variations in sedimentary architecture and their effects on subsequent deformation. At Monkstone Point, Pembrokeshire, SW Wales, digital mapping and virtual scanline data from a high resolution virtual outcrop have been combined with field observations, sedimentary logs and thin section analysis. Results show that significant variation in strain partitioning is controlled by changes, at a scale of tens of metres, in sedimentary architecture within Upper Carboniferous fluvio-deltaic deposits. Coupled vs. uncoupled deformation of the sequence is defined by the composition and lateral continuity of mechanical units and unit interfaces. Where the sedimentary sequence is characterized by gradational changes in composition and grain size, we find that deformation structures are best characterized by patterns of distributed strain. In contrast, distinct compositional changes, vertically and in laterally equivalent deposits, result in highly partitioned deformation and strain. The mechanical stratigraphy of the study area is inherently 3D in nature, due to lateral and vertical compositional variability. Consideration should be given to 3D variations in mechanical stratigraphy, such as those outlined here, when predicting subsurface deformation in multi-layers.

  16. Approach to first principles model prediction of measured WIPP [Waste Isolation Pilot Plant] in situ room closure in salt

    International Nuclear Information System (INIS)

    Munson, D.E.; Fossum, A.F.; Senseny, P.E.

    1989-01-01

    The discrepancies between predicted and measured WIPP in situ Room D closures are markedly reduced through the use of a Tresca flow potential, an improved small-strain constitutive model, an improved set of material parameters, and a modified stratigraphy. 17 refs., 8 figs., 1 tab.

  17. Effect of explicit representation of detailed stratigraphy on brine and gas flow at the Waste Isolation Pilot Plant

    International Nuclear Information System (INIS)

    Christian-Frear, T.L.; Webb, S.W.

    1996-04-01

    Stratigraphic units of the Salado Formation at the Waste Isolation Pilot Plant (WIPP) disposal room horizon include various layers of halite, polyhalitic halite, argillaceous halite, clay, and anhydrite. Current models, including those used in the WIPP Performance Assessment calculations, employ a "composite stratigraphy" approach in modeling. This study was initiated to evaluate the impact that an explicit representation of the detailed stratigraphy around the repository may have on fluid flow, compared to the simplified "composite stratigraphy" models currently employed. Sensitivity of model results to intrinsic permeability anisotropy, interbed fracturing, two-phase characteristic curves, and gas-generation rates was studied. The results of this study indicate that explicit representation of the stratigraphy maintains higher pressures and does not allow as much fluid to leave the disposal room as the "composite stratigraphy" approach does. However, the differences are relatively small. Gas migration distances also differ between the two approaches. However, for the two cases in which explicit layering results differed considerably from the composite model (anisotropic and vapor-limited), the gas-migration distances for both models were negligible. For the cases in which gas migration distances were considerable (van Genuchten/Parker and interbed fracture), the differences between the two models were fairly insignificant. Overall, this study suggests that explicit representation of the stratigraphy in the WIPP PA models is not required for the parameter variations modeled if "global quantities" (e.g., disposal room pressures, net brine and gas flux into and out of disposal rooms) are the only concern.

  19. The stratigraphy of Mars

    Science.gov (United States)

    Tanaka, Kenneth L.

    1986-01-01

    A global stratigraphy of Mars was developed from a global geologic map series derived from Viking images; the series is composed of three maps. A new chronostratigraphic classification system consisting of lower, middle, and upper Noachian, Hesperian, and Amazonian systems is described. The crater-density boundaries of the chronostratigraphic units and the absolute ages of the Martian epochs are estimated. The relative ages of major geologic units and features are calculated and analyzed. The geologic history of Mars is summarized on the maps in terms of epochs.

  20. The Stratigraphy and Evolution of the Lunar Crust

    Science.gov (United States)

    McCallum, I. Stewart

    1998-01-01

    Reconstruction of stratigraphic relationships in the ancient lunar crust has proved to be a formidable task. The intense bombardment during the first 700 m.y. of lunar history has severely perturbed the original stratigraphy and destroyed the primary textures of all but a few nonmare rocks. However, a knowledge of the crustal stratigraphy as it existed prior to the cataclysmic bombardment about 3.9 Ga is essential to test the major models proposed for crustal origin, i.e., crystal fractionation in a global magmasphere or serial magmatism in a large number of smaller bodies. Despite the large difference in scale implicit in these two models, both require an efficient separation of plagioclase and mafic minerals to form the anorthositic crust and the mafic mantle. Despite the havoc wreaked by the large body impactors, these same impact processes have brought to the lunar surface crystalline samples derived from at least the upper half of the lunar crust, thereby providing an opportunity to reconstruct the stratigraphy in areas sampled by the Apollo missions. As noted, ejecta from the large multiring basins are dominantly, or even exclusively, of crustal origin. Given the most recent determinations of crustal thicknesses, this implies an upper limit to the depth of excavation of about 60 km. Of all the lunar samples studied, a small set has been recognized as "pristine", and within this pristine group, a small fraction have retained some vestiges of primary features formed during the earliest stages of crystallization or recrystallization prior to 4.0 Ga. We have examined a number of these samples that have retained some record of primary crystallization to deduce thermal histories from an analysis of structural, textural, and compositional features in minerals from these samples. Specifically, by quantitative modeling of (1) the growth rate and development of compositional profiles of exsolution lamellae in pyroxenes and (2) the rate of Fe-Mg ordering in

  1. Stratigraphy and paleohydrology of delta channel deposits, Jezero crater, Mars

    Science.gov (United States)

    Goudge, Timothy A.; Mohrig, David; Cardenas, Benjamin T.; Hughes, Cory M.; Fassett, Caleb I.

    2018-02-01

    The Jezero crater open-basin lake contains two well-exposed fluvial sedimentary deposits formed early in martian history. Here, we examine the geometry and architecture of the Jezero western delta fluvial stratigraphy using high-resolution orbital images and digital elevation models (DEMs). The goal of this analysis is to reconstruct the evolution of the delta and associated shoreline position. The delta outcrop contains three distinct classes of fluvial stratigraphy that we interpret, from oldest to youngest, as: (1) point bar strata deposited by repeated flood events in meandering channels; (2) inverted channel-filling deposits formed by avulsive distributary channels; and (3) a valley that incises the deposit. We use DEMs to quantify the geometry of the channel deposits and estimate flow depths of ∼7 m for the meandering channels and ∼2 m for the avulsive distributary channels. Using these estimates, we employ a novel approach for assessing paleohydrology of the formative channels in relative terms. This analysis indicates that the shift from meandering to avulsive distributary channels was associated with an approximately four-fold decrease in the water to sediment discharge ratio. We use observations of the fluvial stratigraphy and channel paleohydrology to propose a model for the evolution of the Jezero western delta. The delta stratigraphy records lake level rise and shoreline transgression associated with approximately continuous filling of the basin, followed by outlet breaching, and eventual erosion of the delta. Our results imply a martian surface environment during the period of delta formation that supplied sufficient surface runoff to fill the Jezero basin without major drops in lake level, but also with discrete flooding events at non-orbital (e.g., annual to decadal) timescales.
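
    The paper's relative paleohydrology analysis is not reproduced here, but the flavour of a depth-based discharge comparison can be sketched with the standard Manning relation per unit width, q = (1/n) h^(5/3) S^(1/2). Assuming, purely for illustration, that slope S and roughness n were shared by the two channel classes, both cancel in the ratio and only the estimated flow depths matter.

```python
# Relative unit discharge from flow depth via Manning's relation,
# q = (1/n) * h**(5/3) * sqrt(S). With shared (assumed) slope S and
# roughness n, the ratio reduces to a pure depth scaling.

def unit_discharge_ratio(depth_a_m, depth_b_m):
    """Ratio q_a / q_b of unit discharges under shared slope and roughness."""
    return (depth_a_m / depth_b_m) ** (5.0 / 3.0)

# depth estimates from the study: ~7 m meandering vs ~2 m avulsive channels
ratio = unit_discharge_ratio(7.0, 2.0)
```

    This scaling gives a roughly eightfold contrast in water discharge per unit width between the two channel classes; the paper's four-fold change refers to the water-to-sediment discharge ratio, which additionally requires the sediment side of the budget.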

  2. Remanent magnetization stratigraphy of lunar cores

    Science.gov (United States)

    Banerjee, S. K.; Gingrich, D.; Marvin, J. A.

    1977-01-01

    Depth dependent fluctuations have been observed in the natural remanent magnetizations (NRM) of drive cores and drill strings from Apollo 16 and 17 missions. Partial demagnetization of unstable secondary magnetizations and identification of characteristic error signals from a core which is known to have been recently disturbed allow us to identify and isolate the stable NRM stratigraphy in double drive core 60010/60009 and drill strings 60002-60004. The observed magnetization fluctuations persist after normalization to take into account depth dependent variations in the carriers of stable NRM. We tentatively ascribe the stable NRM stratigraphy to instantaneous records of past magnetic fields at the lunar surface and suggest that the stable NRM stratigraphy technique could develop as a new relative time-stratigraphic tool, to be used with other physical measurements such as relative intensity of ferromagnetic resonance and charged particle track density to study the evolution of the lunar regolith.

  3. Depth and stratigraphy of regolith. Site descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Nyman, Helena; Sohlenius, Gustav; Stroemgren, Maarten; Brydsten, Lars

    2008-06-01

At the Laxemar-Simpevarp site, numerical and descriptive modelling are performed both for the deep bedrock and for the surface systems. The surface geology and regolith depth are important parameters for, e.g., hydrogeological and geochemical modelling and for the overall understanding of the area. Regolith refers to all the unconsolidated deposits overlying the bedrock. The regolith depth model (RDM) presented here visualizes the stratigraphical distribution of the regolith as well as the elevation of the bedrock surface. The model covers 280 km², including both terrestrial and marine areas. In the model the stratigraphy is represented by six layers (Z1-Z6) that correspond to different types of regolith. The model is geometric, and the properties of the layers are assigned by the user according to the purpose. The GeoModel program, which is an ArcGIS extension, was used for modelling the regolith depths. A detailed topographical Digital Elevation Model (DEM) and a map of Quaternary deposits were used as input to the model. Altogether 319 boreholes and 440 other stratigraphical observations were also used. Furthermore, a large number of depth data interpreted from geophysical investigations were used: refraction seismic measurements from 51 profiles, 11,000 observation points from resistivity measurements, and almost 140,000 points from seismic and sediment echo sounding data. The results from the refraction seismic and resistivity measurements give information about the total regolith depths, whereas most other data also give information about the stratigraphy of the regolith. Some of the observations used did not reach the bedrock surface. They do, however, describe the minimum regolith depth at each location and were therefore used wherever the modelled regolith depth would otherwise have been thinner. A large proportion of the modelled area has a low data density and the area was therefore divided into nine domains.
These domains were defined based
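
The handling of boreholes that stopped short of bedrock can be sketched as a lower-bound constraint on an interpolated depth surface. The arrays below are toy values, not SDM-Site Laxemar data, and `np.maximum` is an illustrative stand-in for how GeoModel enforces the constraint.

```python
import numpy as np

# Sketch of the minimum-depth constraint described above: a borehole that
# stopped short of bedrock only bounds regolith thickness from below, so
# the interpolated depth surface is raised wherever it violates that
# bound. Values are toy numbers, not SDM-Site Laxemar data.

interpolated = np.array([3.2, 1.0, 4.5, 0.8])  # modelled regolith depth (m)
min_depth = np.array([0.0, 2.5, 0.0, 2.0])     # depth reached without bedrock (m)

constrained = np.maximum(interpolated, min_depth)
```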

  4. Depth and stratigraphy of regolith. Site descriptive modelling SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Nyman, Helena (SWECO Position, Stockholm (Sweden)); Sohlenius, Gustav (Geological Survey of Sweden (SGU), Uppsala (Sweden)); Stroemgren, Maarten; Brydsten, Lars (Umeaa Univ., Umeaa (Sweden))

    2008-06-15

At the Laxemar-Simpevarp site, numerical and descriptive modelling are performed both for the deep bedrock and for the surface systems. The surface geology and regolith depth are important parameters for, e.g., hydrogeological and geochemical modelling and for the overall understanding of the area. Regolith refers to all the unconsolidated deposits overlying the bedrock. The regolith depth model (RDM) presented here visualizes the stratigraphical distribution of the regolith as well as the elevation of the bedrock surface. The model covers 280 km², including both terrestrial and marine areas. In the model the stratigraphy is represented by six layers (Z1-Z6) that correspond to different types of regolith. The model is geometric, and the properties of the layers are assigned by the user according to the purpose. The GeoModel program, which is an ArcGIS extension, was used for modelling the regolith depths. A detailed topographical Digital Elevation Model (DEM) and a map of Quaternary deposits were used as input to the model. Altogether 319 boreholes and 440 other stratigraphical observations were also used. Furthermore, a large number of depth data interpreted from geophysical investigations were used: refraction seismic measurements from 51 profiles, 11,000 observation points from resistivity measurements, and almost 140,000 points from seismic and sediment echo sounding data. The results from the refraction seismic and resistivity measurements give information about the total regolith depths, whereas most other data also give information about the stratigraphy of the regolith. Some of the observations used did not reach the bedrock surface. They do, however, describe the minimum regolith depth at each location and were therefore used wherever the modelled regolith depth would otherwise have been thinner. A large proportion of the modelled area has a low data density and the area was therefore divided into nine domains.
These domains were defined based on

  5. A Geostatistical Toolset for Reconstructing Louisiana's Coastal Stratigraphy using Subsurface Boring and Cone Penetrometer Test Data

    Science.gov (United States)

    Li, A.; Tsai, F. T. C.; Jafari, N.; Chen, Q. J.; Bentley, S. J.

    2017-12-01

A vast area of river deltaic wetlands stretches across the southern Louisiana coast. The wetlands are suffering from a high rate of land loss, which increasingly threatens coastal communities and energy infrastructure. A regional stratigraphic framework of the delta plain is now imperative to answer scientific questions (such as how the delta plain grows and decays) and to provide information for coastal protection and restoration projects (such as marsh creation and construction of levees and floodwalls). Over the years, subsurface investigations in Louisiana have been conducted by state and federal agencies (Louisiana Department of Natural Resources, United States Geological Survey, United States Army Corps of Engineers, etc.), research institutes (Louisiana Geological Survey, LSU Coastal Studies Institute, etc.), engineering firms, and oil-gas companies. This has resulted in the availability of various types of data, including geological, geotechnical, and geophysical data. However, it is challenging to integrate the different types of data and construct three-dimensional stratigraphic models at regional scale. In this study, a set of geostatistical methods was used to tackle this problem. An ordinary kriging method was used to regionalize continuous data, such as grain size, water content, liquid limit, plasticity index, and cone penetrometer tests (CPTs). Indicator kriging and multiple indicator kriging methods were used to regionalize categorical data, such as soil classification. A compositional kriging method was used to regionalize compositional data, such as soil composition (fractions of sand, silt, and clay). Stratigraphy models were constructed for three cases in the coastal zone: (1) Inner Harbor Navigation Canal (IHNC) area: soil classification and soil behavior type (SBT) stratigraphies were constructed using ordinary kriging; (2) Middle Barataria Bay area: a soil classification stratigraphy was constructed using multiple indicator kriging; (3) Lower Barataria
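
A minimal illustration of the ordinary-kriging step described above, written as a self-contained 1-D sketch. The study itself used regional 3-D toolsets; the exponential variogram parameters and the borehole values below are invented for demonstration.

```python
import numpy as np

# 1-D ordinary kriging sketch (illustrative; variogram parameters and
# data values are assumptions, not from the Louisiana study).

def variogram(h, sill=1.0, rng=50.0):
    """Exponential variogram with practical range `rng` (no nugget)."""
    return sill * (1.0 - np.exp(-3.0 * h / rng))

def krige(xs, zs, x0, sill=1.0, rng=50.0):
    n = len(xs)
    # Ordinary-kriging system: variogram matrix bordered by the
    # Lagrange row/column enforcing weights that sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(np.abs(xs[:, None] - xs[None, :]), sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.abs(xs - x0), sill, rng)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ zs)

xs = np.array([0.0, 10.0, 30.0, 60.0])  # borehole positions (m)
zs = np.array([2.0, 2.5, 4.0, 3.0])     # e.g. water content at boreholes
z_hat = krige(xs, zs, 15.0)             # estimate between boreholes
```

With no nugget, kriging honors the data exactly, so the estimate at a borehole location reproduces the measured value.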

  6. Spectral stratigraphy

    Science.gov (United States)

    Lang, Harold R.

    1991-01-01

    A new approach to stratigraphic analysis is described which uses photogeologic and spectral interpretation of multispectral remote sensing data combined with topographic information to determine the attitude, thickness, and lithology of strata exposed at the surface. The new stratigraphic procedure is illustrated by examples in the literature. The published results demonstrate the potential of spectral stratigraphy for mapping strata, determining dip and strike, measuring and correlating stratigraphic sequences, defining lithofacies, mapping biofacies, and interpreting geological structures.

  7. Stratigraphy -- The Fall of Continuity.

    Science.gov (United States)

    Byers, Charles W.

    1982-01-01

    Reviews advances in stratigraphy as illustrated in the current geological literature, discussing discontinuity and how the recognition of discontinuity in the stratigraphic record is changing views of Superposition and Original Lateral Continuity. (Author/JN)

  8. Prediction of tectonic stresses and fracture networks with geomechanical reservoir models

    International Nuclear Information System (INIS)

    Henk, A.; Fischer, K.

    2014-09-01

This project evaluates the potential of geomechanical Finite Element (FE) models for the prediction of in situ stresses and fracture networks in faulted reservoirs. Modeling focuses on spatial variations of the in situ stress distribution resulting from faults and contrasts in mechanical rock properties. In a first, methodological part, a workflow is developed for building such geomechanical reservoir models and calibrating them to field data. In the second part, this workflow was applied successfully to an intensely faulted gas reservoir in the North German Basin. A truly field-scale geomechanical model covering more than 400 km² was built and calibrated. It includes a mechanical stratigraphy as well as a network of 86 faults. The latter are implemented as distinct planes of weakness and allow the fault-specific evaluation of shear and normal stresses. A so-called static model describes the recent state of the reservoir and, thus, after calibration its results reveal the present-day in situ stress distribution. Further geodynamic modeling work considers the major stages in the tectonic history of the reservoir and provides insights into the paleo-stress distribution. These results are compared to fracture data and hydraulic fault behavior observed today. The outcome of this project confirms the potential of geomechanical FE models for robust stress and fracture predictions. The workflow is generally applicable and can be used for modeling of any stress-sensitive reservoir.

  9. Prediction of tectonic stresses and fracture networks with geomechanical reservoir models

    Energy Technology Data Exchange (ETDEWEB)

    Henk, A.; Fischer, K. [TU Darmstadt (Germany). Inst. fuer Angewandte Geowissenschaften

    2014-09-15

This project evaluates the potential of geomechanical Finite Element (FE) models for the prediction of in situ stresses and fracture networks in faulted reservoirs. Modeling focuses on spatial variations of the in situ stress distribution resulting from faults and contrasts in mechanical rock properties. In a first, methodological part, a workflow is developed for building such geomechanical reservoir models and calibrating them to field data. In the second part, this workflow was applied successfully to an intensely faulted gas reservoir in the North German Basin. A truly field-scale geomechanical model covering more than 400 km² was built and calibrated. It includes a mechanical stratigraphy as well as a network of 86 faults. The latter are implemented as distinct planes of weakness and allow the fault-specific evaluation of shear and normal stresses. A so-called static model describes the recent state of the reservoir and, thus, after calibration its results reveal the present-day in situ stress distribution. Further geodynamic modeling work considers the major stages in the tectonic history of the reservoir and provides insights into the paleo-stress distribution. These results are compared to fracture data and hydraulic fault behavior observed today. The outcome of this project confirms the potential of geomechanical FE models for robust stress and fracture predictions. The workflow is generally applicable and can be used for modeling of any stress-sensitive reservoir.

  10. Basalt stratigraphy - Pasco Basin

    International Nuclear Information System (INIS)

    Waters, A.C.; Myers, C.W.; Brown, D.J.; Ledgerwood, R.K.

    1979-10-01

    The geologic history of the Pasco Basin is sketched. Study of the stratigraphy of the area involved a number of techniques including major-element chemistry, paleomagnetic investigations, borehole logging, and other geophysical survey methods. Grande Ronde basalt accumulation in the Pasco Basin is described. An illustrative log response is shown. 1 figure

  11. Mechanics of wind ripple stratigraphy.

    Science.gov (United States)

    Forrest, S B; Haff, P K

    1992-03-06

Stratigraphic patterns preserved under translating surface undulations or ripples in a depositional eolian environment are computed on a grain-by-grain basis using physically based cellular automata models. The spontaneous appearance, growth, and motion of the simulated ripples correspond in many respects to the behavior of natural ripples. The simulations show that climbing strata can be produced by impact alone; direct action of fluid shear is unnecessary. The model provides a means for evaluating the connection between mechanical processes occurring in the paleoenvironment during deposition and the resulting stratigraphy preserved in the geologic column: vertical compression of small laminae above a planar surface indicates nascent ripple growth; supercritical laminae are associated with unusually intense deposition episodes; and a plane erosion surface separating sets of well-developed laminae is consistent with continued migration of mature ripples during a hiatus in deposition.
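
The grain-hopping idea can be sketched as a heavily simplified cellular automaton. The hop length, bed size, and bare transport rule below are illustrative assumptions; real ripple models of the kind described add wind-shadow screening and angle-of-repose relaxation, which is what organizes the bed into ripples.

```python
import random

# Minimal saltation cellular automaton in the spirit of grain-by-grain
# ripple models (illustrative only: real models add wind-shadow
# screening and angle-of-repose relaxation).

def step(h, hop=5, rng=random):
    """One impact event: a grain leaves a random cell and lands
    `hop` cells downwind (periodic boundary)."""
    n = len(h)
    i = rng.randrange(n)
    if h[i] > 0:                  # nothing to move from a bare cell
        h[i] -= 1
        h[(i + hop) % n] += 1

random.seed(0)
bed = [10] * 100                  # flat initial bed: 10 grains per cell
for _ in range(20000):
    step(bed)
relief = max(bed) - min(bed)      # roughness developed from the flat bed
```

Grains are only moved, never created or destroyed, so total mass is conserved exactly, a property worth asserting in any such simulation.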

  12. Application status and vistas of sequence stratigraphy to the exploration of sandstone-hosted uranium deposits

    International Nuclear Information System (INIS)

    Guo Qingyin; Chen Zuyi; Yu Jinshui; Han Shuqin

    2008-01-01

Sequence stratigraphy is a relatively new discipline that developed out of seismostratigraphy; it has been widely applied in the exploration of hydrocarbons and other sediment-hosted mineral deposits, with considerable success. However, its application to the exploration of sandstone-hosted uranium deposits is only beginning. In this paper, some initial achievements in applying sequence stratigraphy to the exploration of sandstone-hosted uranium deposits are summarized, and the problems encountered in its application, together with their causes, are discussed. Furthermore, based on the characteristics of sandstone-hosted uranium deposits and the development of sequence stratigraphy, the prospects of applying sequence stratigraphy to their exploration are assessed. Finally, directions for application are proposed and some specific suggestions are given. (authors)

  13. Identifying Students' Conceptions of Basic Principles in Sequence Stratigraphy

    Science.gov (United States)

    Herrera, Juan S.; Riggs, Eric M.

    2013-01-01

    Sequence stratigraphy is a major research subject in the geosciences academia and the oil industry. However, the geoscience education literature addressing students' understanding of the basic concepts of sequence stratigraphy is relatively thin, and the topic has not been well explored. We conducted an assessment of 27 students' conceptions of…

  14. Sequence Stratigraphy of the Dakota Sandstone, Eastern San Juan Basin, New Mexico, and its Relationship to Reservoir Compartmentalization

    Energy Technology Data Exchange (ETDEWEB)

    Varney, Peter J.

    2002-04-23

    This research established the Dakota-outcrop sequence stratigraphy in part of the eastern San Juan Basin, New Mexico, and relates reservoir quality lithologies in depositional sequences to structure and reservoir compartmentalization in the South Lindrith Field area. The result was a predictive tool that will help guide further exploration and development.

  15. Research on sequence stratigraphy of Shuixigou group, jurassic in Yili basin

    International Nuclear Information System (INIS)

    Li Shengfu; Wang Guo; Wei Anxin; Qi Junrang; Zhang Zhongwang

    2006-01-01

The Shuixigou Group (Jurassic) in the Yili basin is a coal-bearing clastic succession formed under a basically consistent tectonic stress regime and sedimentary environment. In the middle of the Group (Sangonghe Formation), abrupt changes appear in lithology and rock colour, with a corresponding abrupt turn in the logging curves. From the viewpoint of sequence stratigraphy, the authors analyse this phenomenon in terms of chronostratigraphy, lithology, sedimentary system tracts, sedimentary rhythm, and the interpretation of logging curves. They suggest that an unconformity surface (or sequence boundary) exists in the middle of the Group, and attempt a re-division of the sequence stratigraphy of the Shuixigou Group. (authors)

  16. Sequence Stratigraphy of the Dakota Sandstone, Eastern San Juan Basin, New Mexico, and its Relationship to Reservoir Compartmentalization; FINAL

    International Nuclear Information System (INIS)

    Varney, Peter J.

    2002-01-01

    This research established the Dakota-outcrop sequence stratigraphy in part of the eastern San Juan Basin, New Mexico, and relates reservoir quality lithologies in depositional sequences to structure and reservoir compartmentalization in the South Lindrith Field area. The result was a predictive tool that will help guide further exploration and development

  17. Coastal barrier stratigraphy for Holocene high-resolution sea-level reconstruction.

    Science.gov (United States)

    Costas, Susana; Ferreira, Óscar; Plomaritis, Theocharis A; Leorri, Eduardo

    2016-12-08

    The uncertainties surrounding present and future sea-level rise have revived the debate around sea-level changes through the deglaciation and mid- to late Holocene, from which arises a need for high-quality reconstructions of regional sea level. Here, we explore the stratigraphy of a sandy barrier to identify the best sea-level indicators and provide a new sea-level reconstruction for the central Portuguese coast over the past 6.5 ka. The selected indicators represent morphological features extracted from coastal barrier stratigraphy, beach berm and dune-beach contact. These features were mapped from high-resolution ground penetrating radar images of the subsurface and transformed into sea-level indicators through comparison with modern analogs and a chronology based on optically stimulated luminescence ages. Our reconstructions document a continuous but slow sea-level rise after 6.5 ka with an accumulated change in elevation of about 2 m. In the context of SW Europe, our results show good agreement with previous studies, including the Tagus isostatic model, with minor discrepancies that demand further improvement of regional models. This work reinforces the potential of barrier indicators to accurately reconstruct high-resolution mid- to late Holocene sea-level changes through simple approaches.

  18. Stratigraphy and Evolution of Delta Channel Deposits, Jezero Crater, Mars

    Science.gov (United States)

    Goudge, T. A.; Mohrig, D.; Cardenas, B. T.; Hughes, C. M.; Fassett, C. I.

    2017-01-01

    The Jezero impact crater hosted an open-basin lake that was active during the valley network forming era on early Mars. This basin contains a well exposed delta deposit at the mouth of the western inlet valley. The fluvial stratigraphy of this deposit provides a record of the channels that built the delta over time. Here we describe observations of the stratigraphy of the channel deposits of the Jezero western delta to help reconstruct its evolution.

  19. Depositional sequence analysis and sedimentologic modeling for improved prediction of Pennsylvanian reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Watney, W.L.

    1994-12-01

Reservoirs in the Lansing-Kansas City limestone result from complex interactions among paleotopography (deposition, concurrent structural deformation), sea level, and diagenesis. Analysis of reservoirs and surface and near-surface analogs has led to the development of a "strandline grainstone model" in which relative sea level stabilized during regressions, resulting in accumulation of multiple grainstone buildups along depositional strike. The resulting stratigraphy in these carbonate units is generally predictable, correlating with inferred topographic elevation along the shelf. This model is a valuable predictive tool for (1) locating favorable reservoirs for exploration, and (2) anticipating internal properties of the reservoir for field development. Reservoirs in the Lansing-Kansas City limestones are developed in both oolitic and bioclastic grainstones; however, re-analysis of oomoldic reservoirs provides the greatest opportunity for developing bypassed oil. A new technique, the "Super" Pickett crossplot (formation resistivity vs. porosity), and its use in an integrated petrophysical characterization have been developed to evaluate the extractable oil remaining in these reservoirs. The manual method, in combination with 3-D visualization and modeling, can help to target production-limiting heterogeneities in these complex reservoirs and, moreover, compute critical parameters for the field such as bulk volume water. Application of this technique indicates that 6-9 million barrels of Lansing-Kansas City oil remain behind pipe in the Victory-Northeast Lemon Fields. Petroleum geologists are challenged to quantify inferred processes to aid in developing rational, geologically consistent models of sedimentation so that acceptable levels of prediction can be obtained.
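
Pickett-style crossplot analysis rests on Archie's equation, from which water saturation and then bulk volume water follow. The sketch below shows that arithmetic; the parameter values (a, m, n, Rw) are generic textbook defaults, not calibrated values from the Lansing-Kansas City study.

```python
# Archie's equation and bulk volume water (BVW), the quantities behind a
# Pickett (resistivity vs. porosity) crossplot. Parameters a, m, n, rw
# are generic defaults, not field-specific values.

def water_saturation(phi, rt, rw=0.05, a=1.0, m=2.0, n=2.0):
    """Archie water saturation from porosity (fraction) and
    true formation resistivity rt (ohm-m)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

def bulk_volume_water(phi, rt, **kw):
    """BVW = porosity * water saturation."""
    return phi * water_saturation(phi, rt, **kw)

sw = water_saturation(0.20, 10.0)    # 20% porosity, Rt = 10 ohm-m
bvw = bulk_volume_water(0.20, 10.0)
```

On a log-log Pickett plot these relations appear as straight lines of slope -m, which is why the crossplot makes saturation and BVW readable by inspection.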

  20. Stratigraphy of the Martian northern plains

    Science.gov (United States)

    Tanaka, K. L.

    1993-01-01

    The northern plains of Mars are roughly defined as the large continuous region of lowlands that lies below Martian datum, plus higher areas within the region that were built up by volcanism, sedimentation, tectonism, and impacts. These northern lowlands span about 50 x 10(exp 6) km(sup 2) or 35 percent of the planet's surface. The age and origin of the lowlands continue to be debated by proponents of impact and tectonic explanations. Geologic mapping and topical studies indicate that volcanic, fluvial, and eolian deposition have played major roles in the infilling of this vast depression. Periglacial, glacial, fluvial, eolian, tectonic, and impact processes have locally modified the surface. Because of the northern plains' complex history of sedimentation and modification, much of their stratigraphy was obscured. Thus the stratigraphy developed is necessarily vague and provisional: it is based on various clues from within the lowlands as well as from highland areas within and bordering the plains. The results are summarized.

  1. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

-registration, depth correction, etc.), each geophysical profile was evaluated against the core data. Applying traditional geophysical techniques, the best profiles were inverted using the core data, creating two-dimensional (2-D) stratigraphic regolith models for each transect that were evaluated using independent validation. Next, in a test of an alternative method borrowed from digital soil mapping, the best preprocessed geophysical profiles were co-registered and stratigraphic models for each property were created using multivariate environmental correlation. After independent validation, the quality of the latter models was compared to the traditionally derived 2-D inverted models. Finally, the best overall stratigraphic models were used in conjunction with local environmental data (e.g. geology, geochemistry, terrain, soils) to create conceptual regolith hillslope models for each transect, highlighting important features and processes, e.g. morphology, hydropedology and weathering characteristics. Results are presented with recommendations regarding the use of geophysics in modelling regolith stratigraphy at fine scales.

  2. Bio-, Magneto- and event-stratigraphy across the K-T boundary

    Science.gov (United States)

    Preisinger, A.; Stradner, H.; Mauritsch, H. J.

    1988-01-01

Determining the timing and the time structure of rare events in geology can be accomplished by applying three different and independent stratigraphic methods: biostratigraphy, magneto-stratigraphy and event-stratigraphy. The optimal time resolution of the two former methods is about 1000 years, while by means of event-stratigraphy a resolution of approximately one year can be achieved. For biostratigraphy across the Cretaceous-Tertiary (K-T) boundary, micro- and nannofossils have been found best suited. The qualitative and quantitative analyses of minerals and trace elements across the K-T boundary show anomalies on a millimeter scale and permit conclusions regarding the time structure of the K-T event itself. The results of the analyses are most consistently explained by the assumption of an extraterrestrial impact. The main portion of the material rain from the atmosphere was evidently deposited within a short time. The long-term components consist of the finest portion of the material rain from the atmosphere and of transported and redeposited fall-out.

  3. Snowpack modeling in the context of radiance assimilation for snow water equivalent mapping

    Science.gov (United States)

    Durand, M. T.; Kim, R. S.; Li, D.; Dumont, M.; Margulis, S. A.

    2017-12-01

Data assimilation is often touted as a means of overcoming deficiencies in both snowpack modeling and snowpack remote sensing. Direct assimilation of microwave radiances, rather than assimilation of microwave retrievals, has shown promise in this context. This is especially the case for deep mountain snow, which is often assumed to be infeasible to measure with microwave measurements due to saturation issues. We first demonstrate that the typical way of understanding saturation has often been misunderstood. We show that deep snow leads to a complex microwave signature, but not to saturation per se, because of snowpack stratigraphy. This explains why radiance assimilation requires detailed snowpack models that adequately represent stratigraphy in order to function accurately. We examine this with two case studies. First, we show how the CROCUS predictions of snowpack stratigraphy allow for assimilation of airborne passive microwave measurements over three 1 km² CLPX Intensive Study Areas. Snowpack modeling and particle filter analysis are performed at 120 m spatial resolution. When run without the benefit of radiance assimilation, CROCUS does not fully capture spatial patterns in the data (R2=0.44; RMSE=26 cm). Assimilation of microwave radiances for a single flight recovers the spatial pattern of snow depth (R2=0.85; RMSE=13 cm). This is despite the presence of deep snow; measured depths range from 150 to 325 cm. Adequate results are obtained even for partial forest cover and bias in precipitation forcing. The results are severely degraded if a three-layer snow model is used, however. The importance of modeling snowpack stratigraphy is highlighted. Second, we compare this study to a recent analysis assimilating spaceborne radiances for a 511 km² sub-watershed of the Kern River, in the Sierra Nevada. Here, the daily Level 2A AMSR-E footprints (88 km²) are assimilated into a model running at 90 m spatial resolution. The three-layer model is specially adapted to predict "effective
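
The particle filter analysis step mentioned above can be sketched as a minimal sequential importance resampling (SIR) update. The observation operator here is a toy identity map standing in for a radiative transfer model, and all numbers (prior spread, observation, error) are synthetic.

```python
import numpy as np

# Minimal SIR particle filter analysis step (illustrative: the real
# system maps snowpack state to microwave radiance; here the
# observation operator is an identity toy and all values are synthetic).

rng = np.random.default_rng(42)

def analysis(particles, obs, obs_err, h=lambda x: x):
    """Reweight an ensemble by a Gaussian likelihood, then resample."""
    innov = obs - h(particles)                   # innovations
    w = np.exp(-0.5 * (innov / obs_err) ** 2)    # Gaussian likelihood
    w /= w.sum()                                 # normalize weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                        # resampled posterior

prior = rng.normal(2.0, 0.5, size=500)   # prior snow-depth ensemble (m)
post = analysis(prior, obs=2.6, obs_err=0.1)
```

With a tight observation error (0.1 m) relative to the prior spread (0.5 m), the resampled ensemble mean moves most of the way from the prior mean toward the observed 2.6 m.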

  4. Preliminary sequence stratigraphy and tectonic evolution of the Tokar Delta, (Southern Sudanese Red Sea)

    OpenAIRE

    Yagoub, Abbas Musa

    2007-01-01

    The study area comprises about 2500sq.Kms. From the seismic data acquired by the Oil Companies (Chevron 1975-76 Total 1980 and IPC 1992), thirty two seismic lines were selected Fig (2). Also synthetic seismograms of Suakin-1, Bashayer-1 A and Bashayer-2A wells are used. The stratigraphy and sedimentation of the Sudanese Red Sea can be placed into four major tectonic phases, Pre-rifting stratigraphy, Syn-Rift Pre-Salt Stratigarphy, Salt Phase and Syn-Rift Post Salt Stratigraphy. Basic conce...

  5. How to Make Eccentricity Cycles in Stratigraphy: the Role of Compaction

    Science.gov (United States)

    Liu, W.; Hinnov, L.; Wu, H.; Pas, D.

    2017-12-01

Milankovitch cycles from astronomically driven climate variations have been demonstrated to be preserved in cyclostratigraphy throughout geologic time. These stratigraphic cycles have been identified in many types of proxies, e.g., gamma ray, magnetic susceptibility, oxygen isotopes, carbonate content, grayscale, etc. However, the commonly prominent spectral power of orbital eccentricity cycles in stratigraphy is paradoxical, because insolation is dominated by precession-index power. How is spectral power transferred from precession to eccentricity in stratigraphy? Nonlinear sedimentation and bioturbation have long been identified as players in this transference. Here, we propose that, in the absence of bioturbation, differential compaction can generate the transference. Using insolation time series, we trace the steps by which insolation is transformed into stratigraphy, and how differential compaction of lithology acts to transfer spectral power from precession to eccentricity. Differential compaction is applied to unique values of insolation, which is assumed to control the type of deposited sediment. High compaction is applied to muds, and progressively lower compaction is applied to silts and sands, or carbonate. Linear differential compaction promotes eccentricity spectral power, but nonlinear differential compaction elevates eccentricity spectral power to dominance and precession spectral power to near collapse, as is often observed in real stratigraphy. Keywords: differential compaction, cyclostratigraphy, insolation, eccentricity
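
The demodulation mechanism can be demonstrated with a toy signal: an eccentricity-scale envelope modulating a precession carrier contains no spectral line at the envelope period, but a nonlinear distortion of the record creates one. Squaring below is a generic stand-in nonlinearity, not the paper's compaction law, and the frequencies (1/21 and 1/100 cycles/kyr) are rounded orbital values.

```python
import numpy as np

# Toy demonstration of precession-to-eccentricity power transfer by a
# nonlinear recording process. Squaring stands in for a nonlinear
# compaction response; it is NOT the paper's calibrated compaction law.

t = np.arange(0.0, 2000.0, 1.0)                       # 2 Myr, 1 kyr steps
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * t / 100.0)  # eccentricity-like
signal = envelope * np.cos(2 * np.pi * t / 21.0)      # modulated precession

def power_at(x, period_kyr):
    """Periodogram power at the bin nearest 1/period_kyr."""
    f = np.fft.rfftfreq(len(x), d=1.0)
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return p[np.argmin(np.abs(f - 1.0 / period_kyr))]

nonlinear = signal ** 2                   # nonlinear distortion of record

ecc_before = power_at(signal, 100.0)      # essentially no 100 kyr line
ecc_after = power_at(nonlinear, 100.0)    # strong 100 kyr line appears
```

The modulated signal carries eccentricity only as sidebands of the precession line; the quadratic distortion demodulates the envelope, producing a discrete spectral line at the 100 kyr period.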

  6. Integrated stratigraphy of the late Campanian - Maastrichtian in the Danish Basin

    DEFF Research Database (Denmark)

    Boussaha, Myriam; Thibault, Nicolas Rudolph; Stemmerik, Lars

    2016-01-01

    New stratigraphic (calcareous nannofossils and carbon isotope) and sedimentological results from the upper Campanian – Maastrichtian of the Stevns-2 core, Denmark, allow for regional correlations and building of a solid age-model with the definition of 17 stratigraphic tie-points in the Danish Ba...... is presented for the Danish Basin, calibrated in age and correlated to high-resolution carbon isotope stratigraphy. Common, regional changes in sediment accumulation rates highlight trends that appear linked to climatic conditions....

  7. Sequence stratigraphy as a scientific enterprise: the evolution and persistence of conflicting paradigms

    Science.gov (United States)

    Miall, Andrew D.; Miall, Charlene E.

    2001-08-01

    In the 1970s, seismic stratigraphy represented a new paradigm in geological thought. The development of new techniques for analyzing seismic-reflection data constituted a "crisis," as conceptualized by T.S. Kuhn, and stimulated a revolution in stratigraphy. We analyze here a specific subset of the new ideas, that pertaining to the concept of global-eustasy and the global cycle chart published by Vail et al. [Vail, P.R., Mitchum, R.M., Jr., Todd, R.G., Widmier, J.M., Thompson, S., III, Sangree, J.B., Bubb, J.N., Hatlelid, W.G., 1977. Seismic stratigraphy and global changes of sea-level. In: Payton, C.E. (Ed.), Seismic Stratigraphy—Applications to Hydrocarbon Exploration, Am. Assoc. Pet. Geol. Mem. 26, pp. 49-212.] The global-eustasy model posed two challenges to the "normal science" of stratigraphy then underway: (1) that sequence stratigraphy, as exemplified by the global cycle chart, constitutes a superior standard of geologic time to that assembled from conventional chronostratigraphic evidence, and (2) that stratigraphic processes are dominated by the effects of eustasy, to the exclusion of other allogenic mechanisms, including tectonism. While many stratigraphers now doubt the universal validity of the model of global-eustasy, what we term the global-eustasy paradigm, a group of sequence researchers led by Vail still adheres to it, and the two conceptual approaches have evolved into two conflicting paradigms. Those who assert that there are multiple processes generating stratigraphic sequences (possibly including eustatic processes) are adherents of what we term the complexity paradigm. Followers of this paradigm argue that tests of the global cycle chart amount to little more than circular reasoning. A new body of work documenting the European sequence record was published in 1998 by de Graciansky et al. These workers largely follow the global-eustasy paradigm. Citation and textual analysis of this work indicates that they have not responded to any of the

  8. Digital Stratigraphy: Contextual Analysis of File System Traces in Forensic Science.

    Science.gov (United States)

    Casey, Eoghan

    2017-12-28

    This work introduces novel methods for conducting forensic analysis of file allocation traces, collectively called digital stratigraphy. These in-depth forensic analysis methods can provide insight into the origin, composition, distribution, and time frame of strata within storage media. Using case examples and empirical studies, this paper illuminates the successes, challenges, and limitations of digital stratigraphy. This study also shows how understanding file allocation methods can provide insight into concealment activities and how real-world computer usage can complicate digital stratigraphy. Furthermore, this work explains how forensic analysts have misinterpreted traces of normal file system behavior as indications of concealment activities. This work raises awareness of the value of taking the overall context into account when analyzing file system traces. This work calls for further research in this area and for forensic tools to provide necessary information for such contextual analysis, such as highlighting mass deletion, mass copying, and potential backdating. © 2017 American Academy of Forensic Sciences.

  9. Stratigraphy and dissolution of the Rustler Formation

    International Nuclear Information System (INIS)

    Bachman, G.O.

    1987-01-01

This report describes the physical stratigraphy of the Rustler Formation, because the Waste Isolation Pilot Plant will be constructed in salt beds that underlie this formation. Described are subdivisions of the formation, the major karst features, and a proposed method for the formation of Nash Draw. 2 refs., 2 figs.

  10. Sediment yield model implementation based on check dam infill stratigraphy in a semiarid Mediterranean catchment

    Directory of Open Access Journals (Sweden)

    G. Bussi

    2013-08-01

Soil loss and sediment transport in Mediterranean areas are driven by complex non-linear processes which have been only partially understood. Distributed models can be very helpful tools for understanding the catchment-scale phenomena that lead to soil erosion and sediment transport. In this study, a modelling approach is proposed to reproduce and evaluate erosion and sediment yield processes in a Mediterranean catchment (Rambla del Poyo, Valencia, Spain). Due to the lack of sediment transport records for model calibration and validation, a detailed description of the alluvial stratigraphy infilling a check dam that drains a 12.9 km2 sub-catchment was used as an indirect source of sediment yield data. These dam infill sediments showed evidence of at least 15 depositional events (floods) over the period 1990–2009. The TETIS model, a distributed conceptual hydrological and sediment model, was coupled to the Sediment Trap Efficiency for Small Ponds (STEP) model to reproduce reservoir retention, and it was calibrated and validated using the sedimentation volumes estimated for the depositional units associated with discrete runoff events. The results show relatively low net erosion rates compared to other Mediterranean catchments (0.136 Mg ha−1 yr−1), probably due to the extensive outcrops of limestone bedrock, thin soils and rather homogeneous vegetation cover. The simulated sediment production and transport rates are satisfactory, and are further supported by on-site palaeohydrological evidence and by spatial validation using additional check dams, showing the great potential of the presented data-assimilation methodology for the quantitative analysis of sediment dynamics in ungauged Mediterranean basins.

  11. Stratigraphy of the type Maastrichtian – a synthesis

    NARCIS (Netherlands)

    Jagt, J.W.M.; Jagt-Yazykova, E.A.

    2012-01-01

    A synthesis of the stratigraphy of the Maastrichtian Stage in its extended type area, that is, southern Limburg (the Netherlands), and adjacent Belgian and German territories, is presented with a brief historical overview. Quarrying activities at the large quarry complex of ENCI-HeidelbergCement

  12. Reference stratigraphy and rock properties for the Waste Isolation Pilot Plant (WIPP) project

    International Nuclear Information System (INIS)

    Krieg, R.D.

    1984-01-01

A stratigraphic description of the country rock near the working horizon at the Waste Isolation Pilot Plant (WIPP) is presented along with a set of mechanical and thermal properties of the materials involved. Data from 41 cores and shafts are examined. The entire stratigraphic section is found to vary in elevation in a regular manner, but individual layer thicknesses and relative separation between layers are found to have no statistically significant variation over the one mile north to south extent of the working horizon. The stratigraphic description is taken to be relative to the local elevation of Anhydrite b. The material properties have been updated slightly from those in the July 1981 Reference Stratigraphy. This reference stratigraphy/properties document is intended primarily for use in thermal/structural analyses. This document supersedes the July 1981 stratigraphy/properties document. 31 references, 7 figures

  13. Geomorphic Transport Laws and the Statistics of Topography and Stratigraphy

    Science.gov (United States)

    Schumer, R.; Taloni, A.; Furbish, D. J.

    2016-12-01

Geomorphic transport laws take the form of partial differential equations in which sediment motion is a deterministic function of slope. The addition of a noise term, representing unmeasurable or subgrid-scale autogenic forcing, reproduces scaling properties similar to those observed in topography, landforms, and stratigraphy. Here we describe a transport law that generalizes previous equations by permitting transport that is local or non-local, in addition to different types of noise. More importantly, we use this transport law to link the character of sediment transport to the statistics of topography and stratigraphy. In particular, we link the origin of the Sadler effect to the evolution of the Earth's surface via a transport law.
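The kind of slope-dependent transport law with an additive noise term described in this abstract can be sketched numerically. The following is a minimal illustration only, not the authors' generalized (non-local) model: a local linear transport law (Edwards-Wilkinson type) with white noise standing in for autogenic forcing, and all parameter values invented.

```python
import numpy as np

def evolve_surface(n=256, steps=2000, D=1.0, dx=1.0, dt=0.1,
                   noise_amp=0.1, seed=0):
    """Evolve a 1-D surface h(x, t) under dh/dt = D * d2h/dx2 + eta,
    where eta is zero-mean white noise representing unresolved
    (subgrid-scale) autogenic forcing."""
    rng = np.random.default_rng(seed)
    h = np.zeros(n)
    for _ in range(steps):
        # periodic-boundary discrete Laplacian (diffusive transport term)
        lap = np.roll(h, -1) - 2.0 * h + np.roll(h, 1)
        h += dt * D * lap / dx**2 + noise_amp * np.sqrt(dt) * rng.standard_normal(n)
    return h

h = evolve_surface()
# interface width (roughness) grows from zero under noisy transport
width = float(np.std(h))
```

The deterministic term alone would flatten the surface; the competition between diffusive smoothing and stochastic forcing is what produces the non-trivial roughness statistics the abstract connects to topography and stratigraphy.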

  14. Stratigraphy, sedimentology and bulk organic geochemistry of black ...

    Indian Academy of Sciences (India)

Stratigraphy, sedimentology and bulk organic geochemistry of black shales from the Proterozoic Vindhyan Supergroup (central India). S Banerjee, S Dutta, S Paikaray and U Mann. Department of Earth Sciences, Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India; Forschungszentrum ...

  15. Integrated stratigraphy and paleoenvironmental reconstruction for the Late Cretaceous Danish chalk based on the Stevns-2 core

    DEFF Research Database (Denmark)

    Boussaha, Myriam; Thibault, Nicolas Rudolph; Stemmerik, Lars

An integrated stratigraphy of the Stevns-2 core, located in eastern Denmark, is presented, based on calcareous nannofossil biostratigraphy and carbon isotope stratigraphy. Carbon and oxygen isotope analyses have been performed on 419 bulk samples. Calcareous nannofossil biostratigraphy has been applied...

  16. Seismic stratigraphy, some examples from Indian Ocean, interpretation of reflection data in interactive mode

    Digital Repository Service at National Institute of Oceanography (India)

    Krishna, K.S.

the sedimentary layers on the basis of density-velocity contrasts. The surfaces may indicate unconformities or conformities. Eventually the method provides an image of the sedimentary strata, with internal surfaces and basaltic basement, and occasionally its... underneath along a profile. Earth scientists interpret these images in terms of stratigraphy. In simple terms, seismic stratigraphy can be defined as the viewing of seismic reflection data with a geological approach. Total...

  17. Archaeological recording and chemical stratigraphy applied to contaminated land studies.

    Science.gov (United States)

    Photos-Jones, Effie; Hall, Allan J

    2011-11-15

The method used by archaeologists for excavation and recording of the stratigraphic evidence, within trenches with or without archaeological remains, can potentially be useful to contaminated land consultants (CLCs). The implementation of archaeological practice in contaminated land assessments (CLAs) is not meant to be an exercise in data overkill; neither should it increase costs. Rather, we suggest that if excavation and recording of the stratigraphy by a trained archaeologist is followed by in-situ chemical characterisation, then much of the uncertainty associated with current field sampling practices may be removed. This is because built into the chemical stratigraphy is the temporal and spatial relationship between different parts of the site, reflecting the logic behind the distribution of contamination. An archaeological recording with chemical stratigraphy approach to sampling may provide 'one method fits all' for potentially contaminated land sites (CLSs), just as archaeological characterisation of the stratigraphic record provides 'one method fits all' for all archaeological sites irrespective of period (prehistoric to modern) or type (rural, urban or industrial). We also suggest that there may be practical and financial benefits to be gained by pooling expertise and resources from different disciplines, not simply at the assessment phase, but also in subsequent phases of contaminated land improvement. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Biostratigraphy and sequence stratigraphy of the Sarvak Formation at Fahliyan Anticline (South of Yasuj)

    International Nuclear Information System (INIS)

    Ahmadi, A.; Vaziri-Moghaddam, A.; Sayrafian, A.; Taheri, A.

    2016-01-01

In this study, the biostratigraphy, depositional environment and sequence stratigraphy of the Sarvak Formation at the Fahliyan Anticline were studied. 8 species of benthic foraminifera (4 genera) and 8 species of planktonic foraminifera (11 genera) were recognized in the study area. 6 biozones have been recognized from the distribution of the foraminifera, which in stratigraphic order are: Favusella washitensis Zone, Orbitolina-Alveolinids Assemblage Zone, Rudist debris Zone, Oligostegina flood Zone, Whiteinella archaeocretacea Zone and Helvetoglobotruncana helvetica Zone. On the basis of these, an Albian–Turonian age was assigned to the Sarvak Formation. Based on petrography and analysis of microfacies features, 9 different microfacies types have been recognized, which can be grouped into 3 depositional environments: lagoon, shoal and open marine. The Sarvak Formation represents sedimentation on a carbonate ramp. Sequence stratigraphy analysis led to the identification of 4 third-order sequences.

  19. Late Carboniferous to Late Permian carbon isotope stratigraphy

    DEFF Research Database (Denmark)

    Buggisch, Werner; Krainer, Karl; Schaffhauser, Maria

    2015-01-01

    An integrated study of the litho-, bio-, and isotope stratigraphy of carbonates in the Southern Alps was undertaken in order to better constrain δ13C variations during the Late Carboniferous to Late Permian. The presented high resolution isotope curves are based on 1299 δ13Ccarb and 396 δ13Corg...

  20. Reading and Abstracting Journal Articles in Sedimentology and Stratigraphy.

    Science.gov (United States)

    Conrad, Susan Howes

    1991-01-01

    An assignment centered on reading journal articles and writing abstracts is an effective way to improve student reading and writing skills in sedimentology and stratigraphy laboratories. Each student reads two articles and writes informative abstracts from the author's point of view. (PR)

  1. Quantifying small-scale spatio-temporal variability of snow stratigraphy in forests based on high-resolution snow penetrometry

    Science.gov (United States)

    Teich, M.; Hagenmuller, P.; Bebi, P.; Jenkins, M. J.; Giunta, A. D.; Schneebeli, M.

    2017-12-01

Snow stratigraphy, the characteristic layering within a seasonal snowpack, has important implications for snow remote sensing, hydrology and avalanches. Forests modify snowpack properties through interception, wind speed reduction, and changes to the energy balance. The lack of snowpack observations in forests limits our ability to understand the evolution of snow stratigraphy and its spatio-temporal variability as a function of forest structure, and to observe snowpack response to changes in forest cover. We examined the snowpack under canopies of a spruce forest in the central Rocky Mountains, USA, using the SnowMicroPen (SMP), a high-resolution digital penetrometer. Weekly repeated penetration-force measurements were recorded along 10 m transects every 0.3 m in winter 2015, and bi-weekly along 20 m transects every 0.5 m in 2016, in three study plots beneath canopies of undisturbed, bark beetle-disturbed and harvested forest stands, and in an open meadow. To disentangle information about layer hardness and depth variabilities, and to quantitatively compare the different SMP profiles, we applied a matching algorithm to our dataset which combines several profiles by automatically adjusting their layer thicknesses. We linked spatial and temporal variabilities of penetration force and depth, and thus snow stratigraphy, to forest and meteorological conditions. Throughout the season, snow stratigraphy was more heterogeneous beneath both undisturbed and bark beetle-disturbed forests. In contrast, and despite remaining small-diameter trees and woody debris, snow stratigraphy was rather homogeneous at the harvested plot. As expected, layering at the non-forested plot varied only slightly over the small spatial extent sampled. At the open and harvested plots, persistent crusts and ice lenses were clearly present in the snowpack, while such hard layers barely occurred beneath undisturbed and disturbed canopies. Due to settling, hardness significantly increased with depth at
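The profile-matching step described above, aligning penetration-force profiles by stretching or compressing layer thicknesses, can be illustrated with a generic dynamic-time-warping alignment. This is a simplified stand-in, not the actual SMP matching algorithm, and the toy profiles are invented.

```python
import numpy as np

def dtw_align(p, q):
    """Dynamic-time-warping alignment of two penetration-force profiles.
    Layer thicknesses are effectively stretched/compressed so that
    similar hardness features line up; returns the total alignment cost
    and the warping path as (depth index in p, depth index in q) pairs."""
    n, m = len(p), len(q)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(p[i - 1] - q[j - 1])  # local hardness mismatch
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # backtrack from the end to recover the matched-depth path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return float(cost[n, m]), path[::-1]

# two identical profiles align with zero cost along the diagonal
p = np.array([1.0, 2.0, 5.0, 2.0, 1.0])
dist, path = dtw_align(p, p)
```

In a real application the path gives, for every depth in one profile, the corresponding depth in the other, which is what allows hardness layers of varying thickness to be compared across transect positions and dates.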

  2. Stratigraphy and erosional landforms of layered deposits in Valles Marineris, Mars

    Science.gov (United States)

    Komatsu, G.; Geissler, P. E.; Strom, R. G.; Singer, R. B.

    1993-01-01

    Satellite imagery is used to identify stratigraphy and erosional landforms of 13 layered deposits in the Valles Marineris region of Mars (occurring, specifically, in Gangis, Juventae, Hebes, Ophir-Candor, Melas, and Capri-Eos Chasmata), based on albedo and erosional styles. Results of stratigraphic correlations show that the stratigraphy of layered deposits in the Hebes, Juventae, and Gangis Chasmata are not well correlated, indicating that at least these chasmata had isolated depositional environments resulting in different stratigraphic sequences. On the other hand, the layered deposits in Ophir-Candor and Melas Chasmata appear to have been connected in each chasma. Some of the layered deposits display complexities which indicate changes in space and time in the dominant source materials.

  3. Spectral Profiler Probe for In Situ Snow Grain Size and Composition Stratigraphy

    Science.gov (United States)

    Berisford, Daniel F.; Molotch, Noah P.; Painter, Thomas

    2012-01-01

An ultimate goal of the climate change, snow science, and hydrology communities is to measure snow water equivalent (SWE) from satellite measurements. Seasonal SWE is highly sensitive to climate change and provides fresh water for much of the world's population. Snowmelt from mountainous regions represents the dominant water source for 60 million people in the United States and over one billion people globally. Determination of the snow grain sizes comprising a mountain snowpack is critical for predicting snowmelt runoff, understanding physical properties and radiation balance, and providing necessary input for interpreting satellite measurements. Both microwave emission and radar backscatter from the snow are dominated by the snow grain-size stratigraphy. As a result, retrieval algorithms for measuring snow water equivalent from orbiting satellites are largely hindered by inadequate knowledge of grain size.

  4. Sedimentary facies control on mechanical and fracture stratigraphy in turbidites

    NARCIS (Netherlands)

    Ogata, Kei; Storti, Fabrizio; Balsamo, Fabrizio; Tinterri, Roberto; Bedogni, Enrico; Fetter, Marcos; Gomes, Leonardo; Hatushika, Raphael

    2017-01-01

    Natural fracture networks exert a first-order control on the exploitation of resources such as aquifers, hydrocarbons, and geothermal reservoirs, and on environmental issues like underground gas storage and waste disposal. Fractures and the mechanical stratigraphy of layered sequences have been

  5. Upper Neogene stratigraphy and tectonics of Death Valley — a review

    Science.gov (United States)

    Knott, J. R.; Sarna-Wojcicki, A. M.; Machette, M. N.; Klinger, R. E.

    2005-12-01

New tephrochronologic, soil-stratigraphic and radiometric-dating studies over the last 10 years have generated a robust numerical stratigraphy for Upper Neogene sedimentary deposits throughout Death Valley. Critical to this improved stratigraphy are correlated or radiometrically dated tephra beds and tuffs that range in age from > 3.58 Ma to Mormon Point. This new geochronology also establishes maximum and minimum ages for Quaternary alluvial fans and Lake Manly deposits. Facies associated with the tephra beds show that at ~3.3 Ma the Furnace Creek basin was a northwest-southeast-trending lake flanked by alluvial fans. This paleolake extended from Furnace Creek to Ubehebe. Based on the new stratigraphy, the Death Valley fault system can be divided into four main fault zones: the dextral, Quaternary-age Northern Death Valley fault zone; the dextral, pre-Quaternary Furnace Creek fault zone; the oblique-normal Black Mountains fault zone; and the dextral Southern Death Valley fault zone. Post-3.3 Ma geometric, structural, and kinematic changes in the Black Mountains and Towne Pass fault zones led to the break-up of the Furnace Creek basin and uplift of the Copper Canyon and Nova basins. The internal kinematics of northern Death Valley are interpreted as either rotation of blocks or normal slip along the northeast-southwest-trending Towne Pass and Tin Mountain fault zones within the Eastern California shear zone.

  6. Persistent Tracers of Historic Ice Flow in Glacial Stratigraphy near Kamb Ice Stream, West Antarctica

    OpenAIRE

    Holschuh, Nicholas; Christianson, Knut; Conway, Howard; Jacobel, Robert W.; Welch, Brian C.

    2018-01-01

    Variations in properties controlling ice flow (e.g., topography, accumulation rate, basal friction) are recorded by structures in glacial stratigraphy. When anomalies that disturb the stratigraphy are fixed in space, the structures they produce advect away from the source, and can be used to trace flow pathways and reconstruct ice-flow patterns of the past. Here we provide an example of one of these persistent tracers: a prominent unconformity in the glacial layering that originates at Mt. Re...

  7. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
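The bagging construction referenced in this abstract, averaging a plug-in prediction over bootstrap resamples, can be sketched with a toy example. This is a generic illustration of Breiman-style bagging, not the paper's formal decision-theoretic setting; the linear model and data are invented.

```python
import numpy as np

def plugin_predict(x, y, x_new):
    """Plug-in prediction: fit a least-squares line once on the observed
    data, then evaluate the fitted model at the new point."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope * x_new + intercept

def bootstrap_predict(x, y, x_new, n_boot=200, seed=0):
    """Bootstrap (bagged) prediction: average the plug-in prediction
    over bootstrap resamples of the training data."""
    rng = np.random.default_rng(seed)
    n = len(x)
    preds = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)  # resample indices with replacement
        preds.append(plugin_predict(x[i], y[i], x_new))
    return float(np.mean(preds))

# noise-free toy data on the line y = 2x + 1: both predictors should
# agree at x_new = 2 (true value 5), since every resample lies on the line
x = np.linspace(0.0, 1.0, 25)
y = 2.0 * x + 1.0
p_plug = float(plugin_predict(x, y, 2.0))
p_bag = bootstrap_predict(x, y, 2.0)
```

With noisy data the two predictions differ: the bagged average smooths out the sampling variability of the single fit, which is the sense in which bootstrap prediction approximates Bayesian prediction under a correctly specified model.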

  8. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  9. Application potential of sequence stratigraphy to prospecting for sandstone-type uranium deposit in continental depositional basins

    International Nuclear Information System (INIS)

    Li Shengxiang; Chen Zhaobo; Chen Zuyi; Xiang Weidong; Cai Yuqi

    2001-01-01

Sequence stratigraphy has been widely used in hydrocarbon exploration and development, with great success. However, its application to prospecting for sandstone-type uranium deposits is just beginning. The metallogenic characteristics of sandstone-type uranium deposits are compared with those of oil and gas, and the relationship between sandstone-type uranium metallogenesis and the system tracts of sequence stratigraphy is studied. The authors propose that highstand system tracts are the main targets for prospecting interlayer-oxidation-zone-type sandstone uranium deposits, that the incised valleys of lowstand system tracts are favourable places for phreatic-oxidation-zone-type sandstone uranium deposits, and that transgressive system tracts are generally unfavourable to the formation of in-situ leachable sandstone-type uranium deposits. Finally, the authors assess the application potential of sequence stratigraphy for prospecting for sandstone-type uranium deposits in continental depositional basins

  10. Sequence stratigraphy, seismic stratigraphy, and seismic structures of the lower intermediate confining unit and most of the Floridan aquifer system, Broward County, Florida

    Science.gov (United States)

    Cunningham, Kevin J.; Kluesner, Jared W.; Westcott, Richard L.; Robinson, Edward; Walker, Cameron; Khan, Shakira A.

    2017-12-08

Deep well injection and disposal of treated wastewater into the Boulder Zone, a highly transmissive saline hydrogeologic unit in the lower part of the Floridan aquifer system, began in 1971. Since the 1990s, however, treated wastewater injected into the Boulder Zone in southeastern Florida has been detected at three injection utilities in the brackish upper part of the Floridan aquifer system, which is designated for potential use as drinking water. At a time when use of the Boulder Zone for treated wastewater disposal is increasing and utilization of the upper part of the Floridan aquifer system for drinking water is intensifying, there is an urgency to understand the nature of cross-formational fluid flow and to identify possible fluid pathways from the lower to the upper zones of the Floridan aquifer system. To better understand the hydrogeologic controls on groundwater movement through the Floridan aquifer system in southeastern Florida, the U.S. Geological Survey and the Broward County Environmental Planning and Community Resilience Division conducted a 3.5-year cooperative study from July 2012 to December 2015. The study characterizes the sequence stratigraphy, seismic stratigraphy, and seismic structures of the lower part of the intermediate confining unit and most of the Floridan aquifer system. Data obtained to meet the study objective include 80 miles of high-resolution, two-dimensional (2D) seismic-reflection profiles acquired from canals in eastern Broward County. These profiles have been used to characterize the sequence stratigraphy, seismic stratigraphy, and seismic structures in a 425-square-mile study area. Horizon mapping of the seismic-reflection profiles and additional data collection from well logs and cores or cuttings from 44 wells were focused on construction of three-dimensional (3D) visualizations of eight

  11. Study on high-resolution sequence stratigraphy framework of uranium-hosting rock series in Qianjiadian sag

    International Nuclear Information System (INIS)

    Chen Fanghong; Zhang Mingyu

    2005-01-01

The ore-hosting Yaojia Formation is composed of a set of braided-stream medium- to fine-grained sediments. Guided by the basic theory of high-resolution sequence stratigraphy, and based on core observation, analysis of the chemical composition of rocks, and data from natural-potential and apparent-resistivity logging, the authors have set up a high-resolution sequence stratigraphy framework for the ore-hosting Yaojia Formation, and discuss the relation of the stratigraphic structure of the middle cycle, the paleotopography, and the microfacies to the formation of the uranium deposit. (authors)

  12. Rootless tephra stratigraphy and emplacement processes

    Science.gov (United States)

    Hamilton, Christopher W.; Fitch, Erin P.; Fagents, Sarah A.; Thordarson, Thorvaldur

    2017-01-01

    Volcanic rootless cones are the products of thermohydraulic explosions involving rapid heat transfer from active lava (fuel) to external sources of water (coolant). Rootless eruptions are attributed to molten fuel-coolant interactions (MFCIs), but previous studies have not performed systematic investigations of rootless tephrostratigraphy and grain-size distributions to establish a baseline for evaluating relationships between environmental factors, MFCI efficiency, fragmentation, and patterns of tephra dispersal. This study examines a 13.55-m-thick vertical section through an archetypal rootless tephra sequence, which includes a rhythmic succession of 28 bed pairs. Each bed pair is interpreted to be the result of a discrete explosion cycle, with fine-grained basal material emplaced dominantly as tephra fall during an energetic opening phase, followed by the deposition of coarser-grained material mainly as ballistic ejecta during a weaker coda phase. Nine additional layers are interleaved throughout the stratigraphy and are interpreted to be dilute pyroclastic density current (PDC) deposits. Overall, the stratigraphy divides into four units: unit 1 contains the largest number of sediment-rich PDC deposits, units 2 and 3 are dominated by a rhythmic succession of bed pairs, and unit 4 includes welded layers. This pattern is consistent with a general decrease in MFCI efficiency due to the depletion of locally available coolant (i.e., groundwater or wet sediments). Changing conduit/vent geometries, mixing conditions, coolant and melt temperatures, and/or coolant impurities may also have affected MFCI efficiency, but the rhythmic nature of the bed pairs implies a periodic explosion process, which can be explained by temporary increases in the water-to-lava mass ratio during cycles of groundwater recharge.

  13. Prediction of calcite Cement Distribution in Shallow Marine Sandstone Reservoirs using Seismic Data

    Energy Technology Data Exchange (ETDEWEB)

    Bakke, N.E.

    1996-12-31

    This doctoral thesis investigates how calcite cemented layers can be detected by reflection seismic data and how seismic data combined with other methods can be used to predict lateral variation in calcite cementation in shallow marine sandstone reservoirs. Focus is on the geophysical aspects. Sequence stratigraphy and stochastic modelling aspects are only covered superficially. Possible sources of calcite in shallow marine sandstone are grouped into internal and external sources depending on their location relative to the presently cemented rock. Well data and seismic data from the Troll Field in the Norwegian North Sea have been analysed. Tuning amplitudes from stacks of thin calcite cemented layers are analysed. Tuning effects are constructive or destructive interference of pulses resulting from two or more closely spaced reflectors. The zero-offset tuning amplitude is shown to depend on calcite content in the stack and vertical stack size. The relationship is found by regression analysis based on extensive seismic modelling. The results are used to predict calcite distribution in a synthetic and a real data example. It is found that describing calcite cemented beds in shallow marine sandstone reservoirs is not a deterministic problem. Hence seismic inversion and sequence stratigraphy interpretation of well data have been combined in a probabilistic approach to produce models of calcite cemented barriers constrained by a maximum amount of information. It is concluded that seismic data can provide valuable information on distribution of calcite cemented beds in reservoirs where the background sandstones are relatively homogeneous. 63 refs., 78 figs., 10 tabs.
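The thin-bed tuning effect underlying the thesis's amplitude analysis can be illustrated with a generic wedge model: a thin high-impedance (calcite-cemented) layer produces a positive reflection at its top and a negative one at its base, and the composite amplitude varies with layer thickness. This sketch is not Bakke's regression model; the Ricker wavelet, velocity, and frequency values are invented for illustration.

```python
import numpy as np

def ricker(t, f):
    """Zero-phase Ricker wavelet with peak frequency f (Hz)."""
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def tuning_amplitude(thickness, v=5000.0, f=30.0):
    """Peak absolute amplitude of the composite response of a thin
    high-impedance layer: equal-magnitude, opposite-polarity reflections
    at top and base, separated by the two-way time through the layer
    (thickness in m, interval velocity v in m/s)."""
    t = np.linspace(-0.1, 0.1, 2001)  # time axis, s
    dt_layer = 2.0 * thickness / v    # two-way time in the layer
    composite = ricker(t, f) - ricker(t - dt_layer, f)
    return float(np.max(np.abs(composite)))

thicknesses = np.linspace(0.1, 60.0, 120)  # metres
amps = [tuning_amplitude(h) for h in thicknesses]
# amplitude first grows with thickness (constructive interference of the
# opposite-polarity pulses), peaks near the tuning thickness, then falls
# back toward the isolated-interface response for well-resolved layers
```

The non-monotonic amplitude-versus-thickness curve is why tuning amplitudes carry information about both calcite content and stack geometry, and why the thesis resolves the relationship by regression over many modelled cases rather than by direct inversion.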

  14. Stratigraphy of the north polar layered deposits of Mars from high-resolution topography

    Science.gov (United States)

    Becerra, Patricio; Byrne, Shane; Sori, Michael M.; Sutton, Sarah; Herkenhoff, Kenneth E.

    2016-01-01

    The stratigraphy of the layered deposits of the polar regions of Mars is theorized to contain a record of recent climate change linked to insolation changes driven by variations in the planet's orbital and rotational parameters. In order to confidently link stratigraphic signals to insolation periodicities, a description of the stratigraphy is required based on quantities that directly relate to intrinsic properties of the layers. We use stereo Digital Terrain Models (DTMs) from the High Resolution Imaging Science Experiment (HiRISE) to derive a characteristic of North Polar Layered Deposits (NPLD) strata that can be correlated over large distances: the topographic protrusion of layers exposed in troughs, which is a proxy for the layers’ resistance to erosion. Using a combination of image analysis and a signal-matching algorithm to correlate continuous depth-protrusion signals taken from DTMs at different locations, we construct a stratigraphic column that describes the upper ~500 m of at least 7% of the area of the NPLD, and find accumulation rates that vary by factors of up to two. We find that, when coupled with observations of exposed layers in orbital images, the topographic expression of the strata is consistently continuous through large distances in the top 300 – 500 m of the NPLD, suggesting it is better related to intrinsic layer properties than brightness alone.

  15. Use of integrated analogue and numerical modelling to predict tridimensional fracture intensity in fault-related-folds.

    Science.gov (United States)

    Pizzati, Mattia; Cavozzi, Cristian; Magistroni, Corrado; Storti, Fabrizio

    2016-04-01

Predicting fracture density patterns with low uncertainty is a fundamental issue for constraining fluid-flow pathways in thrust-related anticlines in the frontal parts of thrust-and-fold belts and accretionary prisms, which can also provide plays for hydrocarbon exploration and development. Among the drivers that concur to determine the distribution of fractures in fold-and-thrust belts, the complex kinematic pathways of folded structures play a key role. In areas with scarce and unreliable underground information, analogue modelling can provide effective support for developing and validating reliable hypotheses on structural architectures and their evolution. In this contribution, we propose a working method that combines analogue and numerical modelling. We deformed a sand-silicone multilayer to eventually produce a non-cylindrical thrust-related anticline at the wedge toe, which was our test geological structure at the reservoir scale. We cut 60 serial cross-sections through the central part of the deformed model to analyze fault and fold geometry using dedicated software (3D Move). The cross-sections were also used to reconstruct the 3D geometry of the reference surfaces that compose the mechanical stratigraphy, using the software GoCad. From the 3D model of the experimental anticline, 3D Move was used to calculate the cumulative stress and strain undergone by the deformed reference layers at the end of deformation, and also in incremental steps of fold growth. Based on these model outputs it was also possible to predict the orientation of three main fracture sets (joints and conjugate shear fractures) and their occurrence and density on model surfaces. The next step was upscaling the fracture network to the entire digital model volume, to create DFNs.

  16. Stratigraphy and Tectonics of Southeastern Serenitatis. Ph.D. Thesis

    Science.gov (United States)

    Maxwell, T. A.

    1976-01-01

    Results of investigations of returned Apollo 17 samples and of Apollo 15 and 17 photographs have provided a broad database on which to interpret the southeastern Serenitatis region of the moon. Although many of the pre-Apollo 17 mission interpretations remain valid, detailed mapping of this region and correlation with earth-based and orbital remote-sensing data have resulted in a revision of the local mare stratigraphy.

  17. Forearc Basin Stratigraphy and Interactions With Accretionary Wedge Growth According to the Critical Taper Concept

    Science.gov (United States)

    Noda, Atsushi

    2018-03-01

    Forearc basins are important constituents of sediment traps along subduction zones; the basin stratigraphy records various events that the basin experienced. Although the linkage between basin formation and accretionary wedge growth suggests that mass balance exerts a key control on their evolution, the interaction processes between basin and basement remain poorly understood. This study performed 2-D numerical simulations in which basin stratigraphy was controlled by changes in sediment fluxes with accretionary wedge growth according to the critical taper concept. The resultant stratigraphy depended on the degree of filling (i.e., whether the basin was underfilled or overfilled) and the volume balance between the sediment flux supplied to the basin from the hinterland and the accommodation space in the basin. The trenchward progradation of deposition with onlapping contacts on the trenchside basin floor occurred during the underfilled phase, which formed a wedge-shaped sedimentary unit. In contrast, the landward migration of the depocenter, with the tilting of strata, was characteristic of the overfilled phase. Condensed sections marked stratigraphic boundaries, indicating when sediment supply or accommodation space was limited. The accommodation-limited intervals could have formed toward the end of wedge uplift or when the taper angle decreased, and were possibly associated with the development of submarine canyons as conduits for bypassing sediment from the hinterland. Variations in sediment fluxes and their balance exerted a strong influence on the stratigraphic patterns in forearc basins. Assessing basin stratigraphy could be a key to evaluating how subduction zones evolve through their interactions with changing surface processes.
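The underfilled/overfilled distinction described above is, at its core, a volume balance between sediment supply and the creation of accommodation space. A deliberately minimal zero-dimensional sketch of that bookkeeping (the paper's actual simulations are 2-D and couple wedge growth via the critical taper; all names and rates here are illustrative):

```python
def basin_fill_state(sediment_flux, accommodation_rate, dt, steps):
    """Toy 0-D mass balance: compare cumulative sediment supply with
    cumulative accommodation creation and classify the basin phase.
    Fluxes in m^2/yr per unit strike length; dt in years."""
    fill = 0.0    # sediment volume currently stored in the basin
    space = 0.0   # accommodation space created so far
    history = []
    for _ in range(steps):
        space += accommodation_rate * dt
        fill += sediment_flux * dt
        fill = min(fill, space)  # excess sediment bypasses a full basin
        history.append("overfilled" if fill >= space else "underfilled")
    return history

# Supply outpacing accommodation keeps the basin filled to the brim:
print(basin_fill_state(2.0, 1.0, 1.0, 3))   # all "overfilled"
# Accommodation outpacing supply leaves it starved:
print(basin_fill_state(0.5, 1.0, 1.0, 3))   # all "underfilled"
```

In the paper's framework the accommodation term is not a free parameter but emerges from wedge uplift and taper-angle changes; this sketch only isolates the volume-balance logic.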

  18. The INTIMATE event stratigraphy of the last glacial period

    Science.gov (United States)

    Olander Rasmussen, Sune; Svensson, Anders

    2015-04-01

    The North Atlantic INTIMATE (INtegration of Ice-core, MArine and TErrestrial records) group has previously recommended an Event Stratigraphy approach for the synchronisation of records of the Last Termination using the Greenland ice core records as the regional stratotypes. A key element of these protocols has been the formal definition of numbered Greenland Stadials (GS) and Greenland Interstadials (GI) within the past glacial period as the Greenland expressions of the characteristic Dansgaard-Oeschger events that represent cold and warm phases of the North Atlantic region, respectively. Using a recent synchronization of the NGRIP, GRIP, and GISP2 ice cores that allows the parallel analysis of all three records on a common time scale, we here present an extension of the GS/GI stratigraphic template to the entire glacial period. In addition to the well-known sequence of Dansgaard-Oeschger events that were first defined and numbered in the ice core records more than two decades ago, a number of short-lived climatic oscillations have been identified in the three synchronized records. Some of these events have been observed in other studies, but we here propose a consistent scheme for discriminating and naming all the significant climatic events of the last glacial period that are represented in the Greenland ice cores. In addition to presenting the updated event stratigraphy, we make a series of recommendations on how to refer to these periods in a way that promotes unambiguous comparison and correlation between different proxy records, providing a more secure basis for investigating the dynamics and fundamental causes of these climatic perturbations. The work presented is part of a newly published paper in an INTIMATE special issue of Quaternary Science Reviews: Rasmussen et al., 'A stratigraphic framework for abrupt climatic changes during the Last Glacial period based on three synchronized Greenland ice-core records: refining and extending the INTIMATE event stratigraphy'.

  19. Review of the upper Cenozoic stratigraphy overlying the Columbia River Basalt Group in western Idaho

    International Nuclear Information System (INIS)

    Strowd, W.B.

    1980-12-01

    This report is a synthesis of information currently available on the rocks that stratigraphically overlie the Columbia River Basalt Group in Idaho. The primary objective is to furnish a brief but comprehensive review of the literature available on upper Cenozoic rocks in western Idaho and to discuss their general stratigraphic relationships. This study also reviews the derivation of the present stratigraphy and notes weaknesses in our present understanding of the geology and the stratigraphy. This report was prepared in support of a study to evaluate the feasibility of nuclear waste storage in the Columbia River Basalt Group of the Pasco Basin, Washington

  20. Lower Devonian Brachiopods and Stratigraphy of North Palencia (Cantabrian Mountains, Spain)

    NARCIS (Netherlands)

    Binnekamp, J.G.

    1965-01-01

    A continuous sequence of Devonian sediments is exposed in the northern part of the province of Palencia (NW-Spain), on the southern slope of the Cantabrian Mountains. This study concerns the stratigraphy and paleontology of the Lower Devonian formations. At the base of the sequence a clastic

  1. Stratigraphy of the Harwell boreholes

    International Nuclear Information System (INIS)

    Gallois, R.W.; Worssam, B.C.

    1983-12-01

    Seven boreholes, five of them partially cored, were drilled at the Atomic Energy Research Establishment at Harwell as part of a general investigation to assess the feasibility of storing low- and intermediate-level radioactive waste in underground cavities. Two of the deeper boreholes were almost wholly cored to provide samples for hydrogeological, hydrochemical, mineralogical, geochemical, geotechnical, sedimentological and stratigraphical studies to enable variations in lithology and rock properties to be assessed, both vertically and laterally, and related to their regional geological setting. This report describes the lithologies, main faunal elements and stratigraphy of the Cretaceous, Jurassic, Triassic and Carboniferous sequences proved in the boreholes. More detailed stratigraphical accounts of the late Jurassic and Cretaceous sequences will be prepared when current studies of the faunal assemblages are complete. (author)

  2. Non-invasive NMR stratigraphy of a multi-layered artefact: an ancient detached mural painting.

    Science.gov (United States)

    Di Tullio, Valeria; Capitani, Donatella; Presciutti, Federica; Gentile, Gennaro; Brunetti, Brunetto Giovanni; Proietti, Noemi

    2013-10-01

    NMR stratigraphy was used to investigate in situ, non-destructively and non-invasively, the stratigraphy of hydrogen-rich layers of an ancient Nubian detached mural painting. Because of the detachment procedure, a complex multi-layered artefact was obtained in which, besides the layers of the original mural painting, the materials used during the procedure also became constitutive parts of the artefact. NMR measurements in situ enabled monitoring of the state of conservation of the artefact and planning of minimal representative sampling; solid-state NMR analysis of the samples was then used to validate the results obtained in situ. This analysis enabled chemical characterization of all organic materials. Use of reference compounds and prepared specimens assisted data interpretation.

  3. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Intensive research has been carried out by academics and practitioners on models for bankruptcy prediction and credit risk management. In spite of numerous studies forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards machine learning models (support vector machines, bagging, boosting, and random forests) for predicting bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older, well-known bankruptcy prediction models is quite high. We therefore analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models are re-modelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
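As a rough illustration of the model comparison described above, the following sketch fits a logistic regression and a random forest to a synthetic stand-in for firm-level financial ratios (the dataset, features, and parameters are invented for illustration; the paper's Slovak-company data are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: each row is a company, each feature a financial
# ratio, label 1 = bankrupt within one year (about 20% of the sample).
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {acc:.3f}")
```

On real bankruptcy data one would also report class-sensitive metrics (e.g. recall on the bankrupt class), since plain accuracy is flattering when most firms survive.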

  4. Late Quaternary stratigraphy and depositional history of the Long Island Sound basin

    Science.gov (United States)

    Lewis, Ralph S.; Stone, Janet R.

    1991-01-01

    The stratigraphy of Late Quaternary geologic units beneath Long Island Sound (LIS) is interpreted from 3,500 km of high-resolution, seismic-reflection profiles supplemented by vibracore data. Knowledge gained from onshore regional geologic studies and previous offshore investigations is also incorporated in these interpretations.

  5. [Usefulness of stratigraphy and computerized tomography in the initial staging of osteosarcoma of the extremities. Retrospective study of 217 cases].

    Science.gov (United States)

    Pelotti, P; Ciminari, R; Bacci, G; Avella, M; Briccoli, A

    1988-01-01

    The value of stratigraphy and pulmonary CT in the initial work-up of osteosarcoma of the extremities is assessed with reference to 217 patients seen at the Bone Tumour Centre of the Rizzoli Orthopaedic Institute between May 1983 and May 1986. Stratigraphy revealed lung metastases not identified by standard radiography in 4 patients (1.8%), while CT revealed metastases not identified by either standard X-rays or stratigraphy in a further 6 cases (2.7%). It is concluded that the increase in the percentage of cures (about 30%) reported in the last 10 years in osteosarcoma cases given adjuvant chemotherapy cannot be explained by any difference in initial selection due to the use of these techniques, which were not adopted in the historical series.

  6. Holocene stratigraphy and vegetation history in the Scoresby Sund area, East Greenland

    DEFF Research Database (Denmark)

    Funder, Svend Visby

    1978-01-01

    The Holocene stratigraphy in Scoresby Sund is based on climatic change as reflected by fluctuations in fjord and valley glaciers, immigration and extinction of marine molluscs, and the vegetation history recorded in pollen diagrams from five lakes. The histories are dated by C-14, and indirectly...

  7. Sediment transport processes and their resulting stratigraphy: informing science and society

    Science.gov (United States)

    Nittrouer, J. A.

    2013-12-01

    Over recent decades, there have been numerous scientific advances pertaining to the coupling of sediment transport and hydrodynamics. This research has produced new theory about how sediments accumulating in many unique environments shape the stratigraphic record. Recent studies have taken advantage of novel methods for acquiring observational data, which in turn have been used to advance numerical modeling schemes as well as experimental designs. As an example, consider fluvial deltas: here, hydrodynamics are constantly evolving over space and time. Patterns of sediment deposition and erosion (from dune to delta-lobe scales), resolved using high-resolution 3-D acoustic data, are used as input data to construct models that further show how channel dynamics (e.g., avulsions) and kinematics (e.g., lateral migration) evolve due to sediment and hydrodynamic coupling. This information is used to propose new theories of delta stratigraphy, which are then tested by examining ancient fluvial-delta systems. Finally, research efforts evaluating modern sediment-transport and depositional processes offer significant benefits to society. For example, fluvial deltas are heavily relied upon for societal welfare and yet are among the most dynamic landscapes on Earth's surface. Therefore, research examining the evolution of these landscapes not only advances basic science, but also doubles as an exercise in applied geomorphology.

  8. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  9. Correlation of the Eemian (interglacial) Stage and the deep-sea oxygen-isotope stratigraphy

    International Nuclear Information System (INIS)

    Mangerud, J.; Soenstegaard, E.; Sejrup, H.-P.

    1979-01-01

    A complete interglacial sequence in coastal marine sediments in western Norway is here correlated with the Eemian Stage by means of pollen stratigraphy, and with deep-sea cores by means of marine fossils. The Eemian is correlated with isotope stage 5e. (author)

  10. SAR Tomography for Terrestrial Snow Stratigraphy

    Science.gov (United States)

    Lei, Y.; Xu, X.; Baldi, C.; Bleser, J. W. D.; Yueh, S. H.; Elder, K.

    2017-12-01

    Traditional microwave observations of snowpack comprise brightness temperature and backscatter. A single-baseline configuration and the loss of phase information hinder the retrieval of snow stratigraphy from microwave observations. In this paper, we investigate tomography of polarimetric SAR to measure snow stratigraphy. In the past two years, we have developed at the Jet Propulsion Laboratory a homodyne frequency-modulated continuous-wave (FMCW) radar operating at three Earth-exploration satellite bands within the X-band and Ku-band spectrums (centered at 9.6 GHz, 13.5 GHz, and 17.2 GHz). The transceiver is mounted on a dual-axis planar scanner (60 cm in each direction), which translates the antenna beams across the target area, creating a tomographic baseline in two directions. A dual-antenna architecture was implemented to improve the isolation between the transmitter and receiver; this technique offers a 50 dB improvement in signal-to-noise ratio over conventional single-antenna FMCW radar systems. With the current configuration, the vertical resolution is around 30 cm. The system was deployed on a ground-based tower at the Fraser Experimental Forest (FEF) Headquarters, near Fraser, CO, USA (39.847°N, 105.912°W) from February 1 to April 30, 2017, and ran continuously apart from some gaps for required support. FEF is a 93-km2 research watershed in the heart of the central Rocky Mountains, approximately 80 km west of Denver. During the campaign, in situ measurements of snow depth and other snowpack properties were performed every week for comparison with the remotely sensed data. A network of soil moisture sensors, time-lapse cameras, acoustic depth sensors, a laser depth sensor and meteorological instruments was installed next to the site to collect in situ measurements of snow, weather, and soil conditions.
Preliminary tomographic processing of ground based SAR data of snowpack at X- and Ku- band has revealed the presence of multiple layers within
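The roughly 30 cm vertical resolution quoted above follows from the standard FMCW relation ΔR = c / (2·B·n). The sweep bandwidth is not stated in the abstract, so the 500 MHz used below is an assumed value chosen to reproduce that figure:

```python
def fmcw_range_resolution(bandwidth_hz, refractive_index=1.0):
    """Range resolution of an FMCW radar: c / (2 * B * n).
    In dry snow n is roughly 1.2-1.3, which sharpens the
    in-snow resolution relative to free space."""
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return c / (2.0 * bandwidth_hz * refractive_index)

# An assumed ~500 MHz sweep gives the ~0.30 m free-space
# resolution quoted in the abstract.
print(fmcw_range_resolution(500e6))        # ~0.30 m
print(fmcw_range_resolution(500e6, 1.25))  # finer within the snowpack
```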

  11. Sequence stratigraphy on an early wet Mars

    Science.gov (United States)

    Barker, Donald C.; Bhattacharya, Janok P.

    2018-02-01

    The evolution of Mars as a water-bearing body is of considerable interest for the understanding of its early history and evolution. The principles of terrestrial sequence stratigraphy provide a useful conceptual framework to hypothesize about the stratigraphic history of the planet's northern plains. We present a model based on the hypothesized presence of an early ocean and the accumulation of lowland sediments eroded from highland terrain during the time of the valley networks and later outflow channels. Ancient, global environmental changes, induced by a progressively cooling climate, would have led to a protracted loss of surface and near-surface water from low latitudes and eventual cold-trapping at higher latitudes, resulting in a unique, prolonged, perpetual forced regression within basins and lowland depositional environments. The Messinian Salinity Crisis (MSC) serves as a potential terrestrial analogue of the depositional and environmental consequences of the progressive removal of large standing bodies of water. We suggest that the evolution of similar conditions on Mars would have led to the emplacement of diagnostic sequences of deposits and regional-scale unconformities, consistent with intermittent resurfacing of the northern plains and the progressive loss of an early ocean by the end of the Hesperian era.

  12. Towards the Redefinition of the Global Stratigraphy of Mercury: The Case of Intermediate Plains

    Science.gov (United States)

    Galluzzi, V.; Rothery, D. A.; Massironi, M.; Ferranti, L.; Mercury Mapping Team

    2018-05-01

    Observations based on an average mapping scale of 1:400k provide context for the redefinition of the global stratigraphy of Mercury. Results show that the Intermediate Plains unit should be re-introduced as an official mappable terrain.

  13. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  14. Carboniferous-Permian sedimentology and stratigraphy of the Nordfjorden High and Loppa Spur, Arctic Norway

    DEFF Research Database (Denmark)

    Ahlborn, Morten

    Abstract (shortened) Facies analysis of Late Paleozoic warm-water carbonates was conducted in order to investigate the depositional evolution, cyclicity, internal architecture and sequence stratigraphy of the upper Gipsdalen Group carbonate platform on the Nordfjorden High in central Spitsbergen...

  15. Facies Analysis and Sequence Stratigraphy of Missole Outcrops: N’Kapa Formation of the South-Eastern Edge of Douala Sub-Basin (Cameroon)

    OpenAIRE

    Kwetche , Paul; Ntamak-Nida , Marie Joseph; Nitcheu , Adrien Lamire Djomeni; Etame , Jacques; Owono , François Mvondo; Mbesse , Cecile Olive; Kissaaka , Joseph Bertrand Iboum; Ngon , Gilbert Ngon; Bourquin , Sylvie; Bilong , Paul

    2018-01-01

    Missole facies descriptions and sequence stratigraphy analysis allow a new proposal of depositional environments for the eastern part of the Douala sub-basin. The sediments of the Missole outcrops (N'kapa Formation) correspond to fluvial/tidal channel to shallow shelf deposits, with embayment deposits in some places, within a warm and semi-arid climate. Integrated sedimentological, palynological and mineralogical data document a comprehensive sequence stratigraphy of this part of the Doua...

  16. East Greenland Caledonides: stratigraphy, structure and geochronology: Lower Palaeozoic stratigraphy of the East Greenland Caledonides

    Directory of Open Access Journals (Sweden)

    Smith, M. Paul

    2004-12-01

    The Lower Palaeozoic stratigraphy of the East Greenland Caledonides, from the fjord region of North-East Greenland northwards to Kronprins Christian Land, is reviewed and a number of new lithostratigraphical units are proposed. The Slottet Formation (new) is a Lower Cambrian quartzite unit, containing Skolithos burrows, that is present in the Målebjerg and Eleonore Sø tectonic windows in the nunatak region of North-East Greenland. The unit is the source of the common and often-reported glacial erratic boulders containing Skolithos that are distributed throughout the fjord region. The Målebjerg Formation (new) overlies the Slottet Formation in the tectonic windows, and comprises limestones and dolostones of assumed Cambrian–Ordovician age. The Lower Palaeozoic succession of the fjord region of East Greenland (dominantly limestones and dolostones) is formally placed in the Kong Oscar Fjord Group (new). Amendments are proposed for several existing units in the Kronprins Christian Land and Lambert Land areas, where they occur in autochthonous, parautochthonous and allochthonous settings.

  17. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
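A common way to realise the kind of stochastic simulation model described above, producing samples that respect both a target speed distribution and hour-to-hour correlation, is to drive a Gaussian AR(1) process and map it through a Weibull quantile function. The sketch below follows that assumption; the Weibull and autocorrelation parameters are illustrative, not fitted Goldstone values:

```python
import math
import random

def simulate_wind_speeds(n_hours, weibull_k=2.0, weibull_c=7.0,
                         hourly_autocorr=0.8, seed=42):
    """Generate hourly wind-speed samples with a Weibull marginal
    distribution and AR(1)-style temporal correlation, by driving a
    Gaussian AR(1) process and mapping it through the Weibull
    quantile function. Speeds returned in the units of weibull_c."""
    rng = random.Random(seed)
    rho = hourly_autocorr
    z = rng.gauss(0.0, 1.0)
    speeds = []
    for _ in range(n_hours):
        # AR(1) update keeps the process marginally standard normal.
        z = rho * z + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        # Standard-normal CDF maps z to a correlated uniform variate.
        u = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        u = min(max(u, 1e-12), 1.0 - 1e-12)
        # Weibull inverse CDF: c * (-ln(1 - u))**(1/k)
        speeds.append(weibull_c * (-math.log(1.0 - u)) ** (1.0 / weibull_k))
    return speeds

sample = simulate_wind_speeds(24)
print(min(sample), max(sample))
```

Setting `hourly_autocorr=0.0` recovers the uncorrelated interim model described in the abstract, so both variants fit in one routine.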

  18. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions) to the complex (multidimensional models that are constrained by several types of data and result in more accurate predictions). Although team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  19. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  20. Mapping the productive sands of Lower Goru Formation by using seismic stratigraphy and rock physical studies in Sawan area, southern Pakistan: A case study

    KAUST Repository

    Munir, K.

    2011-02-24

    This study was conducted in the Sawan gas field, located in southern Pakistan. The aim of the study is to map the productive sands of the Lower Goru Formation in the study area. Rock physics parameters (bulk modulus, Poisson's ratio) are analysed after a detailed sequence stratigraphic study. Sequence stratigraphy helps in understanding the depositional model of sand and shale. Agreement has been established between the seismic stratigraphy and the pattern obtained from the rock physics investigations, which further helped in identifying gas-saturated zones in the reservoir. Rheological studies were carried out to map the shear strain occurring in the area by contouring shear strain values throughout the area under consideration. The contour maps give a picture of shear strain over the Lower Goru Formation. The identified productive zones are characterized by sands, high reflection strengths, anomalous rock-physics values and low shear strain.
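The two rock-physics parameters named above follow directly from P-wave velocity, S-wave velocity and density via the standard isotropic-elasticity relations K = ρ(Vp² − (4/3)Vs²) and ν = (Vp² − 2Vs²) / (2(Vp² − Vs²)). A sketch with illustrative input values (not taken from the paper):

```python
def bulk_modulus(vp, vs, rho):
    """Bulk modulus K = rho * (Vp^2 - (4/3) * Vs^2); Pa for SI inputs."""
    return rho * (vp**2 - (4.0 / 3.0) * vs**2)

def poissons_ratio(vp, vs):
    """Poisson's ratio from the Vp/Vs ratio (density cancels out)."""
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

# Illustrative log values for a gas sand (assumed, not from the study):
vp, vs, rho = 3200.0, 1900.0, 2250.0  # m/s, m/s, kg/m^3
print(bulk_modulus(vp, vs, rho) / 1e9, "GPa")  # ~12.2 GPa
print(poissons_ratio(vp, vs))                  # ~0.23
```

Low Poisson's ratio combined with a drop in bulk modulus is the classic rock-physics signature used to flag gas saturation, which is the pattern the study cross-checks against its sequence-stratigraphic picture.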

  1. Genetic stratigraphy and palaeoenvironmental controls on coal distribution in the Witbank Basin Coalfield

    Energy Technology Data Exchange (ETDEWEB)

    LeBlanc, G

    1981-01-01

    Subsurface data from over 1200 boreholes in the Witbank basin coalfield provided information for determining the coalfield stratigraphy and the palaeoenvironmental controls on coal distribution. The inadequacies of the existing coalfield lithostratigraphy are obviated by the erection of a genetic coalfield stratigraphy. A total of ten areally extensive defined marker surfaces resolve the sedimentary succession into nine defined, fundamental, genetically related stratal increments, termed Genetic Increments of Strata (GIS's); these are grouped into four defined Genetic Sequences of Strata (GSS's), termed, from the base upwards, the Witbank, Coalville, Middleburg and Van Dyks Drift GSS's, which are in turn grouped to comprise a single genetic Generation of Strata, termed the Ogies Generation. The coalfield genetic stratigraphy is summarized graphically in a composite stratigraphic column. It is concluded from this study that, with the northward retreat of the late-Palaeozoic Gondwana ice sheet, a series of glacial valleys, partially filled with diamictite, dominated the landscape along the northern edge of the Karoo basin. Consequent outwash sediments constitute the Witbank GSS and accumulated as paraglacial, glaciofluvial and glaciolacustrine deltaic deposits in the primeval Dwyka-Ecca Sea. Upon abandonment, shallow-rooted tundra vegetation proliferated, and multi-channel outwash streams were stabilised, confined and enveloped by areally extensive accumulations of peat that reached up to 24 m in thickness and constituted precursors to the 1 and 2 Seam coals, which attain a combined thickness of up to 12.5 m. Variations in coal petrographic character and ash content are attributed to overbank splaying in the proximity of recognisable syndepositional (in-seam) vegetation-stabilised channels, and to the progressively increased introduction of suspended sediment as a consequence of transgressive inundation prior to burial.

  2. Genetic stratigraphy of Coniacian deltaic deposits of the northwestern part of the Bohemian Cretaceous Basin

    Czech Academy of Sciences Publication Activity Database

    Nádaskay, R.; Uličný, David

    2014-01-01

    Vol. 165, No. 4 (2014), pp. 547-575. ISSN 1860-1804. Institutional support: RVO:67985530. Keywords: genetic stratigraphy * well log * Bohemian Cretaceous Basin. Subject RIV: DB - Geology; Mineralogy. Impact factor: 0.569, year: 2014

  3. Stratigraphy and dissolution of the Rustler Formation

    International Nuclear Information System (INIS)

    Bachman, G.O.

    1985-01-01

    The Rustler Formation is the uppermost evaporite-bearing unit in the Permian Ochoan series in southeastern New Mexico. It rests on the Salado Formation which includes the salt beds where the mined facility for the Waste Isolation Pilot Plant (WIPP) is being constructed. An understanding of the physical stratigraphy of the Rustler Formation is pertinent to studies of the WIPP site because some portions of the Rustler are water-bearing and may provide paths for circulating waters to come into contact with, and dissolve, evaporites within the Ochoan sequence. Knowledge of the processes, magnitude, and history of evaporite dissolution in the vicinity of the WIPP site is important to an evaluation of the integrity of the site. 2 refs., 2 figs

  4. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  5. Stratigraphy of Karaburun Peninsula

    Directory of Open Access Journals (Sweden)

    Burhan Erdoğan

    1990-06-01

    spanning from the Early Triassic to the Early Cretaceous. The Balıklıova formation of Campanian-Maastrichtian age, which consists of carbonate rocks and sandstones of flysch facies, rests unconformably on this sequence. The Karaburun belt, with the stratigraphy outlined above, is surrounded on all sides by a blocky unit called the Bornova melange. This blocky unit, with a highly sheared flysch matrix, was formed in the İzmir-Ankara zone during a Maastrichtian-Danian interval. The boundary relations between the Karaburun belt and the Bornova melange indicate that the Karaburun platform was transported tectonically as a nappe into the İzmir-Ankara zone during its opening. In this study, we have also found that the stratigraphy of the Karaburun belt is completely different from that of the so-called Sakarya continent, and the two cannot be correlated with each other as suggested by earlier workers.

  6. Review of the Upper Jurassic-Lower Cretaceous stratigraphy in Western Cameros basin, Northern Spain

    DEFF Research Database (Denmark)

    Vidal, Maria del Pilar Clemente

    2010-01-01

    The Upper Jurassic-Lower Cretaceous stratigraphy of the Cameros basin has been reviewed. In Western Cameros the stratigraphic sections are condensed but they have a parallel development with the basin depocentre and the same groups have been identified. The Tera Group consists of two formations: ...

  7. The Andatza coarse-grained turbidite system (westernmost Pyrenees: Stratigraphy, sedimentology and structural control

    Directory of Open Access Journals (Sweden)

    A. Bodego

    2017-06-01

    This is a field-based work that describes the stratigraphy and sedimentology of the Andatza Conglomerate Formation. Based on facies analysis, three facies associations of a coarse-grained turbidite system and the related slope have been identified: (1) an inner fan of a turbidite system (or canyon), (2) a low-gradient muddy slope and (3) a high-gradient muddy slope. The spatial distribution of the facies associations and the palaeocurrent analysis allow the interpretation of a depositional model for the Andatza Conglomerates consisting of an L-shaped, coarse-grained turbidite system whose morphology was structurally controlled by synsedimentary basement-involved normal faults. The coarse-grained character of the turbidite system indicates the proximity of the source area, with the presence of a narrow shelf that fed the turbidite canyon from the north.

  8. Stratigraphy and Observations of Nepthys Mons Quadrangle (V54), Venus

    Science.gov (United States)

    Bridges, N. T.

    2001-01-01

    Initial mapping has begun in Venus' Nepthys Mons Quadrangle (V54, 300-330 deg. E, 25-50 deg. S). Major research areas addressed are how the styles of volcanism and tectonism have changed with time, the evolution of shield volcanoes, the evolution of coronae, the characteristics of plains volcanism, and what these observations tell us about the general geologic history of Venus. Reported here is a preliminary general stratigraphy and several intriguing findings. Additional information is contained in the original extended abstract.

  9. Effects of host rock stratigraphy on the formation of ring-faults and the initiation of collapse calderas

    International Nuclear Information System (INIS)

    Kinvig, H S; Geyer, A; Gottsmann, J

    2008-01-01

    Most collapse calderas can be attributed to subsidence of the magma chamber roof along bounding sub-vertical normal faults (ring-faults) after a decompression of the magma chamber, following eruption. Here, we present new numerical models that use a Finite Element Method to investigate the effects of variable crustal stratigraphy (lithology/thickness/order of strata) above a magma chamber, on local stress field distribution and how these in turn compare with existing criteria for ring-fault initiation. Results indicate that the occurrence and relative distribution of mechanically different lithologies may be influential in generating or inhibiting caldera collapse.

  10. Effects of host rock stratigraphy on the formation of ring-faults and the initiation of collapse calderas

    Energy Technology Data Exchange (ETDEWEB)

    Kinvig, H S; Geyer, A; Gottsmann, J [Department of Earth Sciences, University of Bristol, Wills Memorial Building, Queen' s Road, BS8 1RJ, Bristol (United Kingdom)

    2008-10-01

    Most collapse calderas can be attributed to subsidence of the magma chamber roof along bounding sub-vertical normal faults (ring-faults) after a decompression of the magma chamber, following eruption. Here, we present new numerical models that use a Finite Element Method to investigate the effects of variable crustal stratigraphy (lithology/thickness/order of strata) above a magma chamber, on local stress field distribution and how these in turn compare with existing criteria for ring-fault initiation. Results indicate that the occurrence and relative distribution of mechanically different lithologies may be influential in generating or inhibiting caldera collapse.

  11. The sedimentology and stratigraphy of the Early Tertiary, Cusiana field, Colombia

    International Nuclear Information System (INIS)

    Pulham, Andrew J.

    1995-01-01

    The Cusiana field (BP, Ecopetrol, Total and Triton) is located in the Llanos foothills of Eastern Colombia. There are three key reservoirs in the Cusiana field: the late Eocene to Early Oligocene Mirador Formation, the late Paleocene Barco Formation and the Santonian to Late Campanian Guadalupe Formation. The Mirador Formation contains over 50% of the original oil in place. Production for 1995 is planned at 130,000 barrels of oil per day (bopd; daily average), principally from the Mirador Formation with some production from the Guadalupe Formation. The Mirador Formation has therefore been the focus of a detailed reservoir description study aimed at understanding reservoir performance and putting a foundation in place for long-term reservoir management. The Mirador Formation comprises sandy (>60%), high-frequency sequences dominated by the deposits of incised valleys. This paper describes the sedimentology and stratigraphy of the Mirador and the methodology chosen to construct a reservoir model fit for reservoir simulation.

  12. Towards a mechanistic understanding of the linkages between PETM climate modulation and stratigraphy, as discerned from the Piceance Basin, CO, USA

    Science.gov (United States)

    Barefoot, E. A.; Nittrouer, J. A.; Foreman, B.; Moodie, A. J.; Dickens, G. R.

    2017-12-01

    The Paleocene-Eocene Thermal Maximum (PETM) was a period of rapid climatic change when global temperatures increased by 5-8˚C in as little as 5 ka. It has been hypothesized that by drastically enhancing the hydrologic cycle, this temperature change significantly perturbed landscape dynamics over the ensuing 200 ka. Much of the evidence documenting hydrological variability derives from studies of the stratigraphic record, which is interpreted to encode a system-clearing event in fluvial systems worldwide during and after the PETM. For example, in the Piceance Basin of Western Colorado, it is hypothesized that intensification of monsoons due to PETM warming caused an increase in sediment flux to the basin. The resulting stratigraphy records a modulation of the sedimentation rate, where the PETM interval is represented by a laterally extensive sheet sand positioned between units dominated by floodplain muds. The temporal interval, the sediment provenance history, and the tectonic history of the PETM in the Piceance Basin are all well-constrained, leaving climate as the most significant allogenic forcing in the Piceance Basin during the PETM. However, the precise nature of the landscape changes that link climate forcing by the PETM to modulation of the sedimentation rate in this basin remains to be demonstrated. Here, we present a simple stratigraphic numerical model coupled with a conceptual source-to-sink framework to test the impact of a suite of changing upstream boundary conditions on the fluvial system. In the model, climate-related variables force changes in flow characteristics such as sediment transport, slope, and velocity, which determine the resultant floodplain stratigraphy. The model is based on mathematical relations that link bankfull geometry and water discharge, impacting the lateral migration rate of the channel, sediment transport rate, and avulsion frequency, thereby producing a cross-section of basin stratigraphy. In this way, we simulate a

  13. A consistent magnetic polarity stratigraphy of Plio-Pleistocene fluvial sediments from the Heidelberg Basin (Germany)

    Science.gov (United States)

    Scheidt, Stephanie; Hambach, Ulrich; Rolf, Christian

    2014-05-01

    Deep drillings in the Heidelberg Basin provide access to one of the thickest and most complete successions of Quaternary and Upper Pliocene continental sediments in Central Europe [1]. In the absence of any comprehensive chronostratigraphic model, these sediments have so far been classified by lithological and hydrogeological criteria, and the age of this sequence is therefore still controversially discussed ([1], [2]). Despite the fact that fluvial sediments pose a fundamental challenge for the application of magnetic polarity stratigraphy, we performed a thorough study on four drilling cores (from Heidelberg, Ludwigshafen and nearby Viernheim). Here, we present the results of the analyses of these cores, which yield a consistent chronostratigraphic framework. The components of natural remanent magnetisation (NRM) were separated by alternating field and thermal demagnetisation techniques, and the characteristic remanent magnetisations (ChRM) were isolated by principal component analysis [3]. Owing to the coring technique, only the inclination data of the ChRM are used for the determination of the magnetic polarity stratigraphy. Rock magnetic proxies were applied to identify the carriers of the remanent magnetisation. The investigations prove the NRM to be a stable, largely primary magnetisation acquired shortly after deposition (PDRM). The Matuyama-Gauss boundary is clearly defined by a polarity change in each core, as suggested in previous work [4]. These findings are in good agreement with the biostratigraphic definition of the base of the Quaternary ([5], [6], [7]). The Brunhes-Matuyama boundary could be identified in cores Heidelberg UniNord 1 and 2 only. Consequently, the positions of the Jaramillo and Olduvai subchrons can be inferred from the lithostratigraphy and the development of fluvial facies architecture in the Rhine system.
The continuation of the magnetic polarity stratigraphy into the Gilbert chron (Upper Pliocene) allows alternative correlation schemes for the cores

  14. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have changed the trend towards new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from limitations in their predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model's performance, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impacts on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experiment results show that the proposed model has achieved significant results in terms of accuracy, sensitivity, and specificity.
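
    As an illustration of the performance measures mentioned in this record, the following sketch computes accuracy, sensitivity, and specificity from a binary confusion matrix. The labels are invented toy data, not the TBI dataset described above.

```python
# Hypothetical illustration of accuracy, sensitivity and specificity for a
# binary outcome classifier. The labels below are invented toy data.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def performance(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on positive (injured) cases
    specificity = tn / (tn + fp)   # recall on negative cases
    return accuracy, sensitivity, specificity

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
acc, sens, spec = performance(y_true, y_pred)
print(acc, sens, spec)  # 0.8, 0.75, ≈0.83
```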

  15. Fluvial-deltaic sedimentation and stratigraphy of the ferron sandstone

    Science.gov (United States)

    Anderson, P.B.; Chidsey, T.C.; Ryer, T.A.

    1997-01-01

    East-central Utah has world-class outcrops of dominantly fluvial-deltaic strata of Turonian to Coniacian age, deposited in the Cretaceous foreland basin. The Ferron Sandstone Member of the Mancos Shale records the influences of both tidal and wave energy on fluvial-dominated deltas on the western margin of the Cretaceous western interior seaway. Revisions of the stratigraphy of the Ferron Sandstone are proposed. Facies representing a variety of environments of deposition are well exposed, including delta-front, strandline, marginal-marine, and coastal-plain facies. Some of these facies are described in detail for use in petroleum reservoir characterization, including their permeability structure.

  16. Seismic stratigraphy and depositional history of the Büyükçekmece ...

    Indian Academy of Sciences (India)

    Denizhan Vardar

    2018-02-14

    Feb 14, 2018 ... The Marmara Sea is an intra-continental marine basin connected ... The northern shelf areas of the Marmara Sea are ... stratigraphy of this margin have been focused on ... due to shallow gas accumulations.

  17. Insight into collision zone dynamics from topography: numerical modelling results and observations

    Directory of Open Access Journals (Sweden)

    A. D. Bottrill

    2012-11-01

    Full Text Available Dynamic models of subduction and continental collision are used to predict dynamic topography changes on the overriding plate. The modelling results show a distinct evolution of topography on the overriding plate during subduction, continental collision and slab break-off. A prominent topographic feature is a temporary (few Myrs) basin on the overriding plate after initial collision. This "collisional mantle dynamic basin" (CMDB) is caused by slab steepening drawing material away from the base of the overriding plate. Also, during this initial collision phase, surface uplift is predicted on the overriding plate between the suture zone and the CMDB, due to the subduction of buoyant continental material and its isostatic compensation. After slab detachment, redistribution of stresses and underplating of the overriding plate cause the uplift to spread further into the overriding plate. This topographic evolution fits the stratigraphy found on the overriding plate of the Arabia-Eurasia collision zone in Iran and south east Turkey. The sedimentary record from the overriding plate contains Upper Oligocene-Lower Miocene marine carbonates deposited between terrestrial clastic sedimentary rocks, in units such as the Qom Formation and its lateral equivalents. This stratigraphy shows that during the Late Oligocene–Early Miocene the surface of the overriding plate sank below sea level before rising back above sea level, without major compressional deformation recorded in the same area. Our modelled topography changes fit well with this observed uplift and subsidence.

  18. Evaluation of the structure and stratigraphy over Richton Dome, Mississippi

    International Nuclear Information System (INIS)

    Werner, M.L.

    1986-05-01

    The structure and stratigraphy over Richton Salt Dome, Mississippi, have been evaluated from 70 borings that were completed to various depths above the dome. Seven lithologic units have been identified and tentatively correlated with the regional Tertiary stratigraphy. Structure-contour and thickness maps of the units show the effects of dome growth from Eocene through early Pliocene time. Growth of the salt stock from late Oligocene through early Pliocene is estimated to have averaged 0.6 to 2.6 centimeters (0.2 to 1.1 inches) per 1000 years. No dome growth has occurred since the early Pliocene. The late Oligocene to early Pliocene strata over and adjacent to the dome reflect arching over the entire salt stock; some additional arching over individual centers may represent pre-Quaternary differential movement in the salt stock. The lithology and structure of the caprock at the Richton Salt Dome indicate that the caprock probably was completely formed by late Oligocene. In late Oligocene, the caprock was fractured by arching and altered by gypsum veining. Since late Oligocene, there are no indications of significant hydrologic connections through the caprock - that is, there are no indications of dissolution collapse or further anhydrite caprock accumulation. This structural and stratigraphic analysis provides insights on dome growth history, dome geometry, and near-dome hydrostratigraphy that will aid in planning site characterization field activities, including an exploratory shaft, and in the conceptual design of a high-level waste (HLW) repository.

  19. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
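
    The PIT histogram mentioned in this record can be sketched as follows: each forecast's PIT value is the empirical cumulative probability the ensemble assigns to the observation, and a reliable ensemble yields roughly uniform PIT values. The ensembles and observations below are invented toy numbers, not the study's streamflow volumes.

```python
# Minimal PIT (probability integral transform) sketch on invented data.

def pit_value(ensemble, observation):
    """Empirical CDF of the ensemble evaluated at the observation."""
    members = sorted(ensemble)
    below = sum(1 for m in members if m <= observation)
    return below / len(members)

def pit_histogram(forecasts, observations, bins=5):
    """Bin PIT values; roughly flat counts indicate a reliable ensemble."""
    counts = [0] * bins
    for ens, obs in zip(forecasts, observations):
        p = pit_value(ens, obs)
        idx = min(int(p * bins), bins - 1)   # clamp p == 1.0 into last bin
        counts[idx] += 1
    return counts

forecasts = [[10, 12, 14, 16, 18]] * 5       # same toy ensemble each time
observations = [11, 13, 15, 17, 9]           # spread across the ensemble range
print(pit_histogram(forecasts, observations))  # [1, 1, 1, 1, 1]
```

A U-shaped histogram (observations piling up in the outer bins) would signal the under-dispersion discussed above.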

  20. Origin, sequence stratigraphy and depositional environment of an Upper Ordovician (Hirnantian) deglacial black shale, Jordan-Discussion

    Czech Academy of Sciences Publication Activity Database

    Lüning, S.; Loydell, D. K.; Štorch, Petr; Shahin, Y.; Craig, J.

    2006-01-01

    Roč. 230, 3-4 (2006), s. 352-355 ISSN 0031-0182 Institutional research plan: CEZ:AV0Z30130516 Keywords : Silurian * black shale * sequence stratigraphy Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.822, year: 2006

  1. Stratigraphy and correlation of the Manitou Falls formation, the Athabasca Group

    International Nuclear Information System (INIS)

    Iida, Yoshimasa; Ikeda, Koki; Tsuruta, Tadahiko; Yamada, Yasuo; Ito, Hiroaki; Goto, Junichi

    1996-01-01

    The Manitou Falls formation is a thick Proterozoic stratum that spreads widely across the uranium deposit zone in Northern Saskatchewan, Canada (Athabasca basin). In order to study the underground geological structure in detail by drilling, it is necessary to distinguish and correlate strata by dividing them into units as small as possible. The Manitou Falls formation is composed only of sandstone and conglomerate, and it has no easily identified, continuous marker strata such as tuff or coal layers. In the Christie Lake B district, located in the eastern Athabasca basin, the division of strata was carried out by using the changes in the volume ratio of conglomerate layers, the maximum pebble size and natural radioactivity logging, based on careful correlation among boreholes. As a result, this formation was divided into nine stratigraphic units. As one method of identifying each unit, a comparison of the power spectra of natural radioactivity logging data was attempted. By this means, it was found that features in the periodicity of stratum accumulation are useful for identifying strata. The outline of the Manitou Falls formation, the location of the investigated district, the basic data for the stratigraphic division, the stratigraphic division in the Christie Lake B district and the results are reported. (K.I.)

  2. Geological Interpretation of the Structure and Stratigraphy of the A/M Area, Savannah River Site, South Carolina

    Energy Technology Data Exchange (ETDEWEB)

    Wyatt, D. [Westinghouse Savannah River Company, AIKEN, SC (United States); Aadland, R.K.; Cumbest, R.J.; Stephenson, D.E.; Syms, F.H.

    1997-12-01

    The geological interpretation of the structure and stratigraphy of the A/M Area was undertaken in order to evaluate the effects of deeper Cretaceous aged geological strata and structure on shallower Tertiary horizons.

  3. Geological Interpretation of the Structure and Stratigraphy of the A/M Area, Savannah River Site, South Carolina

    International Nuclear Information System (INIS)

    Wyatt, D.; Aadland, R.K.; Cumbest, R.J.; Stephenson, D.E.; Syms, F.H.

    1997-12-01

    The geological interpretation of the structure and stratigraphy of the A/M Area was undertaken in order to evaluate the effects of deeper Cretaceous aged geological strata and structure on shallower Tertiary horizons

  4. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure, eliminating some of the predictors.
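
    The leave-one-out selection step described in this record can be sketched as below, with plain least-squares fits standing in for the paper's LASSO variants and invented toy data in place of the race walkers' training loads: the model whose held-out predictions err least is preferred, and here the variant with a quadratic term wins by construction.

```python
import numpy as np

# Leave-one-out cross-validation for model selection (toy sketch; plain
# least squares stands in for the paper's penalized LASSO regressions).

def loo_error(X, y):
    """Mean squared leave-one-out prediction error of a least-squares fit."""
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i                      # hold out sample i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append((X[i] @ beta - y[i]) ** 2)
    return float(np.mean(errs))

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=30)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(0, 0.05, size=30)  # quadratic truth

X_lin = np.column_stack([np.ones_like(x), x])           # linear model
X_quad = np.column_stack([np.ones_like(x), x, x**2])    # adds quadratic term

print(loo_error(X_quad, y) < loo_error(X_lin, y))  # quadratic model wins here
```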

  5. Upper Campanian–Maastrichtian nannofossil biostratigraphy and high-resolution carbon-isotope stratigraphy of the Danish Basin

    DEFF Research Database (Denmark)

    Thibault, Nicolas Rudolph; Harlou, Rikke; Schovsbo, Niels

    2012-01-01

    High-resolution carbon isotope stratigraphy of the upper Campanian – Maastrichtian is recorded in the Boreal Realm from a total of 1968 bulk chalk samples of the Stevns-1 core, eastern Denmark. Isotopic trends are calibrated by calcareous nannofossil bio-events and are correlated with a lower...

  6. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
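
    Two of the performance measures compared in this record, the Brier score and the concordance (c) index, can be sketched as follows on invented toy probabilities and outcomes.

```python
# Toy sketch of two predictive-performance measures on invented data.

def brier_score(probs, outcomes):
    """Mean squared difference between predicted probability and 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def c_index(probs, outcomes):
    """Fraction of event/non-event pairs ranked correctly (ties count 0.5)."""
    pairs = concordant = 0
    for i in range(len(probs)):
        for j in range(len(probs)):
            if outcomes[i] == 1 and outcomes[j] == 0:
                pairs += 1
                if probs[i] > probs[j]:
                    concordant += 1
                elif probs[i] == probs[j]:
                    concordant += 0.5
    return concordant / pairs

probs = [0.9, 0.8, 0.3, 0.2]    # invented predicted probabilities
outcomes = [1, 0, 1, 0]         # invented observed outcomes
print(round(brier_score(probs, outcomes), 3))  # 0.295 (lower is better)
print(c_index(probs, outcomes))                # 0.75 (1.0 is perfect)
```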

  7. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
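
    The model-free reward prediction error described in this record is commonly formalized as a delta-rule update, sketched here with an invented toy reward sequence.

```python
# Delta-rule sketch of a model-free reward prediction error: the value
# estimate moves toward received reward by a learning-rate fraction of the
# prediction error. The reward sequence below is invented toy data.

def update(value, reward, alpha=0.2):
    """One model-free learning step; returns (new_value, prediction_error)."""
    delta = reward - value          # reward prediction error
    return value + alpha * delta, delta

value = 0.0
errors = []
for reward in [1, 1, 1, 0, 1]:      # toy reward sequence
    value, delta = update(value, reward)
    errors.append(delta)

print(round(value, 3))      # 0.512
print(round(errors[3], 3))  # -0.488 (negative error after omitted reward)
```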

  8. Punctuated Sediment Discharge during Early Pliocene Birth of the Colorado River: Evidence from Regional Stratigraphy, Sedimentology, and Paleontology

    Science.gov (United States)

    Dorsey, Rebecca J.; O'Connell, Brennan; McDougall, Kristin; Homan, Mindy B.

    2018-01-01

    The Colorado River in the southwestern U.S. provides an excellent natural laboratory for studying the origins of a continent-scale river system, because deposits that formed prior to and during river initiation are well exposed in the lower river valley and nearby basinal sink. This paper presents a synthesis of regional stratigraphy, sedimentology, and micropaleontology from the southern Bouse Formation and similar-age deposits in the western Salton Trough, which we use to interpret processes that controlled the birth and early evolution of the Colorado River. The southern Bouse Formation is divided into three laterally persistent members: basal carbonate, siliciclastic, and upper bioclastic members. Basal carbonate accumulated in a tide-dominated marine embayment during a rise of relative sea level between 6.3 and 5.4 Ma, prior to arrival of the Colorado River. The transition to green claystone records initial rapid influx of river water and its distal clay wash load into the subtidal marine embayment at 5.4-5.3 Ma. This was followed by rapid southward progradation of the Colorado River delta, establishment of the earliest through-flowing river, and deposition of river-derived turbidites in the western Salton Trough (Wind Caves paleocanyon) between 5.3 and 5.1 Ma. Early delta progradation was followed by regional shut-down of river sand output between 5.1 and 4.8 Ma that resulted in deposition of marine clay in the Salton Trough, retreat of the delta, and re-flooding of the lower river valley by shallow marine water that deposited the Bouse upper bioclastic member. Resumption of sediment discharge at 4.8 Ma drove massive progradation of fluvial-deltaic deposits back down the river valley into the northern Gulf and Salton Trough. These results provide evidence for a discontinuous, start-stop-start history of sand output during initiation of the Colorado River that is not predicted by existing models for this system. 
The underlying controls on punctuated sediment

  9. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics under different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
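
    The core of the adaptive local-model approach described in this record can be sketched as follows: delay-embed the series into a phase space, find the nearest past neighbour of the current state, and predict that neighbour's successor. The toy sinusoid below stands in for an observed surge series, and the embedding parameters are illustrative choices.

```python
import math

# Nearest-neighbour local prediction in a delay-embedded phase space
# (toy sketch; a sinusoid stands in for storm-surge observations).

def embed(series, dim, tau):
    """Delay-embed a scalar series into dim-dimensional state vectors."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def predict_next(series, dim=3, tau=1):
    """Predict the next value from the nearest past neighbour's successor."""
    states = embed(series, dim, tau)
    current = states[-1]
    # search past states (whose successor is known) for the closest one
    best = min(range(len(states) - 1),
               key=lambda i: math.dist(states[i], current))
    return series[best + (dim - 1) * tau + 1]

series = [math.sin(0.3 * t) for t in range(200)]
pred = predict_next(series)
print(abs(pred - math.sin(0.3 * 200)) < 0.05)  # close to the true next value
```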

  10. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  11. Stratigraphy of the late Cenozoic sediments beneath the 216-A Crib Facilities

    International Nuclear Information System (INIS)

    Fecht, K.R.; Last, G.V.; Marratt, M.C.

    1979-02-01

    The stratigraphy of the late Cenozoic sediments beneath the 216-A Crib Facilities is presented as lithofacies cross sections and is based on textural variations of the sedimentary sequence lying above the basalt bedrock. The primary source of data in this study is geologic information obtained from well drilling operations and geophysical logging. Stratigraphic interpretations are based primarily on textural analysis and visual examination of sediment samples, supplemented by drillers' logs and geophysical logs.

  12. Pyloniid stratigraphy - A new tool to date tropical radiolarian ooze from the central tropical Indian Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Gupta, S.M.

    stratigraphy. The multi-taper (MTM) spectral analysis of Pyloniids for the last 485 ka in a core (AAS 2/3) reveals significant climatic cycles at the eccentricity (100 ka), tilt (41 ka) and precession (23 ka) bands. Cross-spectral analyses suggest coherent...
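
    The cycle-detection idea behind the spectral analysis described in this record can be sketched with a plain periodogram (a simple stand-in for the multi-taper method) applied to a synthetic 100 ka cycle; the Pyloniid data themselves are not reproduced here.

```python
import cmath, math

# Periodogram sketch: the dominant DFT bin of an evenly sampled record
# identifies a Milankovitch-band periodicity (synthetic data, assumed 5 ka
# sample spacing; a plain DFT stands in for the multi-taper method).

def periodogram(series):
    """Power at each DFT index k (frequency k/N cycles per sample)."""
    n = len(series)
    powers = {}
    for k in range(1, n // 2 + 1):
        coeff = sum(series[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        powers[k] = abs(coeff) ** 2
    return powers

dt = 5.0                                       # sample spacing in ka (assumed)
t = [i * dt for i in range(100)]               # a 500 ka synthetic record
series = [math.cos(2 * math.pi * ti / 100.0) for ti in t]  # pure 100 ka cycle

powers = periodogram(series)
k_peak = max(powers, key=powers.get)
period = len(series) * dt / k_peak             # DFT index -> period in ka
print(period)  # 100.0
```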

  13. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the times and slips of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
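
    The competing forecast rules discussed in this record can be sketched as follows, with invented event times and slips: the fixed-recurrence model projects the mean inter-event time forward, while the time-predictable model waits in proportion to the preceding slip.

```python
# Toy comparison of two earthquake forecast rules on invented data.

def fixed_recurrence_forecast(event_times):
    """Next event time = last event + mean observed inter-event time."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    return event_times[-1] + sum(gaps) / len(gaps)

def time_predictable_forecast(event_times, slips, slip_rate):
    """Next event time = last event + (last slip / long-term slip rate)."""
    return event_times[-1] + slips[-1] / slip_rate

times = [0.0, 2.1, 3.9, 6.0, 8.1]   # event times in years (invented)
slips = [2.0, 2.2, 1.8, 2.0, 2.1]   # coseismic slips in cm (invented)
slip_rate = 1.0                      # loading rate in cm/year (invented)

print(round(fixed_recurrence_forecast(times), 3))                    # 10.125
print(round(time_predictable_forecast(times, slips, slip_rate), 3))  # 10.2
```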

  14. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  15. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    It is extremely important to predict logistics requirements in a scientific and rational way. In recent years, however, prediction methods have improved little, and the traditional statistical approach suffers from low precision and poor interpretability: it can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, combining theories from spatial economics, industrial economics and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that produce large volumes of cargo and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, extending logistics requirements prediction from purely statistical principles into the new territory of spatial and regional economics.

  16. High-resolution sequence stratigraphy and its controls on uranium mineralization. Taking middle Jurassic Qingtujing formation in Chaoshui basin as an example

    International Nuclear Information System (INIS)

    Yao Chunling; Guo Qingyin; Chen Zuyi; Liu Hongxu

    2006-01-01

    By applying high-resolution sequence stratigraphy, the different sub-divisions of base-level cycles of the Qingtujing Formation, Middle Jurassic in the Chaoshui Basin, are analyzed in detail. 23 short, 3 middle and 1 long base-level cycles are recognized. On this basis, the corresponding frameworks of high-resolution sequence stratigraphy have been established in the northern and southern sub-basins respectively, and the detailed sedimentary facies of MSC 1-3 and the spatial distribution of the Qingtujing Formation are discussed. It is pointed out that MSC 2 is the most favorable layer for the localization of sandstone-type uranium deposits. (authors)

  17. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model in this article based on neural networks and a fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which builds on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models that occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.
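    The rule-based fuzzy inference idea can be sketched in miniature. This is not the NFIS-WPM architecture itself (which couples a neural network to the rule base); it is a minimal Mamdani-style fragment with invented membership functions and consequents, showing how fuzzy memberships are combined and defuzzified into a crisp precipitation value:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def predict_precip(humidity):
    # fuzzy sets over relative humidity (%); breakpoints are illustrative
    mu = {"dry": tri(humidity, 0, 20, 50),
          "moderate": tri(humidity, 30, 55, 80),
          "humid": tri(humidity, 60, 85, 101)}
    # rule consequents: representative precipitation (mm/day), hypothetical
    out = {"dry": 0.0, "moderate": 3.0, "humid": 12.0}
    total = sum(mu.values())
    if total == 0:
        return 0.0
    # weighted-centroid defuzzification of the fired rules
    return sum(mu[k] * out[k] for k in mu) / total
```

    A real system would learn the rule base and set shapes from data, which is the part the neural component of NFIS-WPM supplies.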

  18. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
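    One generic way to attach uncertainty to an SDM prediction, as the review advocates, is resampling. The sketch below (synthetic presence/absence data and a crude bin-occupancy "model", both invented) wraps a bootstrap percentile interval around a predicted occupancy, so the prediction is reported with a confidence band rather than false precision:

```python
import random

random.seed(1)
# synthetic presence/absence along a temperature gradient; the species
# occupies 12-22 degC with probability 0.8 (all values invented)
temps = [random.uniform(0, 30) for _ in range(500)]
sites = [(t, 1 if 12 < t < 22 and random.random() < 0.8 else 0) for t in temps]

def occupancy(data, lo, hi):
    """Crude 'model': occupied fraction of sites within a climate window."""
    inside = [p for t, p in data if lo <= t <= hi]
    return sum(inside) / len(inside) if inside else 0.0

point = occupancy(sites, 14, 20)
boot = sorted(
    occupancy([random.choice(sites) for _ in sites], 14, 20)
    for _ in range(1000)
)
ci = (boot[25], boot[975])  # 95% percentile bootstrap interval
```

    The same resampling wrapper applies unchanged around any of the three SDM classes the review distinguishes, though it captures only sampling uncertainty, not structural uncertainty.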

  19. Validating predictions made by a thermo-mechanical model of melt segregation in sub-volcanic systems

    Science.gov (United States)

    Roele, Katarina; Jackson, Matthew; Morgan, Joanna

    2014-05-01

    A quantitative understanding of the spatial and temporal evolution of melt distribution in the crust is crucial in providing insights into the development of sub-volcanic crustal stratigraphy and composition. This work aims to relate numerical models that describe the base of volcanic systems to geophysical observations. Recent modelling has shown that the repetitive emplacement of mantle-derived basaltic sills at the base of the lower crust acts as a heat source for anatectic melt generation, buoyancy-driven melt segregation and mobilisation. These processes form the lowermost architecture of complex sub-volcanic networks as upward-migrating melt produces high melt fraction layers. These 'porosity waves' are separated by zones with high compaction rates and have distinctive polybaric chemical signatures that suggest mixed crust and mantle origins. A thermo-mechanical model produced by Solano et al. in 2012 has been used to predict the temperatures and melt fractions of successive high porosity layers within the crust. This model was used as it accounts for the dynamic evolution of melt during segregation and migration through the crust, a significant process that has been neglected in previous models. The results were used to input starting compositions for each of the layers into the rhyolite-MELTS thermodynamic simulation. MELTS then determined the approximate bulk composition of the layers once they had cooled and solidified. The mean seismic wave velocities of the polymineralic layers were then calculated using the relevant Voigt-Reuss-Hill mixture rules, whilst accounting for the pressure and temperature dependence of seismic wave velocity. The predicted results were then compared with real examples of reflectivity for areas including the UK, where lower crustal layering is observed. A comparison between the impedance contrasts at compositional boundaries is presented as it confirms the extent to which modelling is able to make predictions that are
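    The Voigt-Reuss-Hill step can be made concrete. The sketch below (mineral fractions, moduli and densities are illustrative textbook-order values, not the study's inputs) averages bulk and shear moduli with the VRH rule and converts them to P- and S-wave velocities:

```python
import math

def vrh(fractions, moduli):
    """Voigt-Reuss-Hill average of an elastic modulus for an aggregate."""
    voigt = sum(f * m for f, m in zip(fractions, moduli))        # iso-strain bound
    reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))  # iso-stress bound
    return 0.5 * (voigt + reuss)

# illustrative two-mineral layer (moduli in GPa, densities in kg/m^3;
# textbook-order values, not the study's data)
f = [0.6, 0.4]
K = vrh(f, [37.0, 75.0])        # bulk moduli of the two phases
G = vrh(f, [44.0, 33.0])        # shear moduli of the two phases
rho = 0.6 * 2650 + 0.4 * 2700   # volume-weighted density

vp = math.sqrt((K + 4.0 * G / 3.0) * 1e9 / rho)  # P-wave velocity (m/s)
vs = math.sqrt(G * 1e9 / rho)                    # S-wave velocity (m/s)
```

    Impedance contrasts between adjacent layers then follow as differences in rho * vp, which is what the comparison with observed reflectivity rests on.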

  20. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  1. Paleozoic stratigraphy of two areas in southwestern Indiana

    International Nuclear Information System (INIS)

    Droste, J.B.

    1976-09-01

    Two areas recommended for evaluation as solid waste disposal sites lie along the strike of Paleozoic rocks in southwestern Indiana. Thin Pennsylvanian rocks and rocks of the upper Mississippian are at the bedrock surface in maturely dissected uplands in both areas. The gross subsurface stratigraphy beneath both areas is the same, but facies and thickness variations in some of the subsurface Paleozoic units account for some minor differences between the areas. Thick middle Mississippian carbonates grade downward into clastics of lower Mississippian (Borden Group) and upper Devonian (New Albany Shale) rocks. Middle Devonian and Silurian rocks are dominated by carbonate lithologies. Upper Ordovician rocks (Maquoketa Group) overlie carbonates of middle Ordovician age. Thick siltstone and shale of the Borden Group-New Albany Shale zone and Maquoketa Group rocks should be suitable for repository development

  2. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
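    The bivariate correlation used as the skill metric here is a standard formula over the two RMM indices. The sketch below (synthetic RMM series and a toy linear error-growth "forecast", both invented) computes it and finds the lead time at which skill drops below the 0.5 threshold:

```python
import math, random

def bivariate_corr(obs, fcst):
    """Standard MJO skill metric over the two RMM indices:
    COR = sum(a1*b1 + a2*b2) / sqrt(sum(a1^2 + a2^2) * sum(b1^2 + b2^2))."""
    num = sum(a1 * b1 + a2 * b2 for (a1, a2), (b1, b2) in zip(obs, fcst))
    den = math.sqrt(sum(a1 ** 2 + a2 ** 2 for a1, a2 in obs) *
                    sum(b1 ** 2 + b2 ** 2 for b1, b2 in fcst))
    return num / den

random.seed(2)
obs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]

def forecast(lead, growth=0.15):
    # toy forecast whose error grows linearly with lead time (invented)
    return [(a1 + random.gauss(0, growth * lead),
             a2 + random.gauss(0, growth * lead)) for a1, a2 in obs]

# prediction skill: first lead (days) at which COR drops below 0.5
skill = next(l for l in range(1, 60) if bivariate_corr(obs, forecast(l)) < 0.5)
```

    The S2S models' 12-36 day range quoted in the abstract is exactly this kind of crossing point, computed per model against reanalysis.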

  3. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  4. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  5. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  6. Strontium isotope stratigraphy of the Pelotas Basin

    Energy Technology Data Exchange (ETDEWEB)

    Zerfass, Geise de Santana dos Anjos, E-mail: geise.zerfass@petrobras.com.br [Petroleo Brasileiro S.A. (PETROBRAS/CENPES/PDGEO/BPA), Rio de Janeiro, RJ (Brazil). Centro de Pesquisas e Desenvolvimento Leopoldo Americo Miguez de Mello; Chemale Junior, Farid, E-mail: fchemale@unb.br [Universidade de Brasilia (UnB), DF (Brazil). Instituto de Geociencias; Moura, Candido Augusto Veloso, E-mail: candido@ufpa.br [Universidade Federal do Para (UFPA), Belem, PA (Brazil). Centro de Geociencias. Dept. de Geoquimica e Petrologia; Costa, Karen Badaraco, E-mail: karen.costa@usp.br [Instituto Oceanografico, Sao Paulo, SP (Brazil); Kawashita, Koji, E-mail: koji@usp.br [Unversidade de Sao Paulo (USP), SP (Brazil). Centro de Pesquisas Geocronologicas

    2014-07-01

    Strontium isotope data were obtained from foraminifera shells of the Pelotas Basin Tertiary deposits to facilitate the refinement of the chronostratigraphic framework of this section. This represents the first approach to the acquisition of numerical ages for these strata. Strontium isotope stratigraphy allowed the identification of eight depositional hiatuses in the Eocene-Pliocene section, here classified as disconformities and a condensed section. The recognition of depositional gaps based on confident age assignments represents an important advance considering the remarkably low chronostratigraphic resolution in the Cenozoic section of the Pelotas Basin. The hiatuses recognized here match hiatuses based on biostratigraphic data, as well as global events. Furthermore, a substantial increase in the sedimentation rate of the upper Miocene section was identified. Paleotemperature and productivity trends were identified based on oxygen and carbon isotope data from the Oligocene-Miocene section; these are coherent with worldwide events, indicating the environmental conditions during sedimentation. (author)

  7. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies on prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent System Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays, and that smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
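    The simple averaging baseline the paper finds so competitive is easy to state. The sketch below (synthetic 15-min load profiles with an invented shape and noise level) predicts each slot as the mean of that slot over previous days and scores the result with MAPE:

```python
import math, random, statistics

random.seed(3)
SLOTS = 96  # 15-min intervals per day

def daily_profile():
    # synthetic load: flat overnight base plus a midday peak (shape invented)
    return [5 + 10 * math.exp(-((s - 40) ** 2) / 200) + random.gauss(0, 0.5)
            for s in range(SLOTS)]

history = [daily_profile() for _ in range(10)]
actual = daily_profile()

# averaging baseline: predict each slot as its mean over the previous days
pred = [statistics.mean(day[s] for day in history) for s in range(SLOTS)]
mape = statistics.mean(abs(p - a) / a for p, a in zip(pred, actual))
```

    The short history requirement the paper reports shows up here too: the baseline needs only the last few days of slot-aligned observations.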

  8. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
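    The ensemble idea, combining a whole-genome predictor with a summary-statistic risk score, can be illustrated with a toy meta-model. In the sketch below everything is synthetic, and a crude equal-weight combination stands in for the fitted meta-model of the paper; it shows why combining predictors that capture different parts of the signal beats either alone:

```python
import random, statistics

random.seed(6)
n = 1000
# latent trait built from two independent components (toy stand-ins for the
# signal captured by a whole-genome predictor and by a GWAMA risk score)
g1 = [random.gauss(0, 1) for _ in range(n)]
g2 = [random.gauss(0, 1) for _ in range(n)]
y = [a + b + random.gauss(0, 1) for a, b in zip(g1, g2)]

# base predictors: each recovers only its own component, with estimation error
p1 = [a + random.gauss(0, 0.5) for a in g1]
p2 = [b + random.gauss(0, 0.5) for b in g2]

def mse(pred):
    return statistics.mean((p - t) ** 2 for p, t in zip(pred, y))

# meta-model: equal-weight combination (the paper fits the weights instead)
meta = [a + b for a, b in zip(p1, p2)]
```

    Fitting the combination weights on held-out data, as the paper does, generalizes this to base predictors of unequal quality.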

  9. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
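    Of the weighting techniques listed, AHP is the most involved. The sketch below approximates AHP weights as the normalized principal eigenvector of a pairwise comparison matrix via power iteration; the three-criterion matrix (slope vs land use vs lithology) is invented for illustration and covers only three of the paper's nine parameters:

```python
def ahp_weights(M, iters=100):
    """AHP criterion weights: the normalized principal eigenvector of the
    pairwise comparison matrix, approximated by power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# invented Saaty-scale judgments: entry M[i][j] is how much more important
# criterion i is than criterion j (3 = moderately more important)
M = [[1.0,     3.0,     5.0],
     [1 / 3.0, 1.0,     2.0],
     [1 / 5.0, 1 / 2.0, 1.0]]
w = ahp_weights(M)
```

    The resulting weights then multiply the parameter rasters before summing into a hazard score, which is where the four models' parameter combinations differ.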

  10. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
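    The core object in these studies, a mental model of emotion transitions, corresponds to an empirical first-order transition matrix. The sketch below (toy emotion sequence, invented) estimates transition probabilities from experience-sampling-style data and predicts the most likely next emotion:

```python
from collections import Counter, defaultdict

def transition_model(seq):
    """Empirical first-order transition probabilities between states."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1
    return {s: {t: c / sum(cnt.values()) for t, c in cnt.items()}
            for s, cnt in counts.items()}

# toy experience-sampling sequence (invented)
seq = ["calm", "happy", "calm", "happy", "happy", "calm", "anxious",
       "calm", "happy", "sad", "calm", "happy"]
model = transition_model(seq)
# predicted next emotion for someone currently observed to be calm
pred = max(model["calm"], key=model["calm"].get)
```

    The studies compare matrices like `model` estimated from real experience-sampling data against participants' rated transition likelihoods.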

  11. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  12. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models.
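    The clustering-plus-prediction idea can be illustrated with a simplified cousin of the paper's model: a two-component Poisson mixture without covariates, fit by EM, with BIC used for model choice as in the abstract. All data below are synthetic (counts drawn as binomial approximations to Poisson-like rates):

```python
import math, random

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def fit_mixture(data, iters=200):
    """EM for a two-component Poisson mixture (no covariates, unlike the
    paper's regression setting)."""
    lam = [min(data) + 1.0, max(data) + 1.0]
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 0 for each count
        r = [pi * poisson_pmf(k, lam[0]) /
             (pi * poisson_pmf(k, lam[0]) + (1 - pi) * poisson_pmf(k, lam[1]))
             for k in data]
        # M-step: update component rates and mixing weight
        n0 = sum(r)
        lam[0] = sum(ri * k for ri, k in zip(r, data)) / n0
        lam[1] = sum((1 - ri) * k for ri, k in zip(r, data)) / (len(data) - n0)
        pi = n0 / len(data)
    return pi, lam

def mixture_bic(data, pi, lam):
    ll = sum(math.log(pi * poisson_pmf(k, lam[0]) +
                      (1 - pi) * poisson_pmf(k, lam[1])) for k in data)
    return 3 * math.log(len(data)) - 2 * ll  # 3 free parameters

random.seed(4)
# synthetic event counts: a low-risk group (mean ~2) and a high-risk group (mean ~10)
data = ([sum(random.random() < 0.1 for _ in range(20)) for _ in range(150)] +
        [sum(random.random() < 0.5 for _ in range(20)) for _ in range(150)])

pi, lam = fit_mixture(data)
lam_hat = sum(data) / len(data)  # single-Poisson fit for comparison
bic_single = math.log(len(data)) - 2 * sum(math.log(poisson_pmf(k, lam_hat))
                                           for k in data)
bic_mix = mixture_bic(data, pi, lam)
```

    On this bimodal data the mixture's lower BIC reproduces the model-selection argument of the abstract; the paper's concomitant-variable version additionally lets covariates drive both the rates and the mixing weight.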

  13. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611

  14. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
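    The Markov chain approach the paper favors for limited data is compact to state. The sketch below (invented transition probabilities; in practice they are calibrated from condition survey records) propagates a distribution over four condition states and reports the expected state after ten survey periods:

```python
def step(dist, P):
    """One survey period: push the condition distribution through P."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# states 1 (best) .. 4 (worst); transition probabilities are invented here,
# in practice they are calibrated from visual inspection records
P = [[0.80, 0.20, 0.00, 0.00],
     [0.00, 0.75, 0.25, 0.00],
     [0.00, 0.00, 0.70, 0.30],
     [0.00, 0.00, 0.00, 1.00]]  # worst state is absorbing

dist = [1.0, 0.0, 0.0, 0.0]  # all sections start in the best state
for _ in range(10):
    dist = step(dist, P)
expected_state = sum((i + 1) * p for i, p in enumerate(dist))
```

    This also makes the paper's caveat visible: the states are visual-inspection categories, so nothing in the chain ties back to physical faulting parameters.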

  15. Stratigraphy of the south polar region of Ganymede

    Science.gov (United States)

    Dehon, R. A.

    1987-01-01

    A preliminary assessment is made of the stratigraphy and geology in the south polar region of the Jovian satellite Ganymede. Geologic mapping is based on inspection of Voyager images and compilation on an airbrush base map at a scale of 1:5M. Illumination and resolution vary greatly in the region, and approximately half of the quadrangle is beyond the terminator. Low-angle illumination over a large part of the area precludes distinction of some units by albedo characteristics. Several types of grooved terrain and groove-related terrain occur in the southern polar region. Grooves typically occur in straight to curvilinear sets or lanes. Bright lanes and grooved lanes intersect at high angles, outlining polygons of dark cratered terrain. Groove sets exhibit a range of ages as shown by superposition or truncation and by crater superposition ages.

  16. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend
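    The unreachable-setpoint situation can be reproduced in a toy scalar example. This is not the paper's controller or its stability analysis; it is a brute-force finite-horizon MPC with a bounded input (all numbers invented), showing the closed loop settling at the best reachable steady state rather than the setpoint:

```python
import itertools

A, B = 0.5, 1.0                          # toy scalar plant: x+ = A*x + B*u
U = [-1.0 + 0.2 * k for k in range(11)]  # admissible inputs, |u| <= 1
SP = 5.0  # setpoint; the largest reachable steady state is B*1/(1 - A) = 2

def mpc_step(x, horizon=3):
    """Brute-force finite-horizon MPC: return the first input of the
    admissible input sequence minimizing summed squared tracking error."""
    best_u, best_cost = None, float("inf")
    for seq in itertools.product(U, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi = A * xi + B * u
            cost += (xi - SP) ** 2
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

x = 0.0
for _ in range(15):  # closed loop: apply the first input, re-solve each step
    x = A * x + B * mpc_step(x)
```

    The optimizer saturates at u = 1 and the state converges to 2.0, the closest admissible steady state to the setpoint; the paper's contribution is a controller and stability theory that handle exactly this mismatch in a principled way.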

  17. New observations on the stratigraphy and radiocarbon dates at the Cross Creek site, Opito, Coromandel Peninsula

    International Nuclear Information System (INIS)

    Furey, L.; Petchey, F.; Sewell, B.; Green, R.

    2008-01-01

    This paper re-examines stratigraphy and radiocarbon dates at Cross Creek in Sarah's Gully. Three new radiocarbon dates are presented for Layer 9, the earliest, and previously undated, occupation. This investigation is part of a programme of archaeological work being carried out on the Coromandel Peninsula. (author). 51 refs., 4 figs., 3 tabs

  18. Stratigraphy and uranium deposits, Lisbon Valley district, San Juan County, Utah

    International Nuclear Information System (INIS)

    Huber, G.C.

    1980-01-01

    Uranium occurrences are scattered throughout southeastern Utah in the lower sandstones of the Triassic Chinle Formation. The Lisbon Valley district, however, is the only area with uranium deposits of substantial size. The stratigraphy of the Lisbon Valley district was investigated to determine the nature of the relationship between the mineralized areas and the lower Chinle sandstones. The geochemistry of the Lisbon Valley uranium deposits indicates a possible district-wide zoning. Interpretation of the elemental zoning associated with individual ore bodies suggests that humates overtaken by a geochemical oxidation-reduction interface may have led to formation of the uranium deposits. Refs

  19. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
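    The bias-plus-variance decomposition described above can be illustrated with a toy computation; the variable names and numbers below are ours, not the paper's.

```python
# Illustrative computation of the two prediction-error criteria: MSEP_fixed
# is the mean squared error of one fixed model against observations;
# MSEP_uncertain adds a model-variance term computed over an ensemble of
# model variants run on the same prediction situations.

def msep_fixed(obs, pred):
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def msep_uncertain(obs, ensemble_preds):
    """ensemble_preds[k][i]: prediction of model variant k at situation i."""
    n = len(obs)
    k = len(ensemble_preds)
    means = [sum(col) / k for col in zip(*ensemble_preds)]
    sq_bias = sum((o - m) ** 2 for o, m in zip(obs, means)) / n
    var = sum(sum((p[i] - means[i]) ** 2 for p in ensemble_preds) / k
              for i in range(n)) / n
    return sq_bias + var

obs = [3.0, 5.0, 4.0]
ens = [[2.8, 5.2, 4.1], [3.4, 4.6, 3.7], [3.1, 5.3, 4.3]]
print(msep_fixed(obs, ens[0]), msep_uncertain(obs, ens))
```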

  20. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive to child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  2. Sequence stratigraphy and uranium metallogenic characteristics in Xinminpu group, lower cretaceous, in Gongpoquan basin

    International Nuclear Information System (INIS)

    Gong Binli; Wang Jingping

    2006-01-01

    Characteristics of the sequence stratigraphy, the distribution and geologic time of the sequences, boundary features and internal composition, the sedimentary facies, as well as the characteristics of the interlayer oxidation zone and U-mineralization are expounded. It is suggested that the Gongpoquan area holds some prospect for U-prospecting, whereas uranium metallogenic conditions are unfavorable in the Beiluotuoquan area and the Nalinsuhuai area has no prospect either. (authors)

  3. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
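    The decision rule stated in the abstract translates directly into code. The function below is a hypothetical restatement of that rule for illustration, not the authors' published scoring tool, and the underlying regression coefficients are not reproduced.

```python
# Fingerprint verification failure risk, following the criteria in the
# abstract: one major criterion (dystrophy area >= 25%) and two minor
# criteria (long horizontal lines, long vertical lines).

def verification_risk(dystrophy_pct, long_horizontal, long_vertical):
    """Return the qualitative risk of fingerprint verification failure."""
    if dystrophy_pct >= 25:                      # major criterion met
        return "almost always fails"
    minors = int(long_horizontal) + int(long_vertical)
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"                # no criteria met

print(verification_risk(30, False, False))
print(verification_risk(10, True, True))
```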

  4. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  5. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  6. Global mapping of stratigraphy of an old-master painting using sparsity-based terahertz reflectometry.

    Science.gov (United States)

    Dong, Junliang; Locquet, Alexandre; Melis, Marcello; Citrin, D S

    2017-11-08

    The process by which art paintings are produced typically involves the successive applications of preparatory and paint layers to a canvas or other support; however, there is an absence of nondestructive modalities to provide a global mapping of the stratigraphy, information that is crucial for evaluation of its authenticity and attribution, for insights into historical or artist-specific techniques, as well as for conservation. We demonstrate sparsity-based terahertz reflectometry can be applied to extract a detailed 3D mapping of the layer structure of the 17th century easel painting Madonna in Preghiera by the workshop of Giovanni Battista Salvi da Sassoferrato, in which the structure of the canvas support, the ground, imprimatura, underpainting, pictorial, and varnish layers are identified quantitatively. In addition, a hitherto unidentified restoration of the varnish has been found. Our approach unlocks the full promise of terahertz reflectometry to provide a global and detailed account of an easel painting's stratigraphy by exploiting the sparse deconvolution, without which terahertz reflectometry in the past has only provided a meager tool for the characterization of paintings with paint-layer thicknesses smaller than 50 μm. The proposed modality can also be employed across a broad range of applications in nondestructive testing and biomedical imaging.

  7. Mars north polar deposits: stratigraphy, age, and geodynamical response.

    Science.gov (United States)

    Phillips, Roger J; Zuber, Maria T; Smrekar, Suzanne E; Mellon, Michael T; Head, James W; Tanaka, Kenneth L; Putzig, Nathaniel E; Milkovich, Sarah M; Campbell, Bruce A; Plaut, Jeffrey J; Safaeinili, Ali; Seu, Roberto; Biccari, Daniela; Carter, Lynn M; Picardi, Giovanni; Orosei, Roberto; Mohit, P Surdas; Heggy, Essam; Zurek, Richard W; Egan, Anthony F; Giacomoni, Emanuele; Russo, Federica; Cutigni, Marco; Pettinelli, Elena; Holt, John W; Leuschen, Carl J; Marinangeli, Lucia

    2008-05-30

    The Shallow Radar (SHARAD) on the Mars Reconnaissance Orbiter has imaged the internal stratigraphy of the north polar layered deposits of Mars. Radar reflections within the deposits reveal a laterally continuous deposition of layers, which typically consist of four packets of finely spaced reflectors separated by homogeneous interpacket regions of nearly pure ice. The packet/interpacket structure can be explained by approximately million-year periodicities in Mars' obliquity or orbital eccentricity. The observed approximately 100-meter maximum deflection of the underlying substrate in response to the ice load implies that the present-day thickness of an equilibrium elastic lithosphere is greater than 300 kilometers. Alternatively, the response to the load may be in a transient state controlled by mantle viscosity. Both scenarios probably require that Mars has a subchondritic abundance of heat-producing elements.

  8. Compositional stratigraphy of crustal material from near-infrared spectra

    International Nuclear Information System (INIS)

    Pieters, C.M.

    1987-01-01

    An Earth-based telescopic program to acquire near-infrared spectra of freshly exposed lunar material now contains data for 17 large impact craters with central peaks. Noritic, gabbroic, anorthositic and troctolitic rock types can be distinguished for areas within these large craters from characteristic absorptions in individual spectra of their walls and central peaks. Norites dominate the upper lunar crust while the deeper crustal zones also contain significant amounts of gabbros and anorthosites. Data for material associated with large craters indicate that not only is the lunar crust highly heterogeneous across the nearside, but that the compositional stratigraphy of the lunar crust is nonuniform. Crustal complexity should be expected for other planetary bodies, which should be studied using high spatial and spectral resolution data in and around large impact craters

  9. Compositional stratigraphy of crustal material from near-infrared spectra

    Science.gov (United States)

    Pieters, Carle M.

    1987-01-01

    An Earth-based telescopic program to acquire near-infrared spectra of freshly exposed lunar material now contains data for 17 large impact craters with central peaks. Noritic, gabbroic, anorthositic and troctolitic rock types can be distinguished for areas within these large craters from characteristic absorptions in individual spectra of their walls and central peaks. Norites dominate the upper lunar crust while the deeper crustal zones also contain significant amounts of gabbros and anorthosites. Data for material associated with large craters indicate that not only is the lunar crust highly heterogeneous across the nearside, but that the compositional stratigraphy of the lunar crust is nonuniform. Crustal complexity should be expected for other planetary bodies, which should be studied using high spatial and spectral resolution data in and around large impact craters.

  10. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of leading to severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, albeit with a bias in spatial distribution and intensity. The statistical parameters mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts tend toward under-prediction. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
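    The verification statistics named above (ME or bias, RMSE, and CC) can be written out explicitly; the forecast and observation values below are illustrative, not taken from the study.

```python
# Standard forecast verification statistics: mean error (bias), root mean
# square error, and Pearson correlation coefficient.
import math

def verify(forecast, observed):
    n = len(observed)
    me = sum(f - o for f, o in zip(forecast, observed)) / n
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)
    fm = sum(forecast) / n
    om = sum(observed) / n
    cov = sum((f - fm) * (o - om) for f, o in zip(forecast, observed))
    sf = math.sqrt(sum((f - fm) ** 2 for f in forecast))
    so = math.sqrt(sum((o - om) ** 2 for o in observed))
    return me, rmse, cov / (sf * so)

fcst = [12.0, 30.0, 55.0, 20.0]   # mm/day, illustrative rainstorm values
obs  = [10.0, 35.0, 60.0, 15.0]
me, rmse, cc = verify(fcst, obs)
print(me, rmse, cc)
```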

  11. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  12. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    Full Text Available This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  13. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  14. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
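    The Riccati-equation machinery referred to above can be illustrated on a scalar system; the system and cost parameters below are invented for illustration, not taken from the P2AT robot model.

```python
# Minimal scalar LQR: iterate the discrete-time Riccati recursion to a fixed
# point and form the optimal state-feedback gain.

def lqr_scalar(a, b, q, r, iters=200):
    """Solve the scalar discrete algebraic Riccati equation by iteration."""
    p = q
    for _ in range(iters):
        # P = Q + A'PA - A'PB (R + B'PB)^-1 B'PA, written for scalars
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    k = (b * p * a) / (r + b * p * b)   # optimal feedback u = -k * x
    return p, k

p, k = lqr_scalar(a=1.1, b=0.5, q=1.0, r=0.1)
# the closed-loop pole a - b*k should have magnitude < 1 (stable)
print(p, k, abs(1.1 - 0.5 * k))
```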

  15. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  16. Structural styles and zircon ages of the South Tianshan accretionary complex, Atbashi Ridge, Kyrgyzstan: Insights for the anatomy of ocean plate stratigraphy and accretionary processes

    Science.gov (United States)

    Sang, Miao; Xiao, Wenjiao; Orozbaev, Rustam; Bakirov, Apas; Sakiev, Kadyrbek; Pak, Nikolay; Ivleva, Elena; Zhou, Kefa; Ao, Songjian; Qiao, Qingqing; Zhang, Zhixin

    2018-03-01

    The anatomy of an ancient accretionary complex is significant for a better understanding of the tectonic processes of accretionary orogens because of its complicated composition and strong deformation. With a thorough structural and geochronological study of a fossil accretionary complex in the Atbashi Ridge, South Tianshan (Kyrgyzstan), we analyze the structure and architecture of ocean plate stratigraphy in the western Central Asian Orogenic Belt. The architecture of the Atbashi accretionary complex is subdivisible into four lithotectonic assemblages, some of which are mélanges with "block-in-matrix" structure: (1) North Ophiolitic Mélange; (2) High-pressure (HP)/Ultra-high-pressure (UHP) Metamorphic Assemblage; (3) Coherent & Mélange Assemblage; and (4) South Ophiolitic Mélange. The main units are separated by tectonic contacts along faults. The major structures and lithostratigraphy of these units are thrust-fold nappes, thrusted duplexes, and imbricated ocean plate stratigraphy. All these rock units are complexly stacked in 3-D, with the HP/UHP rocks being obliquely extruded southwestward. Detrital zircon ages of meta-sediments provide robust constraints on their provenance from the Ili-Central Tianshan Arc. The isotopic ages of the youngest components of the four units are Late Permian, Early-Middle Triassic, Early Carboniferous, and Early Triassic, respectively. We present a new tectonic model of the South Tianshan: a general northward subduction polarity led to final closure of the South Tianshan Ocean in the End-Permian to Late Triassic. These results help to resolve the long-standing controversy regarding the subduction polarity and the timing of the final closure of the South Tianshan Ocean. Finally, our work sheds light on the use of ocean plate stratigraphy in the analysis of the tectonic evolution of accretionary orogens.

  17. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  18. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  19. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
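    The degree-day mechanism behind such phenology models can be sketched in a few lines; the base temperature and stage threshold below are illustrative values, not calibrated parameters for the cranberry fruitworm.

```python
# Minimal degree-day accumulation using the simple-average method: each day
# contributes the amount by which its mean temperature exceeds a base
# temperature; a life-stage event is predicted once a threshold is crossed.

def accumulated_degree_days(daily_mean_temps, base=10.0):
    """Sum daily heat units above the base temperature."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps)

temps = [8.0, 12.0, 15.0, 9.0, 18.0]     # deg C daily means, illustrative
dd = accumulated_degree_days(temps)
stage_reached = dd >= 15.0               # hypothetical stage threshold
print(dd, stage_reached)
```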

  20. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  1. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To accredit the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM), as they consider a Weibull distribution for the baseline hazard function and account for model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
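    The Weibull proportional-hazard form underlying the BWPHM can be sketched as a baseline Weibull hazard scaled by a covariate term; the shape, scale, and coefficient values below are invented for illustration, not the fitted Calgary parameters.

```python
# Weibull proportional-hazard survival sketch:
#   S(t | x) = exp( -(t/scale)^shape * exp(beta . x) )
# i.e. a Weibull baseline cumulative hazard multiplied by exp(beta . x).
import math

def weibull_ph_survival(t, shape, scale, betas, covariates):
    """Probability that a pipe survives beyond age t given covariates x."""
    lin = sum(b * x for b, x in zip(betas, covariates))
    return math.exp(-((t / scale) ** shape) * math.exp(lin))

# e.g. a 40-year-old main with two hypothetical covariates
s40 = weibull_ph_survival(t=40.0, shape=1.8, scale=80.0,
                          betas=[0.3, 0.5], covariates=[1.0, 0.0])
s60 = weibull_ph_survival(t=60.0, shape=1.8, scale=80.0,
                          betas=[0.3, 0.5], covariates=[1.0, 0.0])
print(s40, s60)
```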

  2. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool

  3. Stratigraphy of amethyst geode-bearing lavas and fault-block structures of the Entre Rios mining district, Paraná volcanic province, southern Brazil

    Directory of Open Access Journals (Sweden)

    LÉO A. HARTMANN

    2014-03-01

    Full Text Available The Entre Rios mining district produces a large volume of amethyst geodes in underground mines and is part of the world class deposits in the Paraná volcanic province of South America. Two producing basalt flows are numbered 4 and 5 in the lava stratigraphy. A total of seven basalt flows and one rhyodacite flow are present in the district. At the base of the stratigraphy, beginning at the Chapecó river bed, two basalt flows are Esmeralda, low-Ti type. The third flow in the sequence is a rhyodacite, Chapecó type, Guarapuava subtype. Above the rhyodacite flow, four basalt flows are Pitanga, high-Ti type including the two mineralized flows; only the topmost basalt in the stratigraphy is a Paranapanema, intermediate-Ti type. Each individual flow is uniquely identified from its geochemical and gamma-spectrometric properties. The study of several sections in the district allowed for the identification of a fault-block structure. Blocks are elongated NW and the block on the west side of the fault was downthrown. This important structural characterization of the mining district will have significant consequences in the search for new amethyst geode deposits and in the understanding of the evolution of the Paraná volcanic province.

  4. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
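A minimal scalar illustration of the least-squares single-step prediction-error criterion described above. The Kalman-predictor machinery of the record is omitted; for a scalar AR(1) output the one-step predictor reduces to yhat(k|k-1) = a*y(k-1), so this is a toy stand-in rather than the compared criteria themselves.

```python
def pem_criterion(a, y):
    """Least-squares single-step prediction-error criterion V(a) = sum_k e_k**2
    with e_k = y(k) - yhat(k|k-1) and the predictor yhat(k|k-1) = a*y(k-1)."""
    return sum((y[k] - a * y[k - 1]) ** 2 for k in range(1, len(y)))

def fit_ar1(y):
    """Closed-form minimizer of V(a): a_hat = sum y(k)y(k-1) / sum y(k-1)**2."""
    num = sum(y[k] * y[k - 1] for k in range(1, len(y)))
    den = sum(y[k - 1] ** 2 for k in range(1, len(y)))
    return num / den
```

On data generated by the true model, the criterion is minimized at the true parameter, which is the defining property a prediction-error method exploits.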

  5. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Prediction of foundation or subgrade settlement is very important during engineering construction. Because many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
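The NGM(1,1,k,c) model extends the classical GM(1,1) grey model. As background, a minimal sketch of classical GM(1,1) (not the paper's optimized variant) shows the accumulate-fit-restore workflow the two share:

```python
import math

def gm11(x0, steps=1):
    """Classical GM(1,1) grey forecast (background for the NGM(1,1,k,c) variant).

    x0: raw positive sequence; returns the fitted sequence plus `steps`
    out-of-sample forecasts.
    """
    n = len(x0)
    # 1-AGO: first-order accumulated generating operation
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    # Background values: consecutive means of the accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    # Least-squares fit of the grey equation x0(k) + a*z(k) = b
    m = n - 1
    sz = sum(z)
    szz = sum(v * v for v in z)
    sy = sum(x0[1:])
    szy = sum(v * y for v, y in zip(z, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time response of the whitenization equation dx1/dt + a*x1 = b
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # Restore by first differences (inverse AGO)
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
```

On a homogeneous exponential sequence GM(1,1) tracks closely but not exactly; removing that residual bias, and handling nonhomogeneous trends, is the point of the optimized NGM(1,1,k,c) formulation.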

  6. Geologic Framework Model Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM).
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  7. Geologic Framework Model Analysis Model Report

    International Nuclear Information System (INIS)

    Clayton, R.

    2000-01-01

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M and O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM).
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and

  8. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters has so far been based mainly on empirical and qualitative know-how and on evolutionary iteration steps. This has resulted in considerable effort in prototype design, construction and testing, and therefore in significant development and qualification costs and long development times. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Integrated numerical models that combine self-consistent kinetic plasma models with plasma-wall interaction modules enable a new quality in the description of electrostatic thrusters and open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools to an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  9. Regional stratigraphy and geologic history of Mare Crisium

    Science.gov (United States)

    Head, J. W., III; Adams, J. B.; Mccord, T. B.; Pieters, C.; Zisk, S.

    1978-01-01

    Remote sensing and Luna 24 sample data are used to develop a summary of the regional stratigraphy and geologic history of Mare Crisium. Laboratory spectra of Luna 24 samples, telescopic reflectance spectra in the 0.3 to 1.1 micron range and orbital X-ray data have identified three major basalt groups in the region. Group I soil is derived from iron- and magnesium-rich titaniferous basalts and was apparently emplaced over the majority of the basin, but is presently exposed only as a shelf in the southwest part. Group II soils, derived from very low titanium ferrobasalts, were emplaced in two stages subsequent to Group I emplacement and now appear as part of the outer shelf and topographic annulus. Subsidence of the basin interior preceded and continued after the emplacement of the third basalt group, a soil derived from a low titanium ferrobasalt. The Luna 24 site is found to be within a patch of Group II material.

  10. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, and this depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  11. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, and this depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  12. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision... ...the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually... ...and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  13. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  14. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  15. Stratigraphy of the late Cenozoic sediments beneath the 216-B and C crib facilities

    International Nuclear Information System (INIS)

    Fecht, K.R.; Last, G.V.; Marratt, M.C.

    1979-02-01

    The stratigraphy of the late Cenozoic sediments beneath the 216-B and C Crib Facilities is presented as lithofacies cross sections and is based on textural variations of the sedimentary sequence lying above the basalt bedrock. The primary source of data in this study is geologic information obtained from well drilling operations and geophysical logging. Stratigraphic interpretations are based primarily on textural analysis and visual examination of sediment samples and supplemented by drillers logs and geophysical logs

  16. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model with a Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved, yielding an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model, which combines the optimized unbiased grey model with the Markov model, is better, and that the rolling operation method may further improve the prediction precision. (authors)

  17. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
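The MMA itself is just the element-wise mean of the two constituent models' predictions, scored here with RMSD. The numbers in the test are hypothetical sweat-loss values, not data from the study:

```python
import math

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mma(pred_a, pred_b):
    """Multi-model average: the element-wise mean of two models' predictions."""
    return [(a + b) / 2.0 for a, b in zip(pred_a, pred_b)]
```

When one model tends to over-predict and the other to under-predict, averaging partly cancels the errors, which is consistent with the 30-39% RMSD reduction the record reports.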

  18. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  19. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  20. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.

  1. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  2. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. Our objective was to investigate whether there are differences in the performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods, we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made things worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that their performance varied widely across stages. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
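The stage-specific-versus-joint comparison can be mimicked with a deliberately simple predictor, the per-stage mean outcome. The stages and numbers below are hypothetical, and the paper's actual models were machine-learning classifiers; this is only a sketch of why per-group training helps when the groups differ:

```python
def stage_means(records):
    """Stage-specific 'model': the mean outcome within each stage.

    records: iterable of (stage, outcome) pairs; returns {stage: mean}.
    A deliberately simple stand-in for per-stage machine-learning models.
    """
    sums, counts = {}, {}
    for stage, y in records:
        sums[stage] = sums.get(stage, 0.0) + y
        counts[stage] = counts.get(stage, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

def mse(records, predict):
    """Mean squared error of the predictor `predict(stage)` over the records."""
    return sum((predict(s) - y) ** 2 for s, y in records) / len(records)
```

Whenever stage-wise outcome distributions differ, the per-stage predictor beats the pooled one on this toy metric, mirroring the paper's finding that per-stage training outperforms joint training.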

  3. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) using models that were developed for catchments, which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, nor groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modeller's assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. 
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  4. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for simultaneously predicting body, trunk and appendicular fat and lean masses from easily measured variables and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED) and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75)% for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m2 and the Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.

  5. Integrated stratigraphy of the Jurassic-Cretaceous sequences of the Kurovice Quarry, Outer Western Carpathians: correlations and tectonic implications

    Czech Academy of Sciences Publication Activity Database

    Pruner, Petr; Schnabl, Petr; Čížková, Kristýna; Elbra, Tiiu; Kdýr, Šimon; Svobodová, Andrea; Reháková, D.

    2017-01-01

    Roč. 120 (2017), s. 216-216 ISSN 1017-8880. [International Symposium on the Cretaceous /10./. 21.08.2017-26.08.2017, Vienna] R&D Projects: GA ČR(CZ) GA16-09979S Institutional support: RVO:67985831 Keywords : stratigraphy * Jurassic-Cretaceous sequences * Western Carpathians Subject RIV: DB - Geology ; Mineralogy

  6. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used in two ways: to design the so-called fundamental model of the plant and to capture the uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which makes it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that the cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
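The instantaneous-linearization idea can be seen on a scalar toy plant: once the model is locally linear, a one-step quadratic cost has a closed-form minimizer. The paper's real controller solves a constrained QP over a multi-step horizon with a neural plant model; this unconstrained one-step version is only a sketch of that mechanism.

```python
def mpc_one_step(a, b, lam, x, r):
    """One-step MPC move for the locally linearized plant x_next = a*x + b*u.

    Minimizes J(u) = (a*x + b*u - r)**2 + lam*u**2; setting dJ/du = 0 gives
    the closed-form minimizer u = b*(r - a*x) / (b**2 + lam).
    """
    return b * (r - a * x) / (b * b + lam)

def simulate(a, b, lam, x0, r, steps):
    """Closed-loop response: re-solve the one-step problem at every sample,
    i.e. re-linearize-and-optimize in the receding-horizon spirit."""
    x = x0
    for _ in range(steps):
        u = mpc_one_step(a, b, lam, x, r)
        x = a * x + b * u
    return x
```

Note the nonzero steady-state offset in the closed loop: the control penalty lam trades tracking error against actuation effort, one reason practical formulations add integral action or longer horizons.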

  7. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.

  8. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
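    The predictive-control loop described above can be sketched with a scalar plant: an internal model rolls the state forward over a horizon, and the input with the smallest predicted tracking error is applied. The plant, gains, and grid of candidate inputs are illustrative assumptions, not the authors' gait model.

```python
def simulate(x, u, steps, a=0.9, b=0.5):
    """Internal model: x[k+1] = a*x[k] + b*u, with u held constant."""
    for _ in range(steps):
        x = a * x + b * u
    return x

def mpc_step(x, ref, horizon=5, candidates=None):
    """Pick the constant input whose predicted end-of-horizon output
    is closest to the reference."""
    if candidates is None:
        candidates = [i / 10 for i in range(-20, 21)]   # grid over [-2, 2]
    return min(candidates,
               key=lambda u: (simulate(x, u, horizon) - ref) ** 2)

# Closed loop: drive the state from 0 toward the reference 1.0.
x, ref = 0.0, 1.0
for _ in range(30):
    u = mpc_step(x, ref)
    x = 0.9 * x + 0.5 * u      # here the plant matches the internal model
print(round(x, 2))
```

    In the paper's setting the internal model is the gait dynamics and the optimizer is a proper QP solver rather than a grid search, but the predict-compare-optimize structure is the same.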

  9. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  10. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
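    The measurement-based aggregation is straightforward to express: municipality mean radon levels weighted by population size. The municipalities and values below are invented for illustration.

```python
# Population-weighted mean exposure from per-municipality means.
municipalities = [
    {"mean_radon_bq_m3": 60.0,  "population": 20000},
    {"mean_radon_bq_m3": 120.0, "population": 5000},
    {"mean_radon_bq_m3": 80.0,  "population": 10000},
]

total_pop = sum(m["population"] for m in municipalities)
weighted_mean = sum(m["mean_radon_bq_m3"] * m["population"]
                    for m in municipalities) / total_pop
print(round(weighted_mean, 1))
```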

  11. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant carries marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance, providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first is individual analysis and characterization of real char types using an automated program; the second is predicted char types based on data collected during automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  12. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  13. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
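    The jackknife/bootstrap combination described above can be sketched as follows; the per-site volumes are invented, and the leave-one-out rescaling is one simple choice of jackknife replicate, not necessarily the authors' exact formulation.

```python
import random

def total_volume(cells):
    """Total recoverable volume as the sum of per-site estimates."""
    return sum(cells)

cells = [12.0, 9.5, 14.2, 11.1, 8.7, 13.3, 10.4, 12.8]   # per-site estimates
n = len(cells)
theta = total_volume(cells)

# Jackknife replicates: re-estimate the total leaving one site out,
# rescaled back to n sites.
jack = [total_volume(cells[:i] + cells[i + 1:]) * n / (n - 1)
        for i in range(n)]

# Bootstrap the jackknife replicates for a 90% interval on the total.
rng = random.Random(42)
boots = sorted(
    sum(rng.choice(jack) for _ in range(n)) / n
    for _ in range(2000)
)
lo, hi = boots[100], boots[1899]          # 5th and 95th percentiles
print(round(theta, 1), round(lo, 1), round(hi, 1))
```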

  14. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  15. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action to anticipate it. To detect this potential, a company can utilize a bankruptcy prediction model, which can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. According to the comparative study of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the fuzzy k-NN method achieves the best performance, with 77.5% accuracy.
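    A minimal k-NN classifier of the kind compared in the study can be written in a few lines and scored by held-out accuracy; the two-feature data below are invented stand-ins for financial ratios.

```python
def knn_predict(train, point, k=3):
    """Majority vote among the k nearest training rows (x1, x2, label)."""
    nearest = sorted(train,
                     key=lambda r: (r[0] - point[0]) ** 2
                                   + (r[1] - point[1]) ** 2)[:k]
    votes = [label for _, _, label in nearest]
    return max(set(votes), key=votes.count)

# (x1, x2, label): 1 = bankrupt, 0 = healthy; values are invented.
train = [(0.1, 0.2, 1), (0.2, 0.1, 1), (0.15, 0.3, 1), (0.3, 0.25, 1),
         (0.8, 0.9, 0), (0.9, 0.8, 0), (0.85, 0.7, 0), (0.7, 0.95, 0)]
test = [(0.2, 0.2, 1), (0.25, 0.15, 1), (0.8, 0.85, 0), (0.75, 0.9, 0)]

correct = sum(knn_predict(train, (x1, x2)) == y for x1, x2, y in test)
accuracy = correct / len(test)
print(accuracy)
```

    The fuzzy k-NN variant that won the comparison replaces the hard majority vote with distance-weighted membership degrees.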

  16. A grey NGM(1,1,k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. As for the approximate nonhomogeneous exponential data sequence often emerging in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward in order to promote the predictive performance. It achieves organic integration of the self-memory principle of dynamic system and grey NGM(1,1,k) model. The traditional grey model's weakness as being sensitive to initial value can be overcome by the self-memory principle. In this study, total energy, coal, and electricity consumption of China is adopted for demonstration by using the proposed coupling prediction technique. The results show the superiority of NGM(1,1,k) self-memory coupling prediction model when compared with the results from the literature. Its excellent prediction performance lies in that the proposed coupling model can take full advantage of the systematic multitime historical data and catch the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
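    The classical GM(1,1) core that the NGM(1,1,k) variant extends can be sketched as follows, without the self-memory coupling; the input series is invented.

```python
import math

def gm11_forecast(x0, steps=1):
    """Classical GM(1,1): fit a, b in x0(k) = -a*z1(k) + b by least
    squares, then forecast via the whitening-equation time response."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # 1-AGO series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    y = x0[1:]
    m = n - 1
    sz, sy = sum(z1), sum(y)
    szz = sum(z * z for z in z1)
    szy = sum(z * v for z, v in zip(z1, y))
    p = (m * szy - sz * sy) / (m * szz - sz * sz)   # OLS slope of y on z1
    a, b = -p, (sy - p * sz) / m

    def x1_hat(i):                                   # 1-based position
        return (x0[0] - b / a) * math.exp(-a * (i - 1)) + b / a

    # Inverse AGO: first differences of the fitted cumulative series.
    return [x1_hat(n + k) - x1_hat(n + k - 1) for k in range(1, steps + 1)]

x0 = [100.0, 110.0, 121.0, 133.1]   # roughly 10% growth per period
pred = gm11_forecast(x0)[0]
print(round(pred, 1))               # close to the ~146.4 a 10% trend implies
```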

  17. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatments decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it is only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care as well as its cost-effectiveness should be tested in impact studies before wide dissemination of models beyond the research context. Several predictions models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.

  18. Alluvial flash-flood stratigraphy of a large dryland river: the Luni River, Thar Desert, Western India

    Science.gov (United States)

    Carling, Paul; Leclair, Suzanne; Robinson, Ruth

    2017-04-01

    Detailed descriptions of the fluvial architecture of large dryland rivers are few, which hinders the understanding of stratigraphic development in aggradational settings. The aim of this study was to obtain new generic insight into the fluvial dynamics and resultant stratigraphy of such a river. The novelty of this investigation is that an unusually extensive and deep section across a major active dryland river was logged and the dated stratigraphy related to the behaviour of the discharge regime. The results should help improve understanding of stratigraphic development in modern dryland rivers and aid in characterizing oil, gas and groundwater reservoirs in the dryland geological record more generally. The Luni River is the largest river in the Thar Desert, India, yet details of its channel stratigraphy are sparse. Discharges can reach 14,000 m³ s⁻¹, but the bed is dry most of the year. GPS positioning and mm-resolution surveys within a 700 m long, 5 m deep trench enabled logging and photography of the strata associations, dated using optically stimulated luminescence (OSL). The deposits consist of planar, sandy, upper-stage plane-bed lamination and low-angle stratification, sandwiching less-frequent dune trough cross-sets. Mud clasts are abundant at all elevations. Water-ripple cross-sets or silt-clay layers occur rarely, usually near the top of sections. Aeolian dune cross-sets also appear sparsely at higher elevations. Consequently, the majority of preserved strata are due to supercritical flows. Localized deep scour causes massive collapse and soft-sediment deformation. Scour holes are infilled by rapidly deposited massive sands adjacent to older bedded deposits. Within bedform phase diagrams, estimated hydraulic parameters indicate a dominance of the upper-stage plane-bed state, but the presence of dune cross-sets is also related to the flood hydrograph. Repeated deep scour results in units of deposition of different OSL ages (50 to 500 years BP) found at

  19. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  20. Geomorphology, facies architecture, and high-resolution, non-marine sequence stratigraphy in avulsion deposits, Cumberland Marshes, Saskatchewan

    Science.gov (United States)

    Farrell, K. M.

    2001-02-01

    This paper demonstrates field relationships between landforms, facies, and high-resolution sequences in avulsion deposits. It defines the building blocks of a prograding avulsion sequence from a high-resolution sequence stratigraphy perspective, proposes concepts in non-marine sequence stratigraphy and flood basin evolution, and defines the continental equivalent to a parasequence. The geomorphic features investigated include a distributary channel and its levee, the Stage I crevasse splay of Smith et al. (Sedimentology, vol. 36 (1989) 1), and the local backswamp. Levees and splays have been poorly studied in the past, and three-dimensional (3D) studies are rare. In this study, stratigraphy is defined from the finest scale upward and facies are mapped in 3D. Genetically related successions are identified by defining a hierarchy of bounding surfaces. The genesis, architecture, geometry, and connectivity of facies are explored in 3D. The approach used here reveals that avulsion deposits are comparable in process, landform, facies, bounding surfaces, and scale to interdistributary bayfill, i.e. delta lobe deposits. Even a simple Stage I splay is a complex landform, composed of several geomorphic components, several facies and many depositional events. As in bayfill, an alluvial ridge forms as the feeder crevasse and its levees advance basinward through their own distributary mouth bar deposits to form a Stage I splay. This produces a shoestring-shaped concentration of disconnected sandbodies that is flanked by wings of heterolithic strata, that join beneath the terminal mouth bar. The proposed results challenge current paradigms. Defining a crevasse splay as a discrete sandbody potentially ignores 70% of the landform's volume. An individual sandbody is likely only a small part of a crevasse splay complex. The thickest sandbody is a terminal, channel associated feature, not a sheet that thins in the direction of propagation. The three stage model of splay evolution

  1. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aims to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent surgical resection of HCC were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). The Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis, and a receiver operating characteristic (ROC) curve was drawn accordingly. The area under the ROC curve (AUROC) and the PI cutoff value were identified. Multiple Cox regression analysis of the prediction model group showed that neutrophil-to-lymphocyte ratio (NLR), histological grade (HG), microvascular invasion (MVI), positive resection margin (PRM), number of tumors (NT), and postoperative transcatheter arterial chemoembolization (TACE) treatment were independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT − 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38; the sensitivity, specificity, and accuracy were 78.0%, 80.0%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B-related HCC with high sensitivity, specificity, and accuracy.
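    The published index is directly computable from the abstract's coefficients. In the sketch below the patient values are invented, and coding HG, PRM, MVI, NT, and TACE as 0/1 indicators is our assumption rather than the paper's stated coding.

```python
def prognostic_index(nlr, hg, prm, mvi, nt, tace):
    """PI from the abstract: higher values indicate higher mortality risk."""
    return (0.377 * nlr + 0.554 * hg + 0.927 * prm
            + 0.778 * mvi + 0.740 * nt - 0.831 * tace)

CUTOFF = 3.38   # cutoff reported for the development cohort

# Hypothetical patient: NLR 2.5, poor histological grade, positive
# margin, microvascular invasion, one tumor category, no TACE.
pi = prognostic_index(nlr=2.5, hg=1, prm=1, mvi=1, nt=1, tace=0)
high_risk = pi > CUTOFF
print(round(pi, 3), high_risk)
```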

  2. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for the past 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  3. The INTIMATE event stratigraphy and recommendations for its use

    Science.gov (United States)

    Rasmussen, Sune O.

    2014-05-01

    The North Atlantic INTIMATE (INtegration of Ice-core, MArine and TErrestrial records) group has previously recommended an Event Stratigraphy approach for the synchronisation of records of the Last Termination using the Greenland ice core records as the regional stratotypes. A key element of these protocols has been the formal definition of numbered Greenland Stadials (GS) and Greenland Interstadials (GI) within the past glacial period as the Greenland expressions of the characteristic Dansgaard-Oeschger events that represent cold and warm phases of the North Atlantic region, respectively. Using a recent synchronization of the NGRIP, GRIP, and GISP2 ice cores that allows the parallel analysis of all three records on a common time scale, we here present an extension of the GS/GI stratigraphic template to the entire glacial period. In addition to the well-known sequence of Dansgaard-Oeschger events that were first defined and numbered in the ice core records more than two decades ago, a number of short-lived climatic oscillations have been identified in the three synchronized records. Some of these events have been observed in other studies, but we here propose a consistent scheme for discriminating and naming all the significant climatic events of the last glacial period that are represented in the Greenland ice cores. In addition to presenting the updated event stratigraphy, we make a series of recommendations on how to refer to these periods in a way that promotes unambiguous comparison and correlation between different proxy records, providing a more secure basis for investigating the dynamics and fundamental causes of these climatic perturbations. The work presented is a part of a manuscript under review for publication in Quaternary Science Reviews. Author team: S.O. Rasmussen, M. Bigler, S.P.E. Blockley, T. Blunier, S.L. Buchardt, H.B. Clausen, I. Cvijanovic, D. Dahl-Jensen, S.J. Johnsen, H. Fischer, V. Gkinis, M. Guillevic, W.Z. Hoek, J.J. Lowe, J. Pedro, T

  4. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.
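    The duration-explicit ("semi") part of an HSMM, and the remaining-useful-lifetime idea, can be illustrated with a toy Monte Carlo simulation of a degradation chain; the states, transitions, and uniform sojourn times are invented, and this sidesteps the learning and inference machinery the paper actually modifies.

```python
import random

STATES = ["good", "degraded", "critical", "failed"]
NEXT = {"good": "degraded", "degraded": "critical", "critical": "failed"}
DURATION = {                     # (low, high) of a uniform sojourn time
    "good": (50, 100),
    "degraded": (20, 60),
    "critical": (5, 20),
}

def simulate_rul(state, rng):
    """Time until the chain reaches 'failed', starting from `state`."""
    t = 0.0
    while state != "failed":
        t += rng.uniform(*DURATION[state])   # explicit state duration
        state = NEXT[state]
    return t

rng = random.Random(7)
samples = [simulate_rul("degraded", rng) for _ in range(5000)]
mean_rul = sum(samples) / len(samples)
print(round(mean_rul, 1))   # expected 40 + 12.5 = 52.5 on average
```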

  5. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  6. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte

  7. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.
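    The energy-flow principle underlying SEA can be shown with a two-subsystem power balance: input power equals power dissipated internally plus net power exchanged through the coupling. All loss factors and the input power below are illustrative assumptions.

```python
import math

omega = 2 * math.pi * 1000.0             # rad/s at 1 kHz
eta1, eta2 = 0.01, 0.02                  # internal loss factors
eta12, eta21 = 0.001, 0.0005             # coupling loss factors
P1, P2 = 1.0, 0.0                        # input power (W): source in subsystem 1

# SEA balance: omega * [[eta1+eta12, -eta21],
#                       [-eta12, eta2+eta21]] @ [E1, E2] = [P1, P2]
a11, a12 = omega * (eta1 + eta12), -omega * eta21
a21, a22 = -omega * eta12, omega * (eta2 + eta21)
det = a11 * a22 - a12 * a21
E1 = (P1 * a22 - a12 * P2) / det         # 2x2 Cramer's rule
E2 = (a11 * P2 - a21 * P1) / det
print(round(E1, 5), round(E2, 6))        # source subsystem holds more energy
```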

  8. Radar imaging of glaciovolcanic stratigraphy, Mount Wrangell caldera, Alaska - Interpretation model and results

    Science.gov (United States)

    Clarke, Garry K. C.; Cross, Guy M.; Benson, Carl S.

    1989-01-01

    Glaciological measurements and an airborne radar sounding survey of the glacier lying in Mount Wrangell caldera raise many questions concerning the glacier thermal regime and volcanic history of Mount Wrangell. An interpretation model has been developed that allows the depth variation of temperature, heat flux, pressure, density, ice velocity, depositional age, and thermal and dielectric properties to be calculated. Some predictions of the interpretation model are that the basal ice melting rate is 0.64 m/yr and the volcanic heat flux is 7.0 W/m². By using the interpretation model to calculate two-way travel time and propagation losses, radar sounding traces can be transformed to give estimates of the variation of power reflection coefficient as a function of depth and depositional age. Prominent internal reflecting zones are located at depths of approximately 59-91 m, 150 m, 203 m, and 230 m. These internal reflectors are attributed to buried horizons of acidic ice, possibly intermixed with volcanic ash, that were deposited during past eruptions of Mount Wrangell.
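    The travel-time-to-depth conversion implicit in such an interpretation model can be sketched with a constant radio-wave speed in ice (about 1.68e8 m/s, a standard value); ignoring the firn density profile is our simplification.

```python
C_ICE = 1.68e8   # m/s, radio-wave speed in solid ice

def depth_from_twtt(twtt_us):
    """Reflector depth (m) from two-way travel time in microseconds:
    one-way time multiplied by wave speed."""
    return C_ICE * (twtt_us * 1e-6) / 2.0

# A reflector at ~0.75 us two-way travel time lies near 63 m depth,
# in the range of the shallowest reflecting zone reported above.
print(round(depth_from_twtt(0.75), 1))
```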

  9. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of "any fall" and "recurrent falls." Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
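    The AUC values reported above have a simple rank interpretation: the probability that a randomly chosen faller receives a higher predicted risk than a randomly chosen non-faller. The scores and labels below are invented.

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of (positive, negative) pairs where the
    positive outscores the negative, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,    1,   0,   0,   1,   0]
print(round(auc(scores, labels), 2))
```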

  10. Linking Backbarrier Lacustrine Stratigraphy with Spatial Dynamics of Shoreline Retreat in a Rapidly Subsiding Region of the Mississippi River Delta

    Science.gov (United States)

    Dietz, M.; Liu, K. B.; Bianchette, T. A.; Yao, Q.

    2017-12-01

    The shoreline along the northern Gulf of Mexico is rapidly retreating as coastal features of abandoned Mississippi River delta complexes erode and subside. Bay Champagne is located in the Caminada-Moreau headland, a region of shoreline west of the currently active delta that has one of the highest rates of retreat and land loss. As a result, this site has transitioned from a stable, circular inland lake several kilometers from the shore to a frequently perturbed, semi-circular backbarrier lagoon, making it ideal to study the environmental effects of progressive land loss. Analyses of clastic layers in a series of sediment cores collected at this site over the past decade indicate the lake was less perturbed in the past and has become increasingly sensitive to marine incursion events caused by tropical cyclones. Geochemical and pollen analyses of these cores also reveal profound changes in environmental and chemical conditions in Bay Champagne over the past century as the shoreline has retreated. Through relating stratigraphy to spatial changes observed from satellite imagery, this study attempts to identify the tipping point at which Bay Champagne began the transition from an inland lake to a backbarrier environment, and to determine the rate at which this transition occurred. Results will be used to develop a model of the environmental transition experienced by a rapidly retreating coastline and to predict how other regions of the Mississippi River deltaic system could respond to future shoreline retreat.

  11. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and in rare cases to serious and sometimes life threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge on the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.

  12. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward in order to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is adopted for demonstration using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174
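
    As a rough illustration of grey forecasting, the classical GM(1,1) model, the base on which the NGM(1,1,k) self-memory variant builds, can be sketched as below. The consumption-style series is invented; the paper's self-memory coupling step is not implemented here.

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """Classical GM(1,1) grey forecast: accumulate the series (1-AGO),
    fit the whitening equation dx1/dt + a*x1 = b by least squares on the
    background values, then difference the fitted accumulation back."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                          # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(np.concatenate([[0.0], x1_hat]))

# Near-exponential consumption-style series (arbitrary units).
series = [102.0, 110.5, 119.8, 129.9, 140.8, 152.6]
fit = gm11_forecast(series, horizon=2)
print("fitted values + 2-step forecast:", np.round(fit, 1))
```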

  13. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    The predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution, and the result is therefore able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are treated as random quantities whose variations follow probability distributions. A Bayesian predictive model for the Rayleigh distribution, which has only a single scale parameter, is proposed, and closed-form posterior and predictive inferences under different reasonable choices of prior distribution are presented in a sensitivity analysis.
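
    A closed-form posterior predictive of the kind this record describes can be sketched under a conjugate inverse-gamma prior on the Rayleigh scale parameter σ². The prior hyperparameters and the synthetic wind-speed data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rayleigh_posterior(x, alpha0=2.0, beta0=2.0):
    """Conjugate update: a Rayleigh likelihood with an inverse-gamma
    IG(alpha0, beta0) prior on sigma^2 gives an IG posterior."""
    x = np.asarray(x, dtype=float)
    return alpha0 + len(x), beta0 + 0.5 * np.sum(x ** 2)

def predictive_pdf(v, alpha, beta):
    """Posterior predictive p(v) = alpha * beta^alpha * v / (beta + v^2/2)^(alpha+1),
    evaluated in log space because beta^alpha overflows for realistic n."""
    return alpha * v * np.exp(alpha * np.log(beta)
                              - (alpha + 1.0) * np.log(beta + 0.5 * v ** 2))

rng = np.random.default_rng(0)
speeds = rng.rayleigh(scale=7.0, size=200)      # synthetic site data, m/s
a_post, b_post = rayleigh_posterior(speeds)

v = np.linspace(0.0, 60.0, 4001)
pdf = predictive_pdf(v, a_post, b_post)
mass = float(np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(v)))
print(f"posterior IG({a_post:.0f}, {b_post:.0f}); predictive mass = {mass:.4f}")
```

    The predictive density is heavier-tailed than any single Rayleigh, which is exactly how parameter uncertainty shows up in the forecast.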

  14. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup. An ODE model is deterministic and can predict the future perfectly; a more realistic approach is to allow for randomness in the model, due to, e.g., the model being too simple or errors in the input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs).
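
    A minimal illustration of the SDE-versus-ODE contrast: the Euler-Maruyama scheme below simulates a toy one-compartment elimination model with multiplicative noise, and setting the diffusion term to zero recovers the deterministic ODE. The rate constants are invented for illustration.

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, t_end, n_steps, seed=0):
    """Simulate dX = drift(X) dt + diffusion(X) dW with the
    Euler-Maruyama scheme; diffusion = 0 recovers the deterministic ODE."""
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dw
    return x

# Toy one-compartment elimination: dC = -k*C dt + sigma*C dW.
k_el, sigma = 0.8, 0.15
stochastic = euler_maruyama(10.0, lambda c: -k_el * c,
                            lambda c: sigma * c, t_end=5.0, n_steps=5000)
deterministic = euler_maruyama(10.0, lambda c: -k_el * c,
                               lambda c: 0.0, t_end=5.0, n_steps=5000)
print(f"C(5): ODE ~ {deterministic[-1]:.4f}, one SDE path ~ {stochastic[-1]:.4f}")
```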

  15. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering, and selection of a model for the current time, gives better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model framework for the prediction of solar radiation is proposed. The framework starts from the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A pattern-identification procedure is then developed to identify the pattern that best fits the current period; based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction results of the proposed framework are then compared to those of other techniques, and it is shown that the proposed framework provides superior performance.
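
    The segment-cluster-predict pipeline can be sketched as follows. The tiny k-means routine and the two synthetic daily profiles stand in for the paper's actual clustering paradigm and its Singapore data.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Tiny k-means (deterministic init from the first k rows) used to
    group solar-radiation subsequences into clusters."""
    centers = X[:k].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two synthetic daily regimes sampled at hourly resolution.
t = np.linspace(0.0, np.pi, 24)
sunny = 900.0 * np.sin(t)            # clear-sky profile, W/m^2
cloudy = np.full(24, 150.0)          # overcast profile
rng = np.random.default_rng(1)
days = np.array([(sunny if i % 2 == 0 else cloudy) + rng.normal(0, 20, 24)
                 for i in range(40)])

labels, centers = kmeans(days, k=2)
# In the full framework one prediction model is trained per cluster and
# selected at run time; here the cluster centroid plays that role.
for j in range(2):
    print(f"cluster {j}: {int((labels == j).sum())} days, "
          f"peak ~ {centers[j].max():.0f} W/m^2")
```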

  16. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient K and the dispersion

  17. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for the prediction of contrast-induced nephropathy (CIN); however, they include only patients receiving intra-arterial contrast media for coronary angiographic procedures, who represent a small proportion of all contrast procedures, and most of them evaluate variables related to the radiological interventional procedure. It is therefore necessary to develop a model that predicts CIN before radiological procedures among all patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine above the baseline value within 72 hours. Preprocedural clinical variables were used to develop the prediction model from the training data set with the random forest machine learning method, and 5-fold cross-validation was used to evaluate the prediction accuracy of the model. Finally, we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave a prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability for CIN development and thereby provides preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
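
    The CIN outcome definition used in this study translates directly into code. The sketch below assumes the "increase of 25% and/or 0.5 mg/dL" thresholds are inclusive, which the abstract does not state explicitly.

```python
def is_cin(baseline_scr, scr_72h):
    """CIN label per the abstract: serum creatinine rising by >=25%
    and/or >=0.5 mg/dL above baseline within 72 h of contrast."""
    rise = scr_72h - baseline_scr
    return rise >= 0.5 or rise >= 0.25 * baseline_scr

print(is_cin(1.0, 1.2))    # 20% rise and 0.2 mg/dL rise: below both cut-offs
print(is_cin(1.0, 1.3))    # 30% relative rise
print(is_cin(2.4, 2.95))   # 0.55 mg/dL absolute rise
```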

  18. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that treat each of a patient's past appointment statuses, a time-dependent component, as an independent predictor in order to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop the no-show models, using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed, one for each number of available past appointments. Simulation was employed to test the effect of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, up to around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to implement the no-show predictive model systematically in order to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
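
    The threshold-selection step (choosing the cut-off that minimizes misclassification of show/no-show assignments) can be sketched as a small search. The cost weights and toy data below are assumptions, not the paper's values.

```python
def best_threshold(probs, shows, miss_cost_noshow=1.0, miss_cost_show=1.0):
    """Pick the cut-off on predicted no-show probability that minimises
    weighted misclassification (equal unit costs by default)."""
    best, best_cost = 0.5, float("inf")
    for t in sorted(set(probs)):
        cost = sum(
            miss_cost_noshow if (p < t and not showed) else   # missed a no-show
            miss_cost_show if (p >= t and showed) else 0.0    # flagged a show
            for p, showed in zip(probs, shows)
        )
        if cost < best_cost:
            best, best_cost = t, cost
    return best, best_cost

# probs = predicted P(no-show); shows = True if the patient attended.
probs = [0.05, 0.10, 0.20, 0.60, 0.70, 0.90]
shows = [True, True, True, False, True, False]
print(best_threshold(probs, shows))
```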

  19. Morphology, stratigraphy and oxygen isotope composition of fossil glacier ice at Ledyanaya Gora, Northwest Siberia, Russia

    International Nuclear Information System (INIS)

    Vaikmaee, R.; Michel, F.A.; Solomatin, V.I.

    1993-01-01

    Studies of the stratigraphy, sedimentology, structure and isotope composition of a buried massive ice body and its encompassing sediments at Ledyanaya Gora in northwestern Siberia demonstrate that the ice is relict glacier ice, probably emplaced during the Early Weichselian. Characteristics of this ice body should serve as a guide for the identification of other relict buried glacier ice bodies in permafrost regions. 31 refs., 9 figs., 2 tabs

  20. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  1. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  2. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  3. River meander modeling and confronting uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Posner, Ari J. (University of Arizona Tucson, AZ)

    2011-05-01

    This study examines the meandering phenomenon as it occurs in media throughout terrestrial, glacial, atmospheric, and aquatic environments. Analysis of the minimum energy principle, along with Coriolis-force and random-walk theories of meandering, found that these theories apply at different temporal and spatial scales: Coriolis forces might induce topological changes resulting in meandering planforms, while the minimum energy principle might explain how these forces combine to limit sinuosity to the depth and width ratios that are common throughout various media. The study then compares the first-order analytical solutions for the flow field by Ikeda, et al. (1981) and Johannesson and Parker (1989b). Ikeda et al.'s linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with the physical properties of the bank (e.g., cohesiveness, stratigraphy, or vegetation density). The developed model was used to predict the evolution of meandering planforms; the modeling results were then analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model.
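
    The idea of a stochastic erosion coefficient can be sketched with a Monte Carlo ensemble over a linear erosion law of the form dn/dt = E·u_b (migration rate proportional to near-bank excess velocity). All parameter values below, including the lognormal spread standing in for unmeasured bank properties, are illustrative assumptions.

```python
import numpy as np

def migration_ensemble(excess_velocity, years, e_mean=2e-7, e_cv=0.5,
                       n_draws=10_000, seed=0):
    """Monte Carlo treatment of a linear bank-erosion law dn/dt = E * u_b,
    with the (dimensionless) erosion coefficient E drawn from a lognormal
    distribution to represent unmeasured bank properties (cohesiveness,
    stratigraphy, vegetation density)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + e_cv ** 2))
    mu = np.log(e_mean) - 0.5 * sigma ** 2      # keeps E's mean at e_mean
    E = rng.lognormal(mu, sigma, n_draws)
    seconds = years * 365.25 * 24.0 * 3600.0
    return E * excess_velocity * seconds        # lateral migration, metres

dist = migration_ensemble(excess_velocity=0.1, years=10)
print(f"mean migration over 10 yr: {dist.mean():.1f} m")
print(f"5th-95th percentile: {np.percentile(dist, 5):.1f}-"
      f"{np.percentile(dist, 95):.1f} m")
```

    The spread of the ensemble, rather than a single calibrated value, is what the abstract argues makes the stochastic model's planform predictions more realistic.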

  4. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
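
    The bootstrap histogram-of-alternative-model-results idea can be sketched on toy data. The straight-line dose-response and the synthetic cohort below are illustrative stand-ins for the paper's salivary-function model and patient data.

```python
import numpy as np

def bootstrap_predictions(dose, response, new_dose, n_boot=2000, seed=0):
    """Resample patients with replacement, refit a straight-line outcome
    model on each resample, and collect the prediction at new_dose,
    yielding a histogram of alternative model results."""
    rng = np.random.default_rng(seed)
    dose = np.asarray(dose)
    response = np.asarray(response)
    preds = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(dose), len(dose))
        slope, intercept = np.polyfit(dose[idx], response[idx], 1)
        preds[b] = slope * new_dose + intercept
    return preds

# Synthetic cohort: % salivary function falls with mean parotid dose (Gy).
rng = np.random.default_rng(42)
dose = rng.uniform(10, 60, 80)
resp = 100.0 - 1.2 * dose + rng.normal(0, 8, 80)
preds = bootstrap_predictions(dose, resp, new_dose=40.0)
lo_q, hi_q = np.percentile(preds, [2.5, 97.5])
print(f"predicted function at 40 Gy: {preds.mean():.1f}% "
      f"(95% interval {lo_q:.1f}-{hi_q:.1f}%)")
```

    A residual-noise component, as the abstract describes, would be added on top of this parameter-uncertainty histogram for a patient-level prediction interval.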

  5. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  7. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  8. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  9. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
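
    The Pareto-optimality filter at the heart of this paper's model-identification step can be sketched as below. The two criteria (predictive accuracy and applicability-domain coverage) and the candidate scores are invented for illustration; the paper's actual criteria may differ.

```python
def pareto_front(models):
    """Return the names of models not dominated on two criteria
    (both to be maximised)."""
    front = []
    for name, acc, cov in models:
        dominated = any(
            (a2 >= acc and c2 >= cov) and (a2 > acc or c2 > cov)
            for _, a2, c2 in models
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical candidate QSAR models for one endpoint (e.g. IGC50),
# scored on accuracy and on applicability-domain coverage.
models = [("M1", 0.90, 0.40), ("M2", 0.80, 0.80),
          ("M3", 0.70, 0.60), ("M4", 0.60, 0.95)]
print(pareto_front(models))
```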

  10. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in buildings is one of the key issues from an environmental point of view, as it is in the industrial, transportation and residential fields. Half of the total energy consumption in a building is accounted for by HVAC (Heating, Ventilating and Air Conditioning) systems. In order to realize energy conservation in an HVAC system, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical model with a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load prediction model learning are poor; (2) it has a self-checking function, which always supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; and (3) it has the ability to adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement in load prediction performance is illustrated.
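
    The self-checking function described above can be sketched as a simple consistency supervisor. The relative tolerance and fallback rule are illustrative assumptions, not details from the paper.

```python
def supervise(physical_pred, jit_pred, tolerance=0.15):
    """Flag the forecast when the data-driven (JIT) load prediction drifts
    away from the physics-based one by more than a relative tolerance;
    fall back to the physical model when it does."""
    gap = abs(physical_pred - jit_pred) / max(abs(physical_pred), 1e-9)
    return ("ok", jit_pred) if gap <= tolerance else ("check", physical_pred)

print(supervise(480.0, 505.0))   # kW thermal load, models agree
print(supervise(480.0, 610.0))   # disagreement -> fall back to physics
```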

  11. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is (1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, (2) to discuss common pitfalls and methodological errors in developing a model, and (3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the questions of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures, and evaluating the performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data are presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates the aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold: first, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and second, theoretical, in terms of identifying factors responsible for the high growth of small and medium-sized companies.
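
    Two of the reporting tools the paper discusses, the classification table and the odds ratio, can be sketched on toy data. The predicted probabilities and the example coefficient below are invented; only the mechanics are illustrated.

```python
import math

def classification_table(actual, predicted_prob, threshold=0.5):
    """2x2 classification table for a fitted logistic growth model,
    plus the accuracy figure usually reported alongside it."""
    tp = sum(1 for a, p in zip(actual, predicted_prob) if a and p >= threshold)
    tn = sum(1 for a, p in zip(actual, predicted_prob) if not a and p < threshold)
    fp = sum(1 for a, p in zip(actual, predicted_prob) if not a and p >= threshold)
    fn = sum(1 for a, p in zip(actual, predicted_prob) if a and p < threshold)
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn,
            "accuracy": (tp + tn) / len(actual)}

def odds_ratio(beta):
    """Effect measure for one logistic-regression coefficient: a unit
    increase in the predictor multiplies the odds of growth by exp(beta)."""
    return math.exp(beta)

grew = [1, 1, 0, 0, 1, 0]
phat = [0.9, 0.6, 0.3, 0.55, 0.7, 0.2]
print(classification_table(grew, phat))
print(f"odds ratio for a coefficient of 0.4: {odds_ratio(0.4):.2f}")
```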

  12. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper, prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data are available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; when statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length, and auto-correlation of wind speed and power is considered, and adaptive estimation methods are applied to take the time variations into account. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
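
    The recursive least squares variant of adaptive MOS can be sketched as follows: a linear correction u_obs ≈ a·u_nwp + b is updated one observation at a time with a forgetting factor. The correction structure, forgetting factor, and synthetic data are illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam,
    updating coefficient estimate theta and covariance-like matrix P."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)          # gain vector
    theta = theta + (k * (y - x.T @ theta)).ravel()
    P = (P - k @ x.T @ P) / lam
    return theta, P

# Adaptively correct NWP wind speed toward measurements: u_obs ~ a*u_nwp + b.
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1000.0
for _ in range(500):
    u_nwp = rng.uniform(2, 14)
    u_obs = 0.85 * u_nwp + 1.1 + rng.normal(0, 0.1)
    theta, P = rls_update(theta, P, np.array([u_nwp, 1.0]), u_obs)
print("estimated [a, b]:", np.round(theta, 2))
```

    The forgetting factor is what lets the correction track the time-dependent NWP bias the abstract describes.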

  13. Historical tephra-stratigraphy of the Cosiguina Volcano (Western Nicaragua)

    International Nuclear Information System (INIS)

    Hradecky, Petr; Rapprich, Vladislav

    2008-01-01

    New detailed geological field studies and 14C dating of the Cosiguina Volcano (westernmost Nicaragua) have allowed us to reconstruct a geological map of the volcano and to establish a recent stratigraphy, including three historical eruptions. Five major sequences are represented. I: pyroclastic flows around 1500 AD; II: pyroclastic flows, scoria and pumice flows and surges; III: pyroclastic deposits related to a littoral crater; IV: pyroclastic flows related to the 1709 AD eruption; and finally, V: pyroclastic deposits corresponding to the cataclysmic 1835 AD phreatic, phreatomagmatic and subplinian eruption, which seems to have been relatively small-scale in comparison with the preceding historical eruptions. The pulsating geochemical character of the pyroclastic rocks over the last five centuries has been documented: the beginning of every eruption is marked by increasing contents of silica and Zr. Based on this, and regardless of the present-day volcanic repose, the entire Cosiguina Peninsula should be considered a very hazardous volcanic area. (author)

  14. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    For the past 30 years the problem of bankruptcy prediction has been thoroughly studied, yet from the paper of Altman in 1968 to the papers of the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (the firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  15. Short-Term Wind Speed Prediction Using EEMD-LSSVM Model

    Directory of Open Access Journals (Sweden)

    Aiqing Kang

    2017-01-01

    Full Text Available A hybrid of Ensemble Empirical Mode Decomposition (EEMD) and the Least Squares Support Vector Machine (LSSVM) is proposed to improve short-term wind speed forecasting precision. The EEMD is first used to decompose the original wind speed time series into a set of subseries. LSSVM models are then established to forecast these subseries. The partial autocorrelation function is used to analyze the inner relationships within the historical wind speed series in order to determine the input variables of the LSSVM model for each subseries. Finally, the superposition principle is employed to sum the predicted values of every subseries into the final wind speed prediction. The performance of the hybrid model is evaluated on six metrics. Compared with LSSVM, Back Propagation Neural Networks (BP), Auto-Regressive Integrated Moving Average (ARIMA), the combination of Empirical Mode Decomposition (EMD) with LSSVM, and the hybrid of EEMD with ARIMA, the forecasting results show that the proposed hybrid model outperforms these models on all six metrics. Furthermore, scatter diagrams of predicted versus actual wind speed and histograms of prediction errors are presented to verify the superiority of the hybrid model in short-term wind speed prediction.
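The decompose-predict-superpose pipeline can be sketched as follows. A faithful EEMD and LSSVM would be lengthy, so they are replaced here by deliberately simple stand-ins (a moving-average split and a one-lag least-squares predictor), and the wind speed values are invented:

```python
# Structural sketch of the hybrid pipeline: decompose the series into
# subseries, fit one predictor per subseries, then sum the forecasts.
# The moving-average split stands in for EEMD; the one-lag linear fit
# stands in for an LSSVM. Both substitutions are assumptions for brevity.

def decompose(series, window=3):
    """Split the series into a smooth 'trend' subseries and a residual."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return [trend, residual]

def fit_one_lag(sub):
    """Least-squares fit of x[t] ~ a * x[t-1] (stand-in for an LSSVM)."""
    num = sum(sub[t] * sub[t - 1] for t in range(1, len(sub)))
    den = sum(sub[t - 1] ** 2 for t in range(1, len(sub)))
    return num / den if den else 0.0

def forecast_next(series):
    subs = decompose(series)
    # Superposition principle: sum the one-step forecasts of every subseries.
    return sum(fit_one_lag(s) * s[-1] for s in subs)

wind = [5.0, 5.5, 6.1, 5.8, 6.4, 6.9, 6.6, 7.1]   # invented wind speeds (m/s)
print(round(forecast_next(wind), 3))
```

In the paper's setting, the partial autocorrelation function would choose how many lags each subseries predictor sees; the stand-in above fixes one lag for simplicity.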

  16. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    their low power requirements, are relatively cheap and are environment friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.

  17. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means of carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates; limited attention, however, has been directed to the consequences of those choices. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
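The weights-versus-locations distinction can be made concrete with a toy two-component mixture predictive density in which a continuous covariate x enters either the weights or the locations. All numerical choices below (logistic link, unit variances, component means) are invented for illustration:

```python
# Toy contrast of the two covariate-dependence strategies for a mixture
# predictive density p(y | x). Both models coincide at x = 0 but can
# diverge sharply for other covariate values.

import math

def normal_pdf(y, mu, sd=1.0):
    return math.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def density_weights_depend(y, x):
    # Covariate in the weights: the mixture weight shifts with x (logistic link).
    w = 1.0 / (1.0 + math.exp(-2.0 * x))
    return w * normal_pdf(y, -1.0) + (1 - w) * normal_pdf(y, 1.0)

def density_locations_depend(y, x):
    # Covariate in the locations: the component means shift with x.
    return 0.5 * normal_pdf(y, -1.0 + x) + 0.5 * normal_pdf(y, 1.0 + x)

for x in (0.0, 2.0):
    print(round(density_weights_depend(1.0, x), 4),
          round(density_locations_depend(1.0, x), 4))
```

Evaluating both densities at the same point shows identical values at x = 0 and a large gap at x = 2, which is the kind of predictive divergence the article studies.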

  18. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises operating in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. To improve BI implementation and ensure its success, enterprises need to manage effectively the critical attributes that determine BISE and to develop prediction models with a set of rules for self-evaluating the effectiveness of their BI solutions. The main findings identify the critical prediction indicators of BISE that are important for forecasting BI performance, and highlight five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis. These results can enable enterprises to improve BISE while effectively managing BI solution implementation, and cater to academics to whom theory is important.

  19. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    To predict the incidence of local malaria in Hubei Province by applying the autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was used to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1) (1, 1, 0) 12 model was identified as the optimal one, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data fell within the 95% CI of the values predicted by the model, and the prediction effect of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
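The differencing-plus-autoregression idea behind such a model can be sketched in pure Python with an ARIMA(1,1,0)-style one-step forecast. The incidence numbers are invented, and a real analysis (like this study's, which used SPSS) would fit the full seasonal ARIMA by maximum likelihood rather than the simple least-squares estimate below:

```python
# Illustrative ARIMA(1,1,0)-style forecast: difference once to remove
# trend (the "I" part), fit an AR(1) on the differences, then undo the
# differencing for a one-step-ahead forecast of the level.

def arima_110_forecast(series):
    # I(1): work on first differences.
    d = [series[i] - series[i - 1] for i in range(1, len(series))]
    # AR(1) on the differences: d[t] ~ phi * d[t-1], least-squares phi.
    num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
    den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
    phi = num / den if den else 0.0
    # Undo the differencing for the level forecast.
    return series[-1] + phi * d[-1]

incidence = [12, 15, 19, 18, 22, 25, 24, 28]   # invented monthly case counts
print(round(arima_110_forecast(incidence), 2))
```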

  20. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare, using the Model Confidence Set procedure, the predictive capacity of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence shows that, in general, the competing models are highly homogeneous in their predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
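The common core of the ARCH-family variants compared here is a conditional-variance recursion; a GARCH(1,1) filter is a minimal example. The parameter values and return series below are illustrative, not estimates from the Bovespa or Dow Jones data:

```python
# Sketch of the GARCH(1,1) conditional-variance recursion:
#   sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]
# Parameters are assumed, not fitted; a real study estimates them by
# maximum likelihood under each candidate error distribution.

def garch11_variances(returns, omega=0.00001, alpha=0.1, beta=0.85):
    # Initialize with the sample variance of the returns.
    mean = sum(returns) / len(returns)
    sigma2 = [sum((r - mean) ** 2 for r in returns) / len(returns)]
    for t in range(1, len(returns)):
        sigma2.append(omega + alpha * returns[t - 1] ** 2 + beta * sigma2[-1])
    return sigma2

rets = [0.01, -0.02, 0.015, -0.03, 0.005, 0.02]   # invented log-returns
vols = garch11_variances(rets)
print([round(v, 6) for v in vols])
```

The Model Confidence Set procedure would then compare the out-of-sample variance forecasts of several such recursions (and distributions) and retain the statistically indistinguishable best set.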

  1. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population are strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985 to 2005, with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models in dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
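The "simple linear prediction model based on the predicted Niño3.4 indices" used as the benchmark can be sketched as ordinary least squares with a single predictor. The index and rainfall-anomaly values here are invented for illustration:

```python
# One-predictor OLS benchmark: regress the Kiremt rainfall anomaly on the
# May-predicted Niño3.4 index. All data values are made up; the expected
# negative slope reflects the ENSO-Kiremt teleconnection described above.

def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx   # slope and intercept

nino34 = [-1.2, -0.5, 0.0, 0.8, 1.5]     # invented predicted Niño3.4 anomalies
kiremt = [0.6, 0.3, 0.1, -0.4, -0.9]     # invented Kiremt rainfall anomalies
slope, intercept = ols_fit(nino34, kiremt)
print(round(intercept + slope * 1.0, 3))  # forecast for a +1.0 index year
```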

  2. Prediction of lithium-ion battery capacity with metabolic grey model

    International Nuclear Information System (INIS)

    Chen, Lin; Lin, Weilong; Li, Junzi; Tian, Binbin; Pan, Haihong

    2016-01-01

    Given the popularity of lithium-ion batteries in EVs (electric vehicles), predicting the capacity quickly and accurately throughout a battery's full lifetime is still a challenging issue for ensuring the reliability of EVs. This paper proposes an approach to predicting the capacity as it varies with discharge cycles based on metabolic grey theory, and considers issues from two perspectives: 1) three metabolic grey models are presented, including the MGM (metabolic grey model), MREGM (metabolic residual-error grey model), and MMREGM (metabolic Markov-residual-error grey model); 2) the universality of these models is explored under different conditions (such as various discharge rates and temperatures). The research findings demonstrate excellent prediction performance for the three models, although the precision of the MREGM model is inferior to that of the others. We therefore conclude that the MGM and MMREGM models have excellent performance in predicting the capacity under a variety of load conditions, even when few data points are used for modeling. The universality of the metabolic grey prediction theory is also verified by predicting the capacity of batteries under different discharge rates and temperatures. - Highlights: • The metabolic mechanism is introduced in a grey system for capacity prediction. • Three metabolic grey models are presented and studied. • The universality of these models under different conditions is assessed. • Only a few data points are required for predicting the capacity with these models.
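The "metabolic" mechanism can be sketched with a classical GM(1,1) forecast on a sliding window: fit, forecast one step, then drop the oldest point and append the newest observation. The capacity values are invented, and the plain GM(1,1) below stands in for the paper's richer MGM/MREGM/MMREGM variants:

```python
# Sketch of a metabolic GM(1,1): classical grey forecasting on a window
# that is "metabolized" after each step. Capacity values are illustrative.

import math

def gm11_next(x0):
    """One-step-ahead GM(1,1) forecast from the positive series x0."""
    n = len(x0)
    # 1-AGO accumulation.
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # Background values z and least-squares estimates of a, b in
    # the whitened equation dx1/dt + a*x1 = b.
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]
    m = n - 1
    sz, szz = sum(z), sum(zi * zi for zi in z)
    sy = sum(x0[1:])
    szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function, then inverse accumulation for the forecast.
    x1_hat = (x0[0] - b / a) * math.exp(-a * n) + b / a
    x1_prev = (x0[0] - b / a) * math.exp(-a * (n - 1)) + b / a
    return x1_hat - x1_prev

capacity = [1.00, 0.97, 0.95, 0.92, 0.90, 0.88]   # invented normalized capacity
window = capacity[:4]
for actual in capacity[4:]:
    print(round(gm11_next(window), 4), actual)
    window = window[1:] + [actual]   # metabolic update: drop oldest, add newest
```

The metabolic update is what lets the model track a drifting degradation trend with only a handful of recent points, which matches the paper's claim that few data points suffice.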

  3. Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.

    Science.gov (United States)

    Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander

    2017-11-01

    Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A Grey NGM(1,1,k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Directory of Open Access Journals (Sweden)

    Xiaojun Guo

    2014-01-01

    Full Text Available Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although several prediction techniques exist, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model. The traditional grey model's weakness of sensitivity to the initial value is overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used to demonstrate the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.

  5. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.

    2005-07-26

    A workshop was conducted on November 18–19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  6. Evaluating process origins of sand-dominated fluvial stratigraphy

    Science.gov (United States)

    Chamberlin, E.; Hajek, E. A.

    2015-12-01

    Sand-dominated fluvial stratigraphy is often interpreted as indicating times of relatively slow subsidence because of the assumption that fine sediment (silt and clay) is reworked or bypassed during periods of low accommodation. However, sand-dominated successions may instead represent proximal, coarse-grained reaches of paleo-river basins and/or fluvial systems with a sandy sediment supply. Differentiating between these cases is critical for accurately interpreting mass-extraction profiles, basin-subsidence rates, and paleo-river avulsion and migration behavior from ancient fluvial deposits. We explore the degree to which sand-rich accumulations reflect supply-driven progradation or accommodation-limited reworking, by re-evaluating the Castlegate Sandstone (Utah, USA) and the upper Williams Fork Formation (Colorado, USA) - two Upper Cretaceous sandy fluvial deposits previously interpreted as having formed during periods of relatively low accommodation. Both units comprise amalgamated channel and bar deposits with minor intra-channel and overbank mudstones. To constrain relative reworking, we quantify the preservation of bar deposits in each unit using detailed facies and channel-deposit mapping, and compare bar-deposit preservation to expected preservation statistics generated with object-based models spanning a range of boundary conditions. To estimate the grain-size distribution of paleo-sediment input, we leverage results of experimental work that shows both bed-material deposits and accumulations on the downstream side of bars ("interbar fines") sample suspended and wash loads of active flows. We measure grain-size distributions of bar deposits and interbar fines to reconstruct the relative sandiness of paleo-sediment supplies for both systems. By using these novel approaches to test whether sand-rich fluvial deposits reflect river systems with accommodation-limited reworking and/or particularly sand-rich sediment loads, we can gain insight into large

  7. Volcano stratigraphy interpretation of Mamuju area based on Landsat-8 imagery analysis

    International Nuclear Information System (INIS)

    Frederikus Dian Indrastomo; I Gde Sukadana; Dhatu Kamajati; Asep Saepuloh; Agus Handoyo Harsolumakso

    2015-01-01

    Mamuju and its surrounding area are constructed mainly of volcanic rocks. Volcanoclastic sedimentary rocks and limestones lie above the volcanic rocks. Volcanic activity created some unique morphologies, such as craters, lava domes, and pyroclastic flow paths, as volcanic products. These products were identified from their circular features on Landsat-8 imagery. After geometric and atmospheric corrections had been applied, a visual interpretation of the Landsat-8 imagery was conducted to identify the structural, geomorphological, and geological conditions of the area. Regional geological structures trend southeast–northwest, which affects the formation of Adang volcano. The geomorphology of the area is classified into 16 geomorphological units based on their genetic aspects, i.e., Sumare fault block ridge, Mamuju cuesta ridge, Adang eruption crater, Labuhan Ranau eruption crater, Sumare eruption crater, Ampalas volcanic cone, Adang lava dome, Labuhan Ranau intrusion hill, Adang pyroclastic flow ridge, Sumare pyroclastic flow ridge, Adang volcanic remnant hills, Malunda volcanic remnant hills, Talaya volcanic remnant hills, Tapalang karst hills, Mamuju alluvium plains, and Karampuang reef terrace plains. Based on the Landsat-8 imagery interpretation and field confirmation, the geology of the Mamuju area is divided into volcanic and sedimentary rocks. There are two groups of volcanic rocks, the Talaya complex and the Mamuju complex. The Talaya complex consists of Mambi, Malunda, and Kalukku volcanic rocks of andesitic composition, while the Mamuju complex consists of Botteng, Ahu, Tapalang, Adang, Ampalas, Sumare, and Labuhan Ranau volcanic rocks of andesite to leucitic basalt composition. The volcano stratigraphy of the Mamuju area was constructed based on its structure, geomorphology and lithology distribution analysis, and is classified into Khuluk Talaya and Khuluk Mamuju. The Khuluk Talaya consists of Gumuk Mambi, Gumuk

  8. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  9. Integrated stratigraphy and astronomical tuning of Smirra cores, lower Eocene, Umbria-Marche basin, Italy.

    Science.gov (United States)

    Lauretano, Vittoria; Turtù, Antonio; Hilgen, Frits; Galeotti, Simone; Catanzariti, Rita; Reichart, Gert Jan; Lourens, Lucas J.

    2016-04-01

    The early Eocene represents an ideal case study for analysing the impact of increased global warming on the ocean-atmosphere system. During this time interval, the Earth's surface experienced a long-term warming trend that culminated in a period of sustained high temperatures called the Early Eocene Climatic Optimum (EECO). These perturbations of the ocean-atmosphere system involved the global carbon cycle and global temperatures, and have been linked to orbital forcing. Unravelling this complex climatic system depends strictly on the availability of high-quality geological records and accurate age models. However, discrepancies between the astrochronological and radioisotopic dating techniques complicate the development of a robust time scale for the early Eocene (49-54 Ma). Here we present the first magneto-, bio-, chemo- and cyclostratigraphic results from the drilling of the land-based Smirra section, in the Umbria-Marche Basin. The sediments recovered at Smirra provide a remarkably well-preserved and undisturbed succession of the early Palaeogene pelagic stratigraphy. Bulk stable carbon isotope and X-Ray Fluorescence (XRF) scanning records are employed to construct an astronomically tuned age model for the time interval between ~49 and ~54 Ma, based on tuning to the long-eccentricity cycle. These results are then compared to the astronomical tuning of the benthic carbon isotope record of ODP Site 1263 to evaluate the different age model options and to improve the early Eocene time scale by assessing the precise number of eccentricity-related cycles comprised in this critical interval.

  10. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at a limited number of locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models of seabed hardness using random forest (RF), based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested on the basis of predictive accuracy. The effects of highly correlated, important, and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach of pre-selecting predictive variables by excluding highly correlated variables needs to be re-examined; 4) identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
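The averaged variable importance (AVI) idea can be sketched as permutation importance computed several times and averaged per predictor. To stay self-contained, a leave-one-out 1-nearest-neighbour classifier stands in for the random forest, and the two-feature dataset is invented:

```python
# Sketch of averaged variable importance: shuffle one feature column at a
# time, measure the accuracy drop, repeat, and average. The 1-NN model and
# the dataset are stand-ins, not the study's random forest or seabed data.

import random

def knn1_accuracy(X, y):
    """Leave-one-out 1-nearest-neighbour accuracy."""
    correct = 0
    for i, xi in enumerate(X):
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(xi, X[k])))
        correct += y[j] == y[i]
    return correct / len(X)

def averaged_importance(X, y, repeats=20, seed=0):
    rng = random.Random(seed)
    base = knn1_accuracy(X, y)
    avi = []
    for f in range(len(X[0])):
        drops = []
        for _ in range(repeats):
            col = [row[f] for row in X]
            rng.shuffle(col)
            Xp = [row[:f] + [v] + row[f + 1:] for row, v in zip(X, col)]
            drops.append(base - knn1_accuracy(Xp, y))
        avi.append(sum(drops) / repeats)
    return avi

# Feature 0 separates the classes; feature 1 is pure noise.
X = [[0.10, 0.5], [0.20, 0.1], [0.15, 0.9], [0.90, 0.4], [1.00, 0.2], [0.95, 0.7]]
y = [0, 0, 0, 1, 1, 1]
print(averaged_importance(X, y))
```

The informative feature should receive a clearly larger averaged importance than the noise feature, which is the ranking an FS method like AVI then uses to prune predictors.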

  11. Robust predictions of the interacting boson model

    International Nuclear Information System (INIS)

    Casten, R.F.; Koeln Univ.

    1994-01-01

    While most recognized for its symmetries and algebraic structure, the IBA model has other less well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these "robust" predictions and compares them with the data.

  12. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  13. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD-results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties
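The reported uncertainty reduction can be illustrated with the simplest possible data-assimilation step: combining a computed value and a measured value by inverse-variance weighting. The sodium temperature numbers are invented, and the actual Cacuci and Ionescu-Bujor procedure is far more general (correlated parameters, full covariance matrices):

```python
# Minimal data-assimilation sketch: the precision-weighted combination of
# a model prediction and an experiment always has a smaller variance than
# either input, mirroring the abstract's "more precise than the original
# computations and experiments" result.

def assimilate(x_model, var_model, x_exp, var_exp):
    w = var_exp / (var_model + var_exp)            # weight on the model value
    x_best = w * x_model + (1 - w) * x_exp
    var_best = var_model * var_exp / (var_model + var_exp)
    return x_best, var_best

# Invented sodium outlet temperature (degC): CFD prediction vs. measurement.
x, v = assimilate(550.0, 25.0, 545.0, 16.0)
print(round(x, 2), round(v, 2))
```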

  14. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a 'yes' or 'no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.

  15. In silico modeling to predict drug-induced phospholipidosis

    International Nuclear Information System (INIS)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G.; Sadrieh, Nakissa

    2013-01-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications for the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and an opportunity to analyze the properties and structures of drugs with histopathologic findings of DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches showed promise with a large dataset of drugs, but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis and new algorithms and predictive technologies; in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the models have high sensitivity and specificity, in most cases ≥ 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.
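The performance figures quoted above (concordance, sensitivity, specificity, positive/negative predictivity) are standard confusion-matrix quantities, sketched here with an invented external-validation tally chosen so the numbers land near the reported ranges:

```python
# Standard classification metrics from a confusion matrix. The counts are
# hypothetical, not the study's actual validation results.

def classification_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    concordance = (tp + tn) / (tp + tn + fp + fn)
    ppv = tp / (tp + fp)                    # positive predictivity
    npv = tn / (tn + fn)                    # negative predictivity
    return sensitivity, specificity, concordance, ppv, npv

sens, spec, conc, ppv, npv = classification_metrics(tp=40, tn=41, fp=9, fn=10)
print(f"concordance={conc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```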

  16. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.
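The robust rule described in the abstract reduces to a max-min choice: among candidate stock allocations, pick the one maximizing the minimal expected utility in its confidence set. The allocation grid and utility sets below are invented for illustration:

```python
# Max-min selection over confidence sets of expected utility. A wide set
# (high model uncertainty) drags down an allocation's minimal element, so
# the robust investor gravitates toward low-uncertainty allocations.

def robust_allocation(utility_sets):
    """utility_sets maps a stock share to the confidence set (list) of its
    expected utilities; return the share maximizing the minimal element."""
    return max(utility_sets, key=lambda share: min(utility_sets[share]))

# Invented confidence sets of expected utility for three stock shares.
sets = {
    0.0: [0.020, 0.020],          # cash only: no return-model uncertainty
    0.5: [-0.010, 0.025, 0.060],
    1.0: [-0.080, 0.030, 0.110],  # widest set: most model uncertainty
}
print(robust_allocation(sets))    # -> 0.0
```

Even though the all-stock allocation contains the highest single expected utility (0.110), its minimal element is the worst, so the max-min rule prefers the safe allocation, mirroring the paper's finding that the robust investor holds fewer stocks.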

  17. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the nearabsence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  18. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  19. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
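The classification scheme described above can be sketched with toy data: one first-order Markov chain is trained per structure class, and an unknown segment is labeled by the class whose chain assigns it the highest log-likelihood. The smoothing scheme and training sequences below are our assumptions, not the paper's, and the combinatorial post-processing it describes is omitted:

```python
# Minimal sketch of per-class Markov-chain classification of protein segments.
import math
from collections import defaultdict

def train_chain(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    # Log-probabilities with add-one smoothing over the observed alphabet.
    alphabet = set(c for s in sequences for c in s)
    probs = {}
    for a in alphabet:
        total = sum(counts[a].values()) + len(alphabet)
        probs[a] = {b: math.log((counts[a][b] + 1) / total) for b in alphabet}
    return probs, alphabet

def loglik(chain, seq):
    probs, _alphabet = chain
    floor = math.log(1e-6)  # penalty for symbols/transitions unseen in training
    return sum(probs.get(a, {}).get(b, floor) for a, b in zip(seq, seq[1:]))

def classify(segment, chains):
    return max(chains, key=lambda label: loglik(chains[label], segment))

# Toy training data: "helices" rich in A/L transitions, "sheets" in V/I.
chains = {
    "helix": train_chain(["ALALALAL", "LALAALLA"]),
    "sheet": train_chain(["VIVIVIVI", "IVVIIVIV"]),
}
print(classify("ALAL", chains))
```

The directional information the authors exploit lives in the asymmetry of the transition matrix: P(a→b) need not equal P(b→a).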

  20. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  1. Predictive modeling of neuroanatomic structures for brain atrophy detection

    Science.gov (United States)

    Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming

    2010-03-01

    In this paper, we present an approach of predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and Regions of Interest (ROIs) on it are predicted from other reliable anatomies. The pair-wise distance between the predicted vertex and the true one within an abnormal region is expected to be larger than that for a vertex in a normal brain region. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated using both simulated atrophies and MRI images of Alzheimer's disease.

  2. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

    Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.
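The Hedge's g effect size reported above measures, in pooled-standard-deviation units, how far apart the predicted log-odds are for safe drinkers who did and did not progress. A sketch of the computation with illustrative values, not study data:

```python
# Sketch of Hedges' g: Cohen's d with a small-sample bias correction.
import math

def hedges_g(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                 # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample bias correction
    return d * correction

progressors = [-1.0, -0.5, 0.0, 0.5, 1.0]        # hypothetical predicted log-odds
non_progressors = [-2.5, -2.0, -1.5, -1.0, -0.5]
print(round(hedges_g(progressors, non_progressors), 2))
```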

  3. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

    Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  4. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Full Text Available Predicting high-risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the developing and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed good performance.
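Harrell's C-statistic, used above to quantify discrimination, can be sketched for censored follow-up data: among usable pairs of subjects, it is the fraction where the higher predicted risk belongs to the subject who developed the event earlier. Toy data below, not the cohort:

```python
# Sketch of Harrell's C for right-censored survival data.

def harrells_c(times, events, risks):
    concordant = tied = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable if subject i had the event before subject j's time.
            if events[i] and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / usable

times  = [2, 4, 6, 8, 10]            # hypothetical years of follow-up
events = [1, 1, 0, 1, 0]             # 1 = gastric cancer observed, 0 = censored
risks  = [0.9, 0.4, 0.7, 0.5, 0.1]   # hypothetical predicted risk scores
print(harrells_c(times, events, risks))
```

A value of 0.5 is chance-level ranking; the 0.764 (men) and 0.706 (women) reported above indicate moderately good discrimination.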

  5. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces uncertainties in predicted model responses and parameters. • PMCMPS efficiently treats very large coupled systems. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology takes into account fully the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses for both multi-physics models. This “maximum entropy”-approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values for the multi-physics models parameters and responses along with corresponding reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially.
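The Bayesian step the abstract describes can be sketched generically (our notation, not the paper's):

```latex
% Generic sketch, not the paper's exact notation: a maximum-entropy prior over
% the joint parameters \alpha and responses r of both coupled models, updated
% with the multi-physics likelihood via Bayes' theorem; the posterior is then
% evaluated by the saddle-point method to obtain best-estimate values and
% reduced covariances.
p(\alpha, r \mid \mathrm{data}) \;\propto\;
  p_{\mathrm{max\text{-}ent}}(\alpha, r)\;
  p(\mathrm{data} \mid \alpha, r)
```

The sequential treatment claimed above then amounts to factorizing this joint update system by system while carrying the coupling terms through the prior moments.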

  6. Comparison of the models of financial distress prediction

    Directory of Open Access Journals (Sweden)

    Jiří Omelka

    2013-01-01

    Full Text Available Prediction of financial distress generally approximates whether a business entity is close to bankruptcy, or at least to serious financial problems. Financial distress is defined as a situation in which a company is not able to satisfy its liabilities in any form, or in which its liabilities exceed its assets. Classification of the financial situation of business entities represents a multidisciplinary scientific issue that draws not only on economic theory but also on statistical and econometric approaches. The first models of financial distress prediction originated in the sixties of the 20th century. One of the best known is Altman’s model, followed by a range of others constructed on more or less comparable bases. In many existing models it is possible to find common elements which could be marked as elementary indicators of potential financial distress of a company. The objective of this article is, based on a comparison of existing models of prediction of financial distress, to define a set of basic indicators of a company’s financial distress while jointly identifying their critical aspects. The sample defined in this way will be the background for future research focused on the determination of a one-dimensional model of financial distress prediction which would subsequently become the basis for the construction of a multi-dimensional prediction model.

  7. A model for predicting lung cancer response to therapy

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.

    2007-01-01

    Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients who were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, in which the tumor response relationships were more complex, the model had less information, and the predictors were more varied. 
The optimal days for measuring the tumor response on the MVCT images were on elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during
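The memory-based LWR idea can be sketched as a kernel-weighted average over stored response curves, weighting historical lesions by similarity to the current lesion's early observations. The data and bandwidth below are hypothetical, and the real model also produces confidence intervals, which this sketch omits:

```python
# Sketch of memory-based locally weighted regression (LWR) for response curves.
import math

def lwr_predict(query_early, memory, bandwidth=0.1):
    """memory: list of (early_observations, final_volume) pairs."""
    num = den = 0.0
    for early, final in memory:
        dist2 = sum((a - b) ** 2 for a, b in zip(query_early, early))
        w = math.exp(-dist2 / (2 * bandwidth ** 2))  # Gaussian kernel weight
        num += w * final
        den += w
    return num / den

# Stored cases: relative tumor volumes in weeks 1-2, and final relative volume.
memory = [
    ([1.00, 0.95], 0.60),
    ([1.00, 0.90], 0.45),
    ([1.00, 0.85], 0.30),
]
print(round(lwr_predict([1.00, 0.90], memory), 3))
```

Being nonparametric, the prediction improves as more similar cases enter the memory, which matches the leave-one-out evaluation described above.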

  8. Seismic stratigraphy and regional unconformity analysis of Chukchi Sea Basins

    Science.gov (United States)

    Agasheva, Mariia; Karpov, Yury; Stoupakova, Antonina; Suslova, Anna

    2017-04-01

    ) progressed from south to north, indicating that the source area was the Wrangel-Herald arch. Horizon LCU lies on a chaotic-reflectance basement sequence in the South Chukchi profiles, matching the geological structure of the Hope basin, Alaska. Cretaceous and Paleogene strata are divided by the Mid-Brooks unconformity, which was accompanied by intensive uplift and erosion. The Paleogene sequence is characterized by high thickness in the North Chukchi basin in comparison with the Hanna Trough and North Slope basins. Thick prograding Paleogene clinoform units of various geometries, angles, and trajectories are observed in the North Chukchi basin. Thick clinoform sequences could have formed as a result of significant subsidence followed by rapid sedimentary influx. This model assumes that the North Chukchi basin could be more affected by the Cenozoic tectonics of Eurasia Basin rifting. Complementary studies will involve careful mapping of clinoform types, in combination with sequence stratigraphy analyses, to identify the depositional environment and the distribution of source rocks and reservoirs. [1] Moore, T.E., Wallace, W.K., Bird, K.J., Karl, S.M., Mull, C.G. & Dillon, J.T. (1994) Geology of northern Alaska. In: The Geology of Alaska (Ed. by G. Plafker & H.C. Berg), Geol. Soc. Am., Geol. North America, G-1, 49-140.

  9. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. 
Indeed, the initial conditions and the rheological parameters can be good enough

  10. T-R Cycle Characterization and Imaging: Advanced Diagnostic Methodology for Petroleum Reservoir and Trap Detection and Delineation

    Energy Technology Data Exchange (ETDEWEB)

    Ernest A. Mancini

    2006-08-30

    Characterization of stratigraphic sequences (T-R cycles or sequences) included outcrop studies, well log analysis and seismic reflection interpretation. These studies were performed by researchers at the University of Alabama, Wichita State University and McGill University. The outcrop, well log and seismic characterization studies were used to develop a depositional sequence model, a T-R cycle (sequence) model, and a sequence stratigraphy predictive model. The sequence stratigraphy predictive model developed in this study is based primarily on the modified T-R cycle (sequence) model. The T-R cycle (sequence) model using transgressive and regressive systems tracts and aggrading, backstepping, and infilling intervals or sections was found to be the most appropriate sequence stratigraphy model for the strata in the onshore interior salt basins of the Gulf of Mexico to improve petroleum stratigraphic trap and specific reservoir facies imaging, detection and delineation. The known petroleum reservoirs of the Mississippi Interior and North Louisiana Salt Basins were classified using T-R cycle (sequence) terminology. The transgressive backstepping reservoirs have been the most productive of oil, and the transgressive backstepping and regressive infilling reservoirs have been the most productive of gas. Exploration strategies were formulated using the sequence stratigraphy predictive model and the classification of the known petroleum reservoirs utilizing T-R cycle (sequence) terminology. The well log signatures and seismic reflector patterns were determined to be distinctive for the aggrading, backstepping and infilling sections of the T-R cycle (sequence) and as such, well log and seismic data are useful for recognizing and defining potential reservoir facies. The use of the sequence stratigraphy predictive model, in combination with the knowledge of how the distinctive characteristics of the T-R system tracts and their subdivisions are expressed in well log patterns

  11. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  12. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...... values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  13. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared...... on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We...... demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models....

  14. Modeling pitting growth data and predicting degradation trend

    International Nuclear Information System (INIS)

    Viglasky, T.; Awad, R.; Zeng, Z.; Riznic, J.

    2007-01-01

    A non-statistical modeling approach to predict material degradation is presented in this paper. In this approach, the original data series is processed using the Accumulated Generating Operation (AGO). With the aid of the AGO, which weakens the random fluctuation embedded in the data series, an approximately exponential curve is established. The generated data series described by the exponential curve is then modeled by a differential equation. The coefficients of the differential equation can be deduced by an approximate difference formula based on a least-squares algorithm. By solving the differential equation and processing an inverse AGO, a predictive model can be obtained. As this approach is not established on the basis of statistics, the prediction can be performed with a limited amount of data. Implementation of this approach is demonstrated by predicting the pitting growth rate in specimens and the wear trend in steam generator tubes. The analysis results indicate that this approach provides a powerful tool with reasonable precision to predict material degradation. (author)
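The AGO-based scheme described above corresponds closely to a GM(1,1)-style grey model; a sketch under that assumption, with illustrative data rather than real pitting measurements:

```python
# Sketch of a GM(1,1)-style grey prediction model: accumulate the series (AGO),
# fit the whitened differential equation dx1/dt + a*x1 = b by least squares,
# solve it, then difference (inverse AGO) to forecast the original series.
import math

def gm11_forecast(x0, steps):
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]                # AGO: cumulative series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    # Least-squares fit of x0(k) = -a*z1(k) + b (normal equations, 2 unknowns).
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    m = n - 1
    det = szz * m - sz * sz
    a = -(szy * m - sz * sy) / det
    b = (szz * sy - sz * szy) / det
    # Solution of the whitened equation, then inverse AGO for the forecasts.
    x1_hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a for k in range(n + steps)]
    return [x1_hat[k] - x1_hat[k - 1] for k in range(n, n + steps)]

pit_depth_increments = [2.0, 2.2, 2.42, 2.662]  # illustrative ~10%-growth series
print(gm11_forecast(pit_depth_increments, steps=1))
```

Because the fit needs only a handful of points, the method works with the limited data the abstract mentions, at the cost of assuming near-exponential behavior.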

  15. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was models. Only 2 models presented recommended regression equations. There was significant heterogeneity in discriminative ability of models with respect to age (P prediction models that had sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Mathematical model for dissolved oxygen prediction in Cirata ...

    African Journals Online (AJOL)

    This paper presents the implementation and performance of a mathematical model to predict the concentration of dissolved oxygen in Cirata Reservoir, West Java, by using an Artificial Neural Network (ANN). The simulation program was created using Visual Studio 2012 C# software with the ANN model implemented in it. Prediction ...

  17. Geochronology and subsurface stratigraphy of Pukapuka and Rakahanga atolls, Cook Islands: Late Quaternary reef growth and sea level history

    Science.gov (United States)

    Gray, S.C.; Hein, J.R.; Hausmann, R.; Radtke, U.

    1992-01-01

    Eustatic sea-level cycles superposed on thermal subsidence of an atoll produce layers of high sea-level reefs separated by erosional unconformities. Coral samples from these reefs from cores drilled to 50 m beneath the lagoons of Pukapuka and Rakahanga atolls, northern Cook Islands, give electron spin resonance (ESR) and U-series ages ranging from the Holocene to 600,000 yr B.P. Subgroups of these ages and the stratigraphic position of their bounding unconformities define at least 5 periods of reef growth and high sea-level (0-9000 yr B.P., 125,000-180,000 yr B.P., 180,000-230,000 yr B.P., 300,000-460,000 yr B.P., 460,000-650,000 yr B.P.). Only two ages fall within error of the last interglacial high sea-level stand (~125,000-135,000 yr B.P.). This paucity of ages may result from extensive erosion of the last interglacial reef. In addition, post-depositional isotope exchange may have altered the true ages of three coral samples to apparent ages that fall within glacial stage 6. For the record to be preserved, vertical accretion during rising sea-level must compensate for surface lowering from erosion during sea-level lowstands and subsidence of the atoll; erosion rates (6-63 cm/1000 yr) can therefore be calculated from reef accretion rates (100-400 cm/1000 yr), subsidence rates (2-6 cm/1000 yr), and the duration of island submergence (8-15% of the last 600,000 yr). The stratigraphy of coral ages indicates island subsidence rates of 4.5 ± 2.8 cm/1000 yr for both islands. A model of reef growth and erosion based on the stratigraphy of the Cook Islands atolls suggests average subsidence and erosion rates of between 3-6 and 15-20 cm/1000 yr, respectively. © 1992.
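One plausible reading of the mass balance behind the quoted rates (our formulation, not necessarily the authors' exact equation): accretion A during the submerged fraction f of time must offset erosion E during the exposed remainder plus subsidence S, i.e. A·f ≈ E·(1 − f) + S:

```python
# Hedged sketch of the implied mass balance (our formulation, illustrative only):
# A*f ≈ E*(1 - f) + S  =>  E ≈ (A*f - S) / (1 - f).  Units: cm per 1000 yr.

def implied_erosion(accretion, subsidence, submerged_fraction):
    return (accretion * submerged_fraction - subsidence) / (1 - submerged_fraction)

lo = implied_erosion(accretion=100, subsidence=6, submerged_fraction=0.08)
hi = implied_erosion(accretion=400, subsidence=2, submerged_fraction=0.15)
print(round(lo, 1), round(hi, 1))
```

With the quoted input ranges, this balance yields erosion rates of roughly the same order as the 6-63 cm/1000 yr stated above, which is why we offer it as a plausible reconstruction.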

  18. The stratigraphy and geochemistry of the Klipriviersberg and Platberg groups of the Ventersdorp supergroup in the Klerksdorp area, Western Transvaal

    International Nuclear Information System (INIS)

    Myers, J.M.

    1990-06-01

    The stratigraphy of the Ventersdorp Supergroup in the Klerksdorp area has been investigated with the aid of a series of boreholes drilled in the northern portion of the Buffelsdoorn Graben. This information was supplemented by a limited amount of surface mapping in the Klerksdorp Townlands and nearby Platberg. For the most part the study concentrates on descriptive lithostratigraphy. However, whole-rock geochemical studies on the volcanic formations proved to be a valuable aid for both stratigraphic correlation and process interpretation. The principal objective of the study was to construct a tectono-stratigraphic model for the evolution of the Ventersdorp Supergroup in this area. The boreholes studied revealed the presence of a complete succession of the Ventersdorp Supergroup, including the Klipriviersberg Group, Platberg Group and the Bothaville and Allanridge Formations. 67 refs., 128 figs., 20 tabs

  19. Risk Prediction Model for Severe Postoperative Complication in Bariatric Surgery.

    Science.gov (United States)

    Stenberg, Erik; Cao, Yang; Szabo, Eva; Näslund, Erik; Näslund, Ingmar; Ottosson, Johan

    2018-01-12

    Factors associated with risk for adverse outcome are important considerations in the preoperative assessment of patients for bariatric surgery. As yet, prediction models based on preoperative risk factors have not been able to predict adverse outcome sufficiently. This study aimed to identify preoperative risk factors and to construct a risk prediction model based on these. Patients who underwent a bariatric surgical procedure in Sweden between 2010 and 2014 were identified from the Scandinavian Obesity Surgery Registry (SOReg). Associations between preoperative potential risk factors and severe postoperative complications were analysed using a logistic regression model. A multivariate model for risk prediction was created and validated in the SOReg for patients who underwent bariatric surgery in Sweden, 2015. Revision surgery (standardized OR 1.19, 95% confidence interval (CI) 1.14-1.24) was among the factors retained in the risk prediction model. Despite high specificity, the sensitivity of the model was low. Revision surgery, high age, low BMI, large waist circumference, and dyspepsia/GERD were associated with an increased risk for severe postoperative complication. The prediction model based on these factors, however, had a sensitivity that was too low to predict risk in the individual patient case.

  20. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There is increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while assuring quality care, and achieving quality care under budget constraints requires an efficient prediction model. Various time-series-based prediction models have recently been proposed for managing hospital resources such as ambulance monitoring and emergency care. These models are not efficient because they do not consider the nature of the scenario, such as climate conditions. Artificial intelligence has been adopted to address this, but existing prediction models suffer from local-optima error during training, which induces overhead and reduces prediction accuracy. To overcome the local-minima error, this work presents a patient inflow prediction model based on a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows that the proposed model reduces RMSE and MAPE compared with an existing backpropagation-based artificial neural network. Overall, the proposed prediction model improves prediction accuracy, which aids in improving the quality of healthcare management.
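The resilient backpropagation (Rprop) idea named above, updating each weight by the sign of its gradient with a per-weight adaptive step size, can be sketched on a synthetic inflow series. The model here is a simple linear lag predictor trained with Rprop, not the paper's network, and all data are simulated.

```python
# Minimal Rprop sketch: weight updates use only the gradient's sign, with
# per-weight step sizes that grow while the sign is stable and shrink when
# it flips. Illustrative only; data and model are synthetic.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)
inflow = 50 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

# One-step-ahead prediction from the previous 7 days
lags = 7
X = np.column_stack([inflow[i:i + len(inflow) - lags] for i in range(lags)])
y = inflow[lags:]

w = np.zeros(lags)
step = np.full(lags, 0.1)            # per-weight step sizes
prev_grad = np.zeros(lags)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * 1.2, 1.0),
           np.where(sign_change < 0, np.maximum(step * 0.5, 1e-6), step))
    w -= np.sign(grad) * step        # move by step size, not magnitude
    prev_grad = grad

rmse = np.sqrt(np.mean((X @ w - y) ** 2))
print(f"RMSE = {rmse:.2f}")
```

Because updates depend only on gradient signs, Rprop is insensitive to the scale of the error surface, which is the property the abstract credits with avoiding the slow, local-minima-prone training of plain backpropagation.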

  1. Compensatory versus noncompensatory models for predicting consumer preferences

    Directory of Open Access Journals (Sweden)

    Anja Dieckmann

    2009-04-01

    Full Text Available Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking compared to rating data, but overall, it achieved lower predictive accuracy for hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but can boost the study of situational and individual differences in preferential choice processes.
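The compensatory/noncompensatory contrast can be made concrete with a toy choice between two options; the attributes, weights, and lexicographic order below are invented for illustration.

```python
# Toy contrast between a compensatory (weighted additive) rule and a
# noncompensatory lexicographic rule for choosing between two options.

def weighted_additive(option, weights):
    """Compensatory: weigh and add all attributes."""
    return sum(w * option[a] for a, w in weights.items())

def lexicographic(a, b, attribute_order):
    """Noncompensatory: decide on the first attribute that differs."""
    for attr in attribute_order:
        if a[attr] != b[attr]:
            return "a" if a[attr] > b[attr] else "b"
    return "tie"

cam_a = {"resolution": 3, "zoom": 1, "battery": 1}
cam_b = {"resolution": 2, "zoom": 3, "battery": 3}
weights = {"resolution": 0.3, "zoom": 0.4, "battery": 0.3}

# The additive rule lets strong zoom/battery compensate for resolution;
# the lexicographic rule ignores everything after the first difference.
additive_choice = ("a" if weighted_additive(cam_a, weights)
                   > weighted_additive(cam_b, weights) else "b")
lex_choice = lexicographic(cam_a, cam_b, ["resolution", "zoom", "battery"])
print(additive_choice, lex_choice)  # prints "b a": the two rules disagree
```

The disagreement is the point: the greedoid approach fits the attribute order of the lexicographic rule from observed choices, while conjoint analysis fits the weights of the additive rule.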

  2. Prediction models for successful external cephalic version: a systematic review.

    Science.gov (United States)

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. The impact on clinical practice was not evaluated for any of the models. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed for discrimination and calibration using internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed for discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.

  3. Predictive QSAR Models for the Toxicity of Disinfection Byproducts

    Directory of Open Access Journals (Sweden)

    Litang Qin

    2017-10-01

    Full Text Available Several hundred disinfection byproducts (DBPs in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and the predictive method may provide an effective way to address this. The development of an in-silico model of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure–activity relationship (QSAR models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH−, DNA+ and DNA−. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selective descriptors for the QSAR models were discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.
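Two of the validation statistics listed above, leave-one-out Q² and the concordance correlation coefficient (CCC), can be sketched for an ordinary multiple linear regression on synthetic data (not the DBP dataset).

```python
# Sketch of Q²_LOO and Lin's CCC for a multiple linear regression.
# Synthetic data; the thresholds (Q² > 0.6, CCC > 0.85) echo the abstract.
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.5, -0.8, 0.5]) + rng.normal(0, 0.3, n)

def ols_predict(X_tr, y_tr, X_te):
    """Fit OLS with intercept on the training rows, predict the test rows."""
    A = np.column_stack([np.ones(len(X_tr)), X_tr])
    beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te]) @ beta

# Q²_LOO = 1 - PRESS / total sum of squares
loo_pred = np.array([
    ols_predict(np.delete(X, i, 0), np.delete(y, i), X[i:i + 1])[0]
    for i in range(n)
])
q2_loo = 1 - np.sum((y - loo_pred) ** 2) / np.sum((y - y.mean()) ** 2)

def ccc(a, b):
    """Lin's concordance correlation coefficient."""
    cov = np.cov(a, b, bias=True)[0, 1]
    return 2 * cov / (a.var() + b.var() + (a.mean() - b.mean()) ** 2)

print(f"Q2_LOO = {q2_loo:.3f}, CCC = {ccc(y, loo_pred):.3f}")
```

Unlike plain R², Q²_LOO penalizes models that only fit the training points, and CCC additionally penalizes systematic bias between predictions and observations, which is why QSAR guidelines ask for both.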

  4. Predictive QSAR Models for the Toxicity of Disinfection Byproducts.

    Science.gov (United States)

    Qin, Litang; Zhang, Xin; Chen, Yuhan; Mo, Lingyun; Zeng, Honghu; Liang, Yanpeng

    2017-10-09

    Several hundred disinfection byproducts (DBPs) in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and the predictive method may provide an effective way to address this. The development of an in-silico model of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure-activity relationship (QSAR) models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH-, DNA+ and DNA-. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selective descriptors for the QSAR models were discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.

  5. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro-, and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality in the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic analyses in examining the robustness of the predictive power of these factors.
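The central claim, that adding macroeconomic covariates improves default prediction, can be illustrated with a hypothetical logistic-scoring sketch on simulated data; the variable names (a TCRI-like rating, GDP growth) only echo the abstract and are not the study's data.

```python
# Illustrative comparison of two logistic default-risk models, one with an
# added macroeconomic covariate, scored by ROC AUC. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
rating = rng.integers(1, 10, n)           # TCRI-like credit rating (1 = best)
gdp_growth = rng.normal(2, 2, n)          # macro covariate
logit = -4 + 0.35 * rating - 0.3 * gdp_growth
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X_micro = rating.reshape(-1, 1)
X_full = np.column_stack([rating, gdp_growth])
Xm_tr, Xm_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_micro, X_full, default, random_state=0)

auc_micro = roc_auc_score(
    y_te, LogisticRegression().fit(Xm_tr, y_tr).predict_proba(Xm_te)[:, 1])
auc_full = roc_auc_score(
    y_te, LogisticRegression().fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])
print(f"AUC micro-only = {auc_micro:.3f}, with macro = {auc_full:.3f}")
```

In this simulation the macro covariate genuinely drives defaults, so the richer model wins on held-out AUC, the same kind of evidence the paper reports via goodness-of-fit and ROC comparisons.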

  6. A predictive pilot model for STOL aircraft landing

    Science.gov (United States)

    Kleinman, D. L.; Killingsworth, W. R.

    1974-01-01

    An optimal control approach has been used to model pilot performance during STOL flare and landing. The model is used to predict pilot landing performance for three STOL configurations, each having a different level of automatic control augmentation. Model predictions are compared with flight simulator data. It is concluded that the model can be an effective design tool for analytically studying the effects of display modifications, different stability augmentation systems, and proposed changes in the landing area geometry.

  7. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    Science.gov (United States)

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.

  8. Comparison of pause predictions of two sequence-dependent transcription models

    International Nuclear Information System (INIS)

    Bai, Lu; Wang, Michelle D

    2010-01-01

    Two recent theoretical models, Bai et al (2004, 2007) and Tadigotla et al (2006), formulated thermodynamic explanations of sequence-dependent transcription pausing by RNA polymerase (RNAP). The two models differ in some basic assumptions and therefore make different yet overlapping predictions for pause locations, and different predictions on pause kinetics and mechanisms. Here we present a comprehensive comparison of the two models. We show that while they have comparable predictive power of pause locations at low NTP concentrations, the Bai et al model is more accurate than Tadigotla et al at higher NTP concentrations. The pausing kinetics predicted by Bai et al is also consistent with time-course transcription reactions, while Tadigotla et al is unsuited for this type of kinetic prediction. More importantly, the two models in general predict different pausing mechanisms even for the same pausing sites, and the Bai et al model provides an explanation more consistent with recent single molecule observations

  9. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    Science.gov (United States)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  10. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here is intended to provide qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with a) intercomparison of model predictions and b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses have been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various
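The quantitative side of such an intercomparison can be sketched with a few standard summary measures (mean bias, RMSE, Pearson correlation) applied to each model's predictions against the observations. The numbers below are invented, not from the VAMP CB scenario.

```python
# Minimal numeric sketch of comparing several models' predictions with
# observations, using the kinds of quantitative measures such guidelines
# typically include. All values are illustrative.
import numpy as np

observed = np.array([12.0, 9.5, 7.4, 5.8, 4.6, 3.7])   # e.g. a decaying series
models = {
    "model_A": np.array([13.1, 10.2, 7.9, 6.1, 4.9, 3.9]),
    "model_B": np.array([9.8, 8.1, 6.5, 5.3, 4.4, 3.6]),
}

for name, pred in models.items():
    bias = np.mean(pred - observed)                      # systematic offset
    rmse = np.sqrt(np.mean((pred - observed) ** 2))      # overall error size
    r = np.corrcoef(pred, observed)[0, 1]                # shape agreement
    print(f"{name}: bias={bias:+.2f} rmse={rmse:.2f} r={r:.3f}")
```

As the guidelines stress, no single number suffices: one model can be nearly unbiased yet poorly correlated while another tracks the shape well with a constant offset, so the measures must be read together and alongside an appraisal of model structure.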

  11. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
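The "assimilated prediction using a weighted average" mentioned above can be sketched as a variance-weighted combination of the two estimates, which is the standard way to minimize the variance of the combined error. The runup values and error variances below are invented for illustration.

```python
# Variance-weighted combination of two independent runup estimates.
# Weighting each estimate inversely to its error variance minimizes the
# variance of the combined error. Numbers are illustrative only.

runup_param = 2.4      # parameterized-model prediction (m)
runup_numeric = 2.9    # numerical-simulation prediction (m)
var_param, var_numeric = 0.16, 0.36   # assumed (known) error variances

w = var_numeric / (var_param + var_numeric)   # weight on parameterized model
runup_combined = w * runup_param + (1 - w) * runup_numeric
var_combined = (var_param * var_numeric) / (var_param + var_numeric)

# The combined error variance is below that of either input estimate,
# mirroring the reduction in prediction error variance the study reports.
print(f"combined = {runup_combined:.2f} m, error variance {var_combined:.3f}")
```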

  12. Predictive models for PEM-electrolyzer performance using adaptive neuro-fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Steffen [University of Tasmania, Hobart 7001, Tasmania (Australia); Karri, Vishy [Australian College of Kuwait (Kuwait)

    2010-09-15

    Predictive models were built using neural network based Adaptive Neuro-Fuzzy Inference Systems for hydrogen flow rate, electrolyzer system-efficiency and stack-efficiency respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with the hydrogen measuring equipment; these reliable predictive models can be implemented as virtual sensors. These models can also be used on-line for monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias to either over prediction or under prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications. (author)

  13. State-space prediction model for chaotic time series

    Science.gov (United States)

    Alparslan, A. K.; Sayar, M.; Atilgan, A. R.

    1998-08-01

    A simple method for predicting the continuation of scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique in connection with the time-delayed embedding is employed so as to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighboring in the reconstructed phase space is suggested. A moving root-mean-square error is utilized in order to monitor the error along the prediction horizon. The model is tested for the convection amplitude of the Lorenz model. The results indicate that for approximately 100 cycles of the training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
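A minimal version of this state-space forecasting scheme, delay embedding plus prediction from the successor of the nearest stored neighbor, can be sketched on a chaotic series. Here the logistic map stands in for the Lorenz convection amplitude, and the embedding parameters are fixed by hand rather than chosen by the false-nearest-neighbors test.

```python
# Delay-embedding nearest-neighbor forecasting on a chaotic scalar series.
# Illustrative stand-in for the paper's method: embedding dimension and
# delay are hand-picked, and the logistic map replaces the Lorenz system.
import numpy as np

# Generate a chaotic scalar series (fully chaotic logistic map)
x = np.empty(1200)
x[0] = 0.3
for i in range(1199):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

dim, delay = 2, 1
train, horizon = 1000, 1

# Reconstruct the state space from the training segment:
# states[j] = (x[j], x[j + delay])
states = np.column_stack([x[i:train - (dim - 1) * delay + i]
                          for i in range(0, dim * delay, delay)])

def forecast(query):
    """Predict the next value as the successor of the nearest stored state."""
    idx = np.argmin(np.linalg.norm(states[:-horizon] - query, axis=1))
    return x[idx + (dim - 1) * delay + horizon]

# One-step predictions over a held-out stretch
errs = []
for t in range(train, 1100):
    q = np.array([x[t - delay], x[t]])
    errs.append(abs(forecast(q) - x[t + 1]))
rmse = float(np.sqrt(np.mean(np.square(errs))))
print(f"one-step RMSE = {rmse:.4f}")
```

One-step errors stay small because nearby states on the attractor evolve similarly; iterating the forecast lets the error grow at the system's divergence rate, which is why the abstract's predictions track the truth for only a handful of cycles.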

  14. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based study, that is the Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic (AUC 0.76)]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.

  15. Cure modeling in real-time prediction: How much does it help?

    Science.gov (United States)

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

    Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intending to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
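The Weibull cure-mixture survival function at the heart of this approach can be written down directly: a fraction never experiences the event, and the remainder follows a Weibull distribution. The parameter values below are hypothetical, chosen only to show the behavior the abstract describes.

```python
# Weibull cure-mixture survival function: a fraction `pi_cure` is cured
# (never has the event); the rest follow a Weibull time-to-event law.
# Parameter values are invented for illustration.
import math

def cure_mixture_survival(t, pi_cure, shape, scale):
    """S(t) = pi + (1 - pi) * exp(-(t / scale) ** shape)."""
    return pi_cure + (1.0 - pi_cure) * math.exp(-((t / scale) ** shape))

pi_cure, shape, scale = 0.35, 1.4, 24.0   # time in months; hypothetical

# Unlike a plain Weibull (pi = 0), the mixture plateaus at the cure
# fraction, so late-trial event-count predictions are pulled downward.
for t in (12, 36, 120):
    s_mix = cure_mixture_survival(t, pi_cure, shape, scale)
    s_weib = cure_mixture_survival(t, 0.0, shape, scale)
    print(f"t={t:>3}: S_mixture={s_mix:.3f}  S_weibull={s_weib:.3f}")
```

The plateau is exactly why the two models diverge late in a trial: the plain Weibull keeps predicting events while the mixture's hazard effectively vanishes once most susceptible subjects have failed.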

  16. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
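Simplified, zero-hardening forms of the three criteria discussed (a Tresca/Barlow-type lower estimate, a von Mises upper estimate, and the Zhu-Leis average of the two) can be sketched for a thin-walled, defect-free pipe. This is a hedged sketch: the paper's full solutions also include strain-hardening terms, and the pipe dimensions below are hypothetical.

```python
# Thin-wall burst-pressure estimates without strain hardening (sketch only;
# the full Tresca/von Mises/Zhu-Leis solutions include hardening exponents).
import math

def burst_pressure(sigma_uts, t, D, criterion="zl"):
    """Thin-wall burst pressure estimate, MPa (inputs: MPa, mm, mm)."""
    tresca = 2.0 * sigma_uts * t / D                     # Barlow-type, lower
    mises = (4.0 / math.sqrt(3.0)) * sigma_uts * t / D   # von Mises, upper
    return {"tresca": tresca, "mises": mises,
            "zl": 0.5 * (tresca + mises)}[criterion]     # midway average

# Hypothetical X65-like line pipe: 530 MPa UTS, 10 mm wall, 610 mm OD
for c in ("tresca", "zl", "mises"):
    print(f"{c:>6}: {burst_pressure(530.0, 10.0, 610.0, c):.1f} MPa")
```

The ordering Tresca < Zhu-Leis < von Mises reflects the paper's grouping: Tresca-family solutions bound burst pressure from below, von Mises-family from above, with the Zhu-Leis result between them.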

  17. Evaluation of burst pressure prediction models for line pipes

    International Nuclear Information System (INIS)

    Zhu, Xian-Kui; Leis, Brian N.

    2012-01-01

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487–492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.

  18. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  19. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  20. Stratigraphy of Neogene deposits in the Khania province, Crete, with special reference to foraminifera of the family Planorbulinidae and the genus Heterostegina

    NARCIS (Netherlands)

    Freudenthal, T.

    1969-01-01

    In this paper the stratigraphy of the Neogene deposits in the Khania Province, Crete, Greece, is described. Special attention is paid to the evolution and taxonomy of foraminiferal genera assigned previously to the family Planorbulinidae. This partial revision of the Planorbulinidae is based not

  1. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of exhaust flue gas in boilers is one of the most effective ways to further improve thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower; however, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of boilers. Accurate prediction of the acid dew point is therefore essential. By investigating previous models of acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is first derived. A semi-empirical prediction model is then proposed, which is validated against both field-test and experimental data and compared with the previous models.
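As a reference point for the kind of semi-empirical correlation the abstract describes, the classic Verhoff-Banchero formula for the sulfuric acid dew point can be coded directly. Note this is the older literature correlation, not the paper's improved model, and the flue-gas composition in the example is invented.

```python
# Verhoff-Banchero sulfuric acid dew point correlation (literature formula,
# not this paper's model). Partial pressures in mmHg, temperature in kelvin.
import math

def acid_dew_point_K(p_h2o_mmhg, p_so3_mmhg):
    """1000/T = 2.276 - 0.0294 ln(pH2O) - 0.0858 ln(pSO3)
                + 0.0062 ln(pH2O) ln(pSO3), T in K, p in mmHg."""
    lw, ls = math.log(p_h2o_mmhg), math.log(p_so3_mmhg)
    inv_T = 2.276e-3 - 2.94e-5 * lw - 8.58e-5 * ls + 6.2e-6 * lw * ls
    return 1.0 / inv_T

# Example flue gas: 10 vol% H2O (76 mmHg) and 10 ppm SO3 (0.0076 mmHg)
t_dp = acid_dew_point_K(76.0, 0.0076)
print(f"acid dew point ~ {t_dp - 273.15:.0f} C")
```

The result lands in the 120-150 °C band typical of coal-fired flue gas, which is why exhaust temperatures on the heating surfaces must be kept above the predicted dew point.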

  2. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    Science.gov (United States)

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials, and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in the risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age

  3. Paleontology and stratigraphy of the Upper Triassic Kamishak Formation in the Puale Bay-Cape Kekurnoi-Alinchak Bay area, Karluk C-4 and C-5 quadrangle

    Data.gov (United States)

    Department of the Interior — This report summarizes the paleontological character and stratigraphy of the Kamishak Formation in the Puale Bay–Cape Kekurnoi–Alinchak Bay area, Karluk C-4 and C-5...

  4. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression response surface equation (RSE) model. Data obtained from operations of a major airline for a passenger transport aircraft type at Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% relative to the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network relying on nonlinear regression is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network models' errors represents an over 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.

  5. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation of why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed over the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
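    The claim that a signal of p sinusoids is perfectly predicted by an order-2p all-pole model can be checked directly. The sketch below (an illustration, not code from the paper; the two-tone signal and all parameters are invented) fits a covariance-method linear predictor to a noise-free two-sinusoid signal and verifies that the order-4 residual is negligible:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(b)
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

def lp_covariance(x, p):
    # covariance-method LP: minimise sum_n (x[n] - sum_j a_j x[n-j])^2
    A = [[sum(x[n - i] * x[n - j] for n in range(p, len(x)))
          for j in range(1, p + 1)] for i in range(1, p + 1)]
    b = [sum(x[n] * x[n - i] for n in range(p, len(x))) for i in range(1, p + 1)]
    return solve(A, b)

# two sinusoids -> an order-4 (= 2 x 2) predictor should be essentially exact
sig = [math.sin(0.3 * n) + 0.5 * math.sin(1.1 * n) for n in range(512)]
a = lp_covariance(sig, 4)
res_energy = sum((sig[n] - sum(a[j] * sig[n - 1 - j] for j in range(4))) ** 2
                 for n in range(4, len(sig)))
sig_energy = sum(v * v for v in sig[4:])
print(res_energy / sig_energy)
```

Adding noise to `sig` leaves a non-negligible residual for the same low order, which is exactly the failure mode the abstract discusses.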

  6. Auditing predictive models : a case study in crop growth

    NARCIS (Netherlands)

    Metselaar, K.

    1999-01-01

    Methods were developed to assess and quantify the predictive quality of simulation models, with the intent to contribute to evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict yield of forage maize

  7. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  8. Numerical weather prediction (NWP) and hybrid ARMA/ANN model to predict global radiation

    International Nuclear Information System (INIS)

    Voyant, Cyril; Muselli, Marc; Paoli, Christophe; Nivet, Marie-Laure

    2012-01-01

    We propose in this paper an original technique to predict global radiation using a hybrid ARMA/ANN model and data issued from a numerical weather prediction (NWP) model. We particularly look at the multi-layer perceptron (MLP). After optimizing our architecture with NWP and endogenous data previously made stationary, and using an innovative pre-input layer selection method, we combined it with an ARMA model through a rule based on the analysis of hourly data series. This model has been used to forecast the hourly global radiation for five locations in the Mediterranean area. Our technique outperforms classical models at all locations. The nRMSE for our hybrid MLP/ARMA model is 14.9% compared to 26.2% for the naïve persistence predictor. Note that in the standalone ANN case the nRMSE is 18.4%. Finally, in order to discuss the reliability of the forecaster outputs, a complementary study concerning the confidence interval of each prediction is proposed. -- Highlights: ► Time series forecasting with a hybrid method based on the use of the ALADIN numerical weather model, ANN and ARMA. ► Innovative pre-input layer selection method. ► Combination of an optimized MLP and an ARMA model obtained from a rule based on the analysis of hourly data series. ► Stationarity process (method and control) for the global radiation time series.
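    The nRMSE scores quoted above can be reproduced in principle from any forecast series. A minimal sketch (invented numbers; nRMSE normalised here by the mean of the observations, one common convention) comparing the naïve persistence predictor against a trivial climatology baseline:

```python
import math

def nrmse(pred, obs):
    # RMSE normalised by the mean of the observations (conventions vary)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))
    return rmse / (sum(obs) / len(obs))

# toy hourly global radiation series (invented values)
obs = [3.1, 3.4, 2.9, 3.8, 4.0, 3.6, 3.2]

# naive persistence: each hour is predicted by the previous hour's value
persist_err = nrmse(obs[:-1], obs[1:])

# trivial climatology: always predict the series mean
clim = sum(obs) / len(obs)
clim_err = nrmse([clim] * (len(obs) - 1), obs[1:])
print(round(persist_err, 3), round(clim_err, 3))
```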

  9. An intermittency model for predicting roughness induced transition

    Science.gov (United States)

    Ge, Xuan; Durbin, Paul

    2014-11-01

    An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.

  10. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical to the robustness of structural model updating, especially in the presence of modeling errors. To date, three ways of treating the prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
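    Strategy 3 above, updating the prediction-error variance as an uncertain parameter, can be illustrated on a toy regression problem. The sketch below uses plain random-walk Metropolis rather than the Transitional MCMC of the paper, and the data, priors, and step sizes are all invented:

```python
import math, random

random.seed(1)
# synthetic data: y = 2x + Gaussian noise of std 0.3
xs = [i / 10 for i in range(50)]
ys = [2.0 * x + random.gauss(0, 0.3) for x in xs]

def log_post(theta, log_sig):
    # Gaussian likelihood with the prediction-error std treated as uncertain;
    # flat priors on theta and log sigma (an illustrative choice)
    sig = math.exp(log_sig)
    return sum(-0.5 * ((y - theta * x) / sig) ** 2 - math.log(sig)
               for x, y in zip(xs, ys))

theta, log_sig = 0.0, 0.0
cur = log_post(theta, log_sig)
samples = []
for it in range(20000):
    t2 = theta + random.gauss(0, 0.05)          # random-walk proposal
    s2 = log_sig + random.gauss(0, 0.05)
    cand = log_post(t2, s2)
    if math.log(random.random()) < cand - cur:  # Metropolis accept/reject
        theta, log_sig, cur = t2, s2, cand
    if it > 5000:                               # discard burn-in
        samples.append((theta, math.exp(log_sig)))

m_theta = sum(s[0] for s in samples) / len(samples)
m_sig = sum(s[1] for s in samples) / len(samples)
print(m_theta, m_sig)
```

Both the stiffness-like parameter and the error std are recovered jointly, which is the point of treating the variance as uncertain rather than fixing it empirically.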

  11. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

    Full Text Available Effective fault diagnosis and reasonable life expectancy are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and its working environment. At present, equipment life prediction methods include prediction based on condition monitoring, combined forecasting models, and data-driven models, most of which require a large amount of data. To address this issue, we propose learning from the mechanism of cell division in organisms. By studying complex multifactor correlation life models, we established a life prediction model of moderate complexity. In this paper, we model life prediction on cell division. Experiments show that our model can effectively simulate the state of cell division, and we apply it to the life prediction of complex equipment.

  12. Prediction models and control algorithms for predictive applications of setback temperature in cooling systems

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Yoon, Younju; Jeon, Young-Hoon; Kim, Sooyoung

    2017-01-01

    Highlights: • An initial ANN model was developed for predicting the time to reach the setback temperature. • The initial model was optimized to produce accurate output. • The optimized model proved its prediction accuracy. • ANN-based algorithms were developed and their performance tested. • The ANN-based algorithms provided superior thermal comfort or energy efficiency. - Abstract: In this study, a temperature control algorithm was developed to apply a setback temperature predictively for the cooling system of a residential building during periods occupied by residents. An artificial neural network (ANN) model was developed to determine the time required to raise the current indoor temperature to the setback temperature. This study involved three phases: development of the initial ANN-based prediction model, optimization and testing of the initial model, and development and testing of three control algorithms. The development and performance testing of the model and algorithms were conducted using TRNSYS and MATLAB. Through the development and optimization process, the final ANN model employed the indoor temperature and the difference between the current and target setback temperatures as its two input neurons. The optimal number of hidden layers, number of neurons, learning rate, and momentum were determined to be 4, 9, 0.6, and 0.9, respectively. The tangent-sigmoid and pure-linear transfer functions were used in the hidden and output neurons, respectively. The ANN model used 100 training data sets with a sliding-window method for data management. The Levenberg-Marquardt training method was employed for model training. The optimized model had a root mean square error of 0.9097 when compared with the simulated results. Employing the ANN model, the ANN-based algorithms maintained indoor temperatures better within the target ranges. Compared to the conventional algorithm, the ANN-based algorithms reduced the duration of time in which the indoor temperature

  13. Error analysis in predictive modelling demonstrated on mould data.

    Science.gov (United States)

    Baranyi, József; Csernus, Olívia; Beczner, Judit

    2014-01-17

    The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rates. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability inherent in the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of the replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth, too. © 2013.
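    The within/between-group split described above is easy to state concretely. A sketch with invented replicate ln(growth-rate) values at three (temperature, water-activity) conditions:

```python
from statistics import mean, pvariance

# replicate ln(growth-rate) estimates at three environments (invented numbers)
groups = {
    (25, 0.95): [0.42, 0.47, 0.44],
    (30, 0.95): [0.61, 0.58, 0.63],
    (30, 0.90): [0.33, 0.30, 0.36],
}

# experimental variability: average "within group" variance of the replicates
within = mean(pvariance(v) for v in groups.values())

# environmental variability: "between groups" variance of the group means
group_means = [mean(v) for v in groups.values()]
between = pvariance(group_means)
print(within, between)
```

Here the environmental term dominates, so most of the prediction error would be attributed to the environment rather than to the growth-rate fitting method.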

  14. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Every year, power outages affect millions of people in the United States, harming the economy and disrupting everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities restore service quickly and limit adverse consequences for the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical storm simulations, high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. This study presents a new methodology for improving outage model performance by combining weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS). First, we evaluate each model variable by comparing historical weather analyses with station data or reanalysis over the entire storm data set. Each variable of the new outage model version is then extracted from the weather model that performs best for that variable, and sensitivity tests identify the most efficient variable combination for outage prediction. Although the final variables are drawn from different weather models, this ensemble prediction based on multi-weather forcing and multiple statistical models outperforms the currently operational OPM version, which is based on a single weather forcing (WRF 3.7), because each model component is closest to the actual atmospheric state.

  15. Acute Myocardial Infarction Readmission Risk Prediction Models: A Systematic Review of Model Performance.

    Science.gov (United States)

    Smith, Lauren N; Makam, Anil N; Darden, Douglas; Mayo, Helen; Das, Sandeep R; Halm, Ethan A; Nguyen, Oanh Kieu

    2018-01-01

    Hospitals are subject to federal financial penalties for excessive 30-day hospital readmissions for acute myocardial infarction (AMI). Prospectively identifying patients hospitalized with AMI at high risk for readmission could help prevent 30-day readmissions by enabling targeted interventions. However, the performance of AMI-specific readmission risk prediction models is unknown. We systematically searched the published literature through March 2017 for studies of risk prediction models for 30-day hospital readmission among adults with AMI. We identified 11 studies of 18 unique risk prediction models across diverse settings primarily in the United States, of which 16 models were specific to AMI. The median overall observed all-cause 30-day readmission rate across studies was 16.3% (range, 10.6%-21.0%). Six models were based on administrative data; 4 on electronic health record data; 3 on clinical hospital data; and 5 on cardiac registry data. Models included 7 to 37 predictors, of which demographics, comorbidities, and utilization metrics were the most frequently included domains. Most models, including the Centers for Medicare and Medicaid Services AMI administrative model, had modest discrimination (median C statistic, 0.65; range, 0.53-0.79). Of the 16 reported AMI-specific models, only 8 models were assessed in a validation cohort, limiting generalizability. Observed risk-stratified readmission rates ranged from 3.0% among the lowest-risk individuals to 43.0% among the highest-risk individuals, suggesting good risk stratification across all models. Current AMI-specific readmission risk prediction models have modest predictive ability and uncertain generalizability given methodological limitations. 
No existing models provide actionable information in real time to enable early identification and risk-stratification of patients with AMI before hospital discharge, a functionality needed to optimize the potential effectiveness of readmission reduction interventions
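    The C statistic quoted for these models is the probability that a randomly chosen readmitted patient receives a higher predicted risk than a randomly chosen non-readmitted patient, with ties counting half. A self-contained sketch with invented scores:

```python
def c_statistic(scores, outcomes):
    # concordance over all (event, non-event) pairs; ties count 0.5
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    conc = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return conc / (len(pos) * len(neg))

# invented risk scores and 30-day readmission outcomes (1 = readmitted)
scores   = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
outcomes = [1,   0,   1,   0,   1,    0,   0,   0]
c = c_statistic(scores, outcomes)
print(c)
```

A value of 0.5 is chance-level discrimination, which puts the review's median of 0.65 in context.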

  16. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in the research of prediction models, it was observed that different models have different capabilities and that no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset...

  17. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  18. Domestic appliances energy optimization with model predictive control

    International Nuclear Information System (INIS)

    Rodrigues, E.M.G.; Godina, R.; Pouresmaeil, E.; Ferreira, J.R.; Catalão, J.P.S.

    2017-01-01

    Highlights: • An alternative power management control for home appliances that require thermal regulation is presented. • A Model Predictive Control scheme is assessed and its performance studied and compared to the thermostat. • The problem formulation is explored through tuning weights with the aim of reducing energy consumption and cost. • A modulation scheme of a two-level Model Predictive Control signal as an interface block is presented. • The implementation costs in home appliances with thermal regulation requirements are reduced. - Abstract: A vital element in making a sustainable world is correctly managing the energy in the domestic sector, so this sector evidently stands as a key one to be addressed in terms of climate change goals. Increasingly, people are aware of electricity savings, turning off equipment that is not being used or connecting electrical loads outside the on-peak hours. However, these few efforts are not enough to reduce global energy consumption, which is still increasing. Much of the past reduction was due to technological improvements, but over the years new types of control have arisen. Domestic appliances for heating and cooling rely on thermostatic regulation. The study in this paper focuses on an alternative power management control for home appliances that require thermal regulation. A Model Predictive Control scheme is assessed and its performance studied and compared to the thermostat, with the aim of minimizing cooling energy consumption through minimization of the energy cost while satisfying the temperature range adequate for human comfort. In addition, the Model Predictive Control problem formulation is explored through tuning weights with the aim of reducing energy consumption and cost. For this purpose, the typical consumption over a 24 h period of a summer day was simulated and a three-level tariff scheme was used. The new
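    The thermostat-versus-MPC comparison can be sketched with a toy first-order thermal model and a brute-force search over on/off sequences. Every parameter below (leakage rate, cooling power, tariff, comfort band) is invented, and no claim is made about the paper's actual formulation:

```python
from itertools import product

A, B, T_OUT = 0.1, 1.2, 32.0    # leakage coefficient, cooling power, outdoor temp
LOW, HIGH = 22.0, 26.0          # comfort band
PRICE = [1, 1, 1, 3, 3, 3] * 4  # two-level tariff, one entry per step

def step(t, u):
    # first-order dynamics: drift toward outdoors, minus cooling if on
    return t + A * (T_OUT - t) - B * u

def mpc_action(t, k, horizon=4):
    # enumerate all on/off sequences; keep the cheapest that stays in the band
    best_cost, best_u = float("inf"), 1   # fall back to cooling if none feasible
    for seq in product([0, 1], repeat=horizon):
        tt, cost, ok = t, 0.0, True
        for i, u in enumerate(seq):
            tt = step(tt, u)
            cost += PRICE[(k + i) % len(PRICE)] * u
            if not (LOW <= tt <= HIGH):
                ok = False
                break
        if ok and cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

def thermostat_action(t, k):
    return 1 if t > 25.0 else 0           # simple setpoint control, price-blind

def run(controller, t0=24.0, steps=24):
    t, cost = t0, 0.0
    for k in range(steps):
        u = controller(t, k)
        cost += PRICE[k % len(PRICE)] * u
        t = step(t, u)
    return cost

mpc_cost, thermo_cost = run(mpc_action), run(thermostat_action)
print(mpc_cost, thermo_cost)
```

With this invented tariff the MPC shifts cooling into the cheap periods (precooling) while the thermostat pays whatever the current price happens to be, so the same thermal duty ends up costing less.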

  19. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
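    At its core, the state-based approach estimates how breathing states follow one another. The sketch below is a drastically simplified, fully observed Markov chain (the paper uses a hidden Markov model with k-means clustered observables), and the state sequence is invented:

```python
from collections import Counter, defaultdict

# invented breathing-state sequence: EX = exhale, EOE = end of exhale, IN = inhale
seq = ["EX", "EOE", "IN", "EX", "EOE", "IN", "EX", "EOE", "EOE", "IN"] * 20

# estimate first-order transition counts from the sequence
trans = defaultdict(Counter)
for a, b in zip(seq, seq[1:]):
    trans[a][b] += 1

def predict(state):
    # most probable next state under the fitted chain
    return max(trans[state], key=trans[state].get)

# one-step-ahead accuracy over the same sequence
acc = sum(predict(a) == b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
print(predict("EX"), round(acc, 3))
```

The full HMM additionally attaches velocity observables to each state, so a state prediction implies a tumor-velocity prediction; this sketch only shows the state-sequence half of that idea.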

  20. K/T boundary stratigraphy: Evidence for multiple impacts and a possible comet stream

    Science.gov (United States)

    Shoemaker, E. M.; Izett, G. A.

    1992-01-01

    A critical set of observations bearing on the K/T boundary events were obtained from several dozen sites in western North America. Thin strata at and adjacent to the K/T boundary are locally preserved in association with coal beds at these sites. The strata were laid down in local shallow basins that were either intermittently flooded or occupied by very shallow ponds. Detailed examination of the stratigraphy at numerous sites led to the recognition of two distinct strata at the boundary. From the time that the two strata were first recognized, E.M. Shoemaker has maintained that they record two impact events. We report some of the evidence that supports this conclusion.

  1. SHMF: Interest Prediction Model with Social Hub Matrix Factorization

    Directory of Open Access Journals (Sweden)

    Chaoyuan Cui

    2017-01-01

    Full Text Available With the development of social networks, microblogs have become a major social communication tool. Microblogs contain a great deal of valuable information, such as personal preferences, public opinion, and marketing signals, so research on user interest prediction in microblogs has practical significance. However, extracting information associated with a user's interest orientation from constantly updated blog posts is not easy. Existing prediction approaches based on probabilistic factor analysis use the blog posts published by a user to predict that user's interest, but these methods are not very effective for users who post little yet browse a lot. In this paper, we propose a new prediction model, called SHMF, based on social hub matrix factorization. SHMF constructs the interest prediction model by combining the blog posts published by both the user and the direct neighbors in the user's social hub. Our proposed model predicts user interest by integrating the user's historical behavior and temporal factors as well as the user's friendships, thus achieving accurate forecasts of the user's future interests. Experimental results on Sina Weibo show the efficiency and effectiveness of our proposed model.
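    The matrix factorization that SHMF builds on can be sketched in a few lines: factor a sparse user-topic matrix into low-rank user and topic factors by stochastic gradient descent. Everything below (the toy matrix, rank, learning rate, regularization) is illustrative and is not the SHMF algorithm itself:

```python
import random

random.seed(0)
# toy user-topic interest matrix (rows: users, cols: topics); 0 = unobserved
R = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
]
K, STEPS, LR, REG = 2, 30000, 0.01, 0.02
P = [[random.gauss(0, 0.1) for _ in range(K)] for _ in R]        # user factors
Q = [[random.gauss(0, 0.1) for _ in range(K)] for _ in R[0]]     # topic factors

obs = [(u, i, r) for u, row in enumerate(R) for i, r in enumerate(row) if r > 0]
for _ in range(STEPS):
    u, i, r = random.choice(obs)
    pred = sum(P[u][k] * Q[i][k] for k in range(K))
    e = r - pred
    for k in range(K):
        pu, qi = P[u][k], Q[i][k]
        P[u][k] += LR * (e * qi - REG * pu)   # SGD step with L2 regularization
        Q[i][k] += LR * (e * pu - REG * qi)

rmse = (sum((r - sum(P[u][k] * Q[i][k] for k in range(K))) ** 2
            for u, i, r in obs) / len(obs)) ** 0.5
print(round(rmse, 3))
```

The products of P and Q at the unobserved entries then serve as interest predictions; SHMF's contribution is to fold the social-hub neighbors' posts into this factorization.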

  2. Development of Interpretable Predictive Models for BPH and Prostate Cancer.

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, J A

    2015-01-01

    Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. Statistical dependence with PC and BPH was found for prostate volume (P-value BPH prediction. PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without either of these pathologies. Our decision tree and logistic regression models outperform, in terms of AUC, the models in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced.
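    The leave-one-out protocol used to validate these models is easy to reproduce. The sketch below swaps in a nearest-centroid classifier as a stand-in (the paper used a decision tree and logistic regression), and all patient values are invented:

```python
from statistics import mean

# toy (PSA, prostate volume) records labelled PC / BPH / none (invented values)
data = [
    ((8.2, 30.0), "PC"), ((7.5, 28.0), "PC"), ((9.1, 33.0), "PC"),
    ((6.8, 62.0), "BPH"), ((5.9, 70.0), "BPH"), ((7.2, 66.0), "BPH"),
    ((3.5, 35.0), "none"), ((4.1, 38.0), "none"), ((3.9, 33.0), "none"),
]

def nearest_centroid(train, x):
    # classify by squared Euclidean distance to each class centroid
    cents = {}
    for lab in {l for _, l in train}:
        pts = [p for p, l in train if l == lab]
        cents[lab] = tuple(mean(c) for c in zip(*pts))
    return min(cents, key=lambda l: sum((a - b) ** 2 for a, b in zip(x, cents[l])))

# leave-one-out: train on all but one record, test on the held-out record
hits = sum(nearest_centroid(data[:i] + data[i + 1:], x) == y
           for i, (x, y) in enumerate(data))
print(hits, "/", len(data))
```

With n = 150 patients, as in the study, this costs 150 model fits, which is why leave-one-out is practical only for small cohorts or cheap models.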

  3. Modeling a multivariable reactor and on-line model predictive control.

    Science.gov (United States)

    Yu, D W; Yu, D L

    2005-10-01

    A nonlinear first-principles model is developed for a laboratory-scale multivariable chemical reactor rig in this paper and on-line model predictive control (MPC) is applied to the rig. The reactor has three variables (temperature, pH, and dissolved oxygen) with nonlinear dynamics and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables and their model parameters are estimated from real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, with an extended Kalman filter used to estimate unknown system states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers and the advantage of the nonlinear MPC over the PID is clearly shown.

  4. Rim Structure, Stratigraphy, and Aqueous Alteration Exposures Along Opportunity Rover's Traverse of the Noachian Endeavour Crater

    Science.gov (United States)

    Crumpler, L.S.; Arvidson, R. E.; Golombek, M.; Grant, J. A.; Jolliff, B. L.; Mittlefehldt, D. W.

    2017-01-01

    The Mars Exploration Rover Opportunity has traversed 10.2 kilometers along segments of the west rim of the 22-kilometer-diameter Noachian Endeavour impact crater as of sol 4608 (01/09/17). The stratigraphy, attitude of units, lithology, and degradation state of bedrock outcrops exposed on the crater rim have been examined in situ and placed in geologic context. Structures within the rim and differences in physical properties of the identified lithologies have played important roles in localizing outcrops bearing evidence of aqueous alteration.

  5. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models consistently over-predict stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.

  6. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    Science.gov (United States)

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industry) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by e

  7. Real estate value prediction using multivariate regression models

    Science.gov (United States)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is a prime field in which to apply machine learning concepts to optimize and predict prices with high accuracy. In this paper, we therefore present various important features to use while predicting housing prices with good accuracy. We describe regression models that use various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers to the features) is used to achieve a better model fit. Because these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper thus directs the reader to the best application of regression models, in addition to other techniques, to optimize the result.
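A one-feature sketch of the ridge idea mentioned above, showing how the penalty shrinks the slope relative to ordinary least squares (the size/price pairs are synthetic stand-ins, not real data):

```python
# Closed-form ridge regression for one centered feature: the penalty
# lam is added to the denominator, shrinking the slope toward zero.

def fit_slope(xs, ys, lam=0.0):
    """Slope of y ~ x after centering; lam is the ridge penalty."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / (sxx + lam)

sizes  = [50.0, 70.0, 90.0, 110.0, 130.0]     # hypothetical floor areas
prices = [150.0, 200.0, 260.0, 310.0, 350.0]  # hypothetical prices
ols   = fit_slope(sizes, prices)              # lam = 0: ordinary least squares
ridge = fit_slope(sizes, prices, lam=500)     # penalized, smaller slope
```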

  8. A COMPARISON BETWEEN THREE PREDICTIVE MODELS OF COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    DUMITRU CIOBANU

    2013-12-01

    Full Text Available Time series prediction is an open problem and many researchers are trying to find new predictive methods and improvements for the existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Also, support vector machines have solved some of the problems faced by neural networks and have begun to be widely used for time series prediction. The main drawback of those two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. In this paper we present a comparison of three predictive models from the computational intelligence field: one based on neural networks, one based on support vector machines, and one based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.

  9. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such a technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the sequence identity score of the target protein to the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, model accuracy was measured by the root mean square deviation of Cα atoms of the target-template structures. Surprisingly, the results show that sequence identity of the target protein to the template is not a good descriptor for predicting the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower sequence identity between target and template proteins led to more accurate 3-D structure models. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646
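Model accuracy above is measured by the root mean square deviation of Cα atoms; once the two structures are superposed (assumed already done here), the computation reduces to the following, with invented toy coordinates:

```python
import math

# RMSD between corresponding C-alpha coordinates of a target model
# and a template structure, assuming prior superposition.

def rmsd(coords_a, coords_b):
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

model    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
template = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (2.0, 0.0, 1.0)]
r = rmsd(model, template)   # every atom is off by 1 unit, so RMSD = 1.0
```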

  10. Complex versus simple models: ion-channel cardiac toxicity prediction.

    Science.gov (United States)

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the last. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  11. Complex versus simple models: ion-channel cardiac toxicity prediction

    Directory of Open Access Journals (Sweden)

    Hitesh B. Mistry

    2018-02-01

    Full Text Available There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the last. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  12. Tuning SISO offset-free Model Predictive Control based on ARX models

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2012-01-01

    In this paper, we present a tuning methodology for a simple offset-free SISO Model Predictive Controller (MPC) based on autoregressive models with exogenous inputs (ARX models). ARX models simplify system identification as they can be identified from data using convex optimization. Furthermore, the proposed controller is simple to tune as it has only one free tuning parameter. These two features are advantageous in predictive process control as they simplify industrial commissioning of MPC. Disturbance rejection and offset-free control are important in industrial process control. To achieve offset-free control in the face of unknown disturbances or model-plant mismatch, integrators must be introduced in either the estimator or the regulator. Traditionally, offset-free control is achieved using Brownian disturbance models in the estimator. In this paper we achieve offset-free control by extending the noise...
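A first-order ARX model of the kind described above can be identified from data by convex (least-squares) optimization; a minimal sketch, with arbitrary illustration coefficients rather than any identified plant:

```python
# Least-squares identification of a first-order ARX model
#   y[k] = a*y[k-1] + b*u[k-1] + e[k]
# via the closed-form solution of the 2x2 normal equations.

def identify_arx(y, u):
    s_yy = s_uu = s_yu = r_y = r_u = 0.0
    for k in range(1, len(y)):
        s_yy += y[k-1] * y[k-1]
        s_uu += u[k-1] * u[k-1]
        s_yu += y[k-1] * u[k-1]
        r_y  += y[k] * y[k-1]
        r_u  += y[k] * u[k-1]
    # solve [[s_yy, s_yu], [s_yu, s_uu]] @ [a, b] = [r_y, r_u]
    det = s_yy * s_uu - s_yu * s_yu
    a = (r_y * s_uu - r_u * s_yu) / det
    b = (r_u * s_yy - r_y * s_yu) / det
    return a, b

# simulate noise-free data from a = 0.8, b = 0.5
u = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0]
y = [0.0]
for k in range(1, len(u)):
    y.append(0.8 * y[k-1] + 0.5 * u[k-1])

a_hat, b_hat = identify_arx(y, u)   # recovers the true coefficients
```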

  13. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = - 0.0933 + 0
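The validation above uses Lin's concordance measure; a small stand-alone implementation follows, with hypothetical ejection-fraction values rather than the study's data:

```python
# Lin's concordance correlation coefficient:
#   CCC = 2*cov(x, y) / (var(x) + var(y) + (mean_x - mean_y)^2)
# It penalizes both poor correlation and systematic bias.

def lins_ccc(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

pre  = [0.40, 0.55, 0.60, 0.70, 0.75]   # hypothetical pre-operative EF
post = [0.38, 0.50, 0.58, 0.66, 0.72]   # hypothetical post-operative EF
ccc = lins_ccc(pre, post)               # close to 1 for good agreement
```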

  14. Chemical structure-based predictive model for methanogenic anaerobic biodegradation potential.

    Science.gov (United States)

    Meylan, William; Boethling, Robert; Aronson, Dallas; Howard, Philip; Tunkel, Jay

    2007-09-01

    Many screening-level models exist for predicting aerobic biodegradation potential from chemical structure, but anaerobic biodegradation generally has been ignored by modelers. We used a fragment contribution approach to develop a model for predicting biodegradation potential under methanogenic anaerobic conditions. The new model has 37 fragments (substructures) and classifies a substance as either fast or slow, relative to the potential to be biodegraded in the "serum bottle" anaerobic biodegradation screening test (Organization for Economic Cooperation and Development Guideline 311). The model correctly classified 90, 77, and 91% of the chemicals in the training set (n = 169) and two independent validation sets (n = 35 and 23), respectively. Accuracy of predictions of fast and slow degradation was equal for training-set chemicals, but fast-degradation predictions were less accurate than slow-degradation predictions for the validation sets. Analysis of the signs of the fragment coefficients for this and the other (aerobic) Biowin models suggests that in the context of simple group contribution models, the majority of positive and negative structural influences on ultimate degradation are the same for aerobic and methanogenic anaerobic biodegradation.
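A fragment (group) contribution classifier of the general shape described above can be sketched as follows; the fragments, coefficients, intercept, and threshold are all hypothetical, not the fitted 37-fragment model:

```python
# Group-contribution style classifier: sum fragment coefficients and
# classify a substance as degrading 'fast' or 'slow' anaerobically.

COEFFS = {
    "aromatic_ring": -0.35,   # negative: hinders degradation (assumed)
    "ester":          0.40,   # positive: promotes degradation (assumed)
    "long_alkyl":     0.15,
    "halogen":       -0.50,
}
INTERCEPT = 0.2

def classify(fragment_counts, threshold=0.5):
    score = INTERCEPT + sum(COEFFS[f] * n for f, n in fragment_counts.items())
    return "fast" if score >= threshold else "slow"

label_a = classify({"ester": 1, "long_alkyl": 1})       # score 0.75
label_b = classify({"aromatic_ring": 1, "halogen": 2})  # score -1.15
```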

  15. Short-term wind power prediction based on LSSVM–GSA model

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Chen, Chen; Yuan, Yanbin; Huang, Yuehua; Tan, Qingxiong

    2015-01-01

    Highlights: • A hybrid model is developed for short-term wind power prediction. • The model is based on LSSVM and gravitational search algorithm. • Gravitational search algorithm is used to optimize parameters of LSSVM. • Effect of different kernel function of LSSVM on wind power prediction is discussed. • Comparative studies show that prediction accuracy of wind power is improved. - Abstract: Wind power forecasting can improve the economical and technical integration of wind energy into the existing electricity grid. Due to its intermittency and randomness, it is hard to forecast wind power accurately. For the purpose of utilizing wind power to the utmost extent, it is very important to make an accurate prediction of the output power of a wind farm under the premise of guaranteeing the security and the stability of the operation of the power system. In this paper, a hybrid model (LSSVM–GSA) based on the least squares support vector machine (LSSVM) and gravitational search algorithm (GSA) is proposed to forecast the short-term wind power. As the kernel function and the related parameters of the LSSVM have a great influence on the performance of the prediction model, the paper establishes LSSVM model based on different kernel functions for short-term wind power prediction. And then an optimal kernel function is determined and the parameters of the LSSVM model are optimized by using GSA. Compared with the Back Propagation (BP) neural network and support vector machine (SVM) model, the simulation results show that the hybrid LSSVM–GSA model based on exponential radial basis kernel function and GSA has higher accuracy for short-term wind power prediction. Therefore, the proposed LSSVM–GSA is a better model for short-term wind power prediction

  16. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  17. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472

  18. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten ePachur

    2013-09-01

    Full Text Available This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called "similarity." In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.
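The priority heuristic's stopping rule can be sketched for two-outcome gain gambles as follows; this is a simplified reading of the published reason order and 1/10 aspiration levels, not the authors' full specification:

```python
# Sketch of the priority heuristic: reasons are examined in a fixed
# order (minimum outcome, probability of the minimum outcome, maximum
# outcome) and search stops at the first decisive reason.

def priority_heuristic(g1, g2):
    """Each gamble is (min_outcome, p_min, max_outcome)."""
    max_out = max(g1[2], g2[2])
    # Reason 1: minimum outcomes; aspiration = 1/10 of the maximal outcome
    if abs(g1[0] - g2[0]) >= 0.1 * max_out:
        return g1 if g1[0] > g2[0] else g2
    # Reason 2: probabilities of the minimum outcomes; aspiration = 0.1
    if abs(g1[1] - g2[1]) >= 0.1:
        return g1 if g1[1] < g2[1] else g2
    # Reason 3: maximum outcomes decide
    return g1 if g1[2] > g2[2] else g2

a = (0.0, 0.5, 100.0)   # 50% chance of 0, else 100
b = (40.0, 0.5, 60.0)   # 50% chance of 40, else 60
chosen = priority_heuristic(a, b)   # min outcomes differ by 40 >= 10: stop
```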

  19. Individualized prediction of perineural invasion in colorectal cancer: development and validation of a radiomics prediction model.

    Science.gov (United States)

    Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi

    2018-02-01

    To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
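Discrimination above is reported as Harrell's c-index; for a binary endpoint it reduces to the fraction of (event, non-event) pairs in which the event case received the higher predicted risk. The scores and PNI labels below are invented:

```python
# Harrell's concordance index for a binary endpoint, counting
# concordant pairs and crediting ties with 0.5.

def c_index(scores, labels):
    concordant = ties = total = 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] == 1 and labels[j] == 0:
                total += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / total

risk = [0.9, 0.8, 0.3, 0.2, 0.6]   # hypothetical nomogram risk scores
pni  = [1,   1,   0,   0,   0]     # hypothetical PNI status
c = c_index(risk, pni)             # perfect ranking here gives 1.0
```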

  20. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest have seen increasing use for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
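One simple way to approximate the prediction uncertainty of a bagged ensemble, in the spirit of the record above, is the spread of the individual trees' predictions at a query point. Here single-split stumps on 1-D toy data stand in for full regression trees; this is an illustration of the idea, not the authors' method:

```python
import random
import statistics

def fit_stump(xs, ys):
    """Best single-split regression stump by residual sum of squares."""
    best = None
    for s in xs:
        left  = [y for x, y in zip(xs, ys) if x <= s]
        right = [y for x, y in zip(xs, ys) if x > s]
        if not left or not right:
            continue
        ml, mr = statistics.mean(left), statistics.mean(right)
        rss = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or rss < best[0]:
            best = (rss, s, ml, mr)
    _, s, ml, mr = best
    return lambda x: ml if x <= s else mr

random.seed(0)
xs = [i / 10 for i in range(20)]
ys = [0.0 if x < 1.0 else 1.0 for x in xs]   # clean step function

trees = []
for _ in range(25):                           # bootstrap resamples
    idx = [random.randrange(len(xs)) for _ in xs]
    trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))

preds = [t(0.5) for t in trees]               # per-tree predictions at x = 0.5
mean_pred = statistics.mean(preds)
spread = statistics.pstdev(preds)             # crude uncertainty estimate
```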

  1. Deep Flare Net (DeFN) Model for Solar Flare Prediction

    Science.gov (United States)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Ishii, M.

    2018-05-01

    We developed a solar flare prediction model using a deep neural network (DNN) named Deep Flare Net (DeFN). This model can calculate the probability of flares occurring in the following 24 hr in each active region, which is used to determine the most likely maximum classes of flares via a binary classification (e.g., ≥M class versus <M class). To statistically predict flares, the DeFN model was trained to optimize the skill score, i.e., the true skill statistic (TSS). As a result, we succeeded in predicting flares with TSS = 0.80 for ≥M-class flares and TSS = 0.63 for ≥C-class flares. Note that in usual DNN models, the prediction process is a black box. However, in the DeFN model, the features are manually selected, and it is possible to analyze which features are effective for prediction after evaluation.
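The true skill statistic that DeFN optimizes is the hit rate minus the false-alarm rate; a direct implementation, with invented forecast and event lists:

```python
# True skill statistic: TSS = TP/(TP+FN) - FP/(FP+TN).
# +1 is a perfect forecast, 0 is no skill, -1 is perfectly wrong.

def true_skill_statistic(predicted, observed):
    tp = sum(1 for p, o in zip(predicted, observed) if p and o)
    fn = sum(1 for p, o in zip(predicted, observed) if not p and o)
    fp = sum(1 for p, o in zip(predicted, observed) if p and not o)
    tn = sum(1 for p, o in zip(predicted, observed) if not p and not o)
    return tp / (tp + fn) - fp / (fp + tn)

flare_pred = [1, 1, 0, 0, 1, 0, 0, 0]   # hypothetical >=M forecasts
flare_obs  = [1, 1, 0, 0, 0, 1, 0, 0]   # hypothetical observed events
tss = true_skill_statistic(flare_pred, flare_obs)
```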

  2. Hidden markov model for the prediction of transmembrane proteins using MATLAB.

    Science.gov (United States)

    Chaturvedi, Navaneet; Shanker, Sudhanshu; Singh, Vinay Kumar; Sinha, Dhiraj; Pandey, Paras Nath

    2011-01-01

    Since membranous proteins play a key role in drug targeting, transmembrane protein prediction is an active and challenging area of the biological sciences. Location-based prediction of transmembrane proteins is significant for functional annotation of protein sequences. Hidden Markov model based methods have been widely applied for transmembrane topology prediction. Here we present a revised and more comprehensible model than an existing one for transmembrane protein prediction. Scripts were built and compiled in MATLAB for parameter estimation of the model, and the model was applied to amino acid sequences to identify transmembrane segments and their adjacent locations. The estimated model of transmembrane topology was based on the TMHMM model architecture. Only 7 super states are defined in the given dataset, which were converted to 96 states on the basis of their length in sequence. The prediction accuracy of the model was observed to be about 74%, which is good enough in the area of transmembrane topology prediction. We therefore conclude that the hidden Markov model plays a crucial role in transmembrane helix prediction on the MATLAB platform and could also be useful for drug discovery strategies. The database is available for free from bioinfonavneet@gmail.com and vinaysingh@bhu.ac.in.
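The decoding step behind TMHMM-style topology prediction is the Viterbi algorithm; a two-state sketch with invented, untrained probabilities (not TMHMM's parameters):

```python
import math

# Minimal Viterbi decoding for a two-state HMM (membrane vs. loop).
# 'H' = hydrophobic residue, 'P' = polar residue; all probabilities
# are illustrative toy values.

STATES = ["membrane", "loop"]
START  = {"membrane": 0.5, "loop": 0.5}
TRANS  = {"membrane": {"membrane": 0.9, "loop": 0.1},
          "loop":     {"membrane": 0.1, "loop": 0.9}}
EMIT   = {"membrane": {"H": 0.8, "P": 0.2},
          "loop":     {"H": 0.3, "P": 0.7}}

def viterbi(seq):
    V = [{s: math.log(START[s]) + math.log(EMIT[s][seq[0]]) for s in STATES}]
    back = []
    for obs in seq[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: V[-1][p] + math.log(TRANS[p][s]))
            col[s] = V[-1][prev] + math.log(TRANS[prev][s]) + math.log(EMIT[s][obs])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    state = max(STATES, key=lambda s: V[-1][s])
    path = [state]
    for ptr in reversed(back):     # backtrack through the pointers
        state = ptr[state]
        path.append(state)
    return path[::-1]

path = viterbi("HHHHPPPP")   # hydrophobic run decodes as membrane
```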

  3. 'Combined reflectance stratigraphy' - subdivision of loess successions by diffuse reflectance spectrometry (DRS)

    Science.gov (United States)

    Szeberényi, Jozsef; Bradak-Hayashi, Balázs; Kiss, Klaudia; Kovács, József; Varga, György; Balázs, Réka; Szalai, Zoltán; Viczián, István

    2016-04-01

    The different varieties of loess (and intercalated paleosol layers) together constitute one of the most widespread terrestrial sediments, which was deposited, altered, and redeposited in the course of the changing climatic conditions of the Pleistocene. To reveal more information about Pleistocene climate cycles and/or environments, detailed lithostratigraphical subdivision and classification of the loess variations and paleosols are necessary. Beside numerous methods such as various field measurements, semi-quantitative tests and laboratory investigations, diffuse reflectance spectroscopy (DRS) is one of the well-established methods applied to loess/paleosol sequences. Generally, DRS has been used to separate the detrital and pedogenic mineral components of loess sections by the hematite/goethite ratio. DRS has also been applied jointly with various environmental magnetic investigations such as magnetic susceptibility and isothermal remanent magnetization measurements. In our study, the so-called "combined reflectance stratigraphy" method was developed. First, a complex mathematical method was applied to compare the results of the spectral reflectance measurements. One of the most preferred multivariate methods is cluster analysis. Its scope is to group and compare the loess variations and paleosols based on the similarity and common properties of their reflectance curves. Second, beside the basic subdivision of the profiles by the different reflectance curves of the layers, the most characteristic wavelength section of the reflectance curve was determined. These sections played the most important role during the classification of the different materials of the section. The reflectance values of individual samples belonging to the characteristic wavelength were plotted as a function of depth and correlated well with other proxies like grain size distribution and magnetic susceptibility data. The results of the correlation showed the significance of

  4. Seismic stratigraphy and late Quaternary shelf history, south-central Monterey Bay, California

    Science.gov (United States)

    Chin, J.L.; Clifton, H.E.; Mullins, H.T.

    1988-01-01

    The south-central Monterey Bay shelf is a high-energy, wave-dominated, tectonically active coastal region on the central California continental margin. A prominent feature of this shelf is a sediment lobe off the mouth of the Salinas River that has surface expression. High-resolution seismic-reflection profiles reveal that an angular unconformity (Quaternary?) underlies the entire shelf and separates undeformed strata above it from deformed strata below it. The Salinas River lobe is a convex bulge on the shelf covering an area of approximately 72 km2 in water depths from 10 to 90 m. It reaches a maximum thickness of 35 m about 2.5 km seaward of the river mouth and thins in all directions away from this point. Adjacent shelf areas are characterized by only a thin (2 to 5 m thick) and uniform veneer of sediment. Acoustic stratigraphy of the lobe is complex and is characterized by at least three unconformity-bounded depositional sequences. Acoustically, these sequences are relatively well bedded. Acoustic foresets occur within the intermediate sequence and dip seaward at 0.7?? to 2.0??. Comparison with sedimentary sequences in uplifted onshore Pleistocene marine-terrace deposits of the Monterey Bay area, which were presumably formed in a similar setting under similar processes, suggests that a general interpretation can be formulated for seismic stratigraphic patterns. Depositional sequences are interpreted to represent shallowing-upwards progradational sequences of marine to nonmarine coastal deposits formed during interglacial highstands and/or during early stages of falling sea level. Acoustic foresets within the intermediate sequence are evidence of seaward progradation. Acoustic unconformities that separate depositional sequences are interpreted as having formed largely by shoreface planation and may be the only record of the intervening transgressions. The internal stratigraphy of the Salinas River lobe thus suggests that at least several late Quaternary

  5. High-Resolution Subsurface Imaging and Stratigraphy of Quaternary Deposits, Marapanim Estuary, Northern Brazil

    Science.gov (United States)

    Silva, C. A.; Souza Filho, P. M.; Gouvea Luiz, J.

    2007-05-01

    The Marapanim estuary is situated in the Para Coastal Plain, northern Brazil. It is characterized by an embayed coastline developed on Neogene and Quaternary sediments of the Barreiras and Pos-Barreiras Group. This system is strongly influenced by macrotidal regimes with semidiurnal tides and by humid tropical climate conditions. The interpretation of GPR reflections presented in this paper is based on correlation of the GPR signal with stratigraphic data acquired on the coastal plain through five cores taken along GPR survey lines from the recent deposits, and on outcrops observed along the coastal area. The profiles were obtained using a Geophysical Survey Systems Inc. Model YR-2 GPR with a monostatic 700 MHz antenna, which permitted records of subsurface deposits to 20 m depth. Fifty-four radar sections were collected, totaling 4,360 m. The field data were analyzed using RADAN software and applying different filters. The interpretation of radar facies followed the principles of seismic stratigraphy, which permitted analysis of the sedimentary facies and facies architecture in order to understand the lithology, depositional environments, and stratigraphic evolution of this sedimentary succession, as well as leading to a more precise stratigraphic framework for the Neogene to Quaternary deposits of the Marapanim coastal plain. Facies characteristics and sedimentologic aspects (i.e., texture, composition, and structure) were investigated in five cores collected with a Rammkernsonde system. The locations were determined using a Global Positioning System. Remote sensing images (Landsat-7 ETM+ and RADARSAT-1 Wide) and SRTM elevation data were used to identify and define the distribution of the different morphologic units. The coastal plain extends west-east of the mouth of the Marapanim River, where six morphologic units were identified: paleodune, strand plain, recent coastal dune, macrotidal sandy beach, mangrove, and salt marsh.
The integration

  6. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

    Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period, and cohort parameters, random walks of first and second order are available, with and without an additional unstructured component. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, the posterior deviance, DIC, and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
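
    The point projection of a second-order random-walk prior, as used for the forecasts above, can be illustrated in a few lines. This is a hedged sketch of the idea only; the `project_rw2` helper and the log-rate values are illustrative and not part of BAMP:

```python
import math

def project_rw2(effects, n_ahead):
    """Point forecast under a second-order random-walk prior: each new
    value continues the local linear trend, E[x_{t+1}] = 2*x_t - x_{t-1}."""
    out = list(effects)
    for _ in range(n_ahead):
        out.append(2 * out[-1] - out[-2])
    return out[len(effects):]

# illustrative log death rates for five observed periods
log_rates = [-4.0, -4.1, -4.25, -4.35, -4.5]
future_log = project_rw2(log_rates, 3)
future_rates = [math.exp(x) for x in future_log]
```

    In the full Bayesian treatment the projection is a distribution rather than a point, but the posterior mean of the projected period effect follows this linear-trend continuation.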

  7. Model predictive control of a wind turbine modelled in Simpack

    International Nuclear Information System (INIS)

    Jassmann, U; Matzke, D; Reiter, M; Abel, D; Berroth, J; Schelenz, R; Jacobs, G

    2014-01-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also increases the loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are increasingly extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control that has recently received more attention from the wind industry is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. actuator limits. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine
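
    The receding-horizon idea described above can be sketched independently of any turbine model. The following toy controller uses hypothetical scalar dynamics and cost weights, with a brute-force grid search standing in for the quadratic program solved in a real MPC; it shows only how a cost function plus input constraints yield a control choice:

```python
def mpc_step(x, x_ref, a, b, horizon, u_min, u_max, candidates):
    """One receding-horizon step for x_{k+1} = a*x_k + b*u_k: evaluate a
    quadratic tracking cost over the horizon for each admissible constant
    input and keep the best (a brute-force stand-in for the online QP)."""
    best_u, best_cost = None, float("inf")
    for u in candidates:
        if not (u_min <= u <= u_max):
            continue  # input constraint handled by exclusion
        cost, xs = 0.0, x
        for _ in range(horizon):
            xs = a * xs + b * u
            cost += (xs - x_ref) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# drive the state from 0 toward a reference of 1 with |u| <= 1
grid = [i / 10 for i in range(-20, 21)]
u0 = mpc_step(0.0, 1.0, a=0.9, b=0.5, horizon=5, u_min=-1.0, u_max=1.0, candidates=grid)
```

    In a real implementation the optimization is re-run at every sampling instant with the newly measured state, and only the first move of the optimal sequence is applied.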

  8. Model predictive control of a wind turbine modelled in Simpack

    Science.gov (United States)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also increases the loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are increasingly extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control that has recently received more attention from the wind industry is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. actuator limits. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine to

  9. The North American Multi-Model Ensemble (NMME): Phase-1 Seasonal to Interannual Prediction, Phase-2 Toward Developing Intra-Seasonal Prediction

    Science.gov (United States)

    Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily

    2013-01-01

    The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has proven to produce better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multi-model ensemble approach yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011) a collaborative and coordinated implementation strategy for a NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and discusses the complementary skill associated with individual models.
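
    The core of the multi-model ensemble argument can be illustrated with a toy example. The numbers below are invented, with the two models' errors deliberately anti-correlated, which is the regime in which averaging helps most:

```python
def rmse(pred, obs):
    """Root mean square error of a forecast against observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

obs      = [1.0, 2.0, 3.0, 4.0]
model_a  = [1.4, 2.5, 2.6, 4.4]   # invented forecasts; errors anti-correlated
model_b  = [0.7, 1.6, 3.5, 3.7]   # with those of model_a by construction
ensemble = [(a + b) / 2 for a, b in zip(model_a, model_b)]
```

    When the member errors are independent or opposed, they partially cancel in the mean, so the equal-weight ensemble beats every member; when errors are strongly correlated, the gain shrinks.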

  10. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  11. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. 
The models performed similarly in capturing dependencies on dose and

  12. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) into clinical practice for infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to test whether combining more than one biomarker improves the prediction of embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. The study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] was performed for five genes, analyzed according to embryo quality level. Two models were tested for embryo quality prediction: a binary logistic model and a decision tree model. Expression levels of the five genes were taken as the main outcome, and the area under the curve (AUC) was calculated for the two prediction models. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high-quality and low-quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two prediction models yielded similar predictive power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.
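
    The AUC values reported above can be computed without any ROC plotting, via the rank-sum identity. A minimal sketch with illustrative scores and labels (not the study's data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic:
    the probability that a random positive outscores a random negative,
    counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# illustrative model scores for eight embryos (1 = high quality)
labels = [1, 1, 1, 1, 0, 0, 0, 0]
logistic_scores = [0.9, 0.7, 0.6, 0.4, 0.8, 0.3, 0.2, 0.1]
logistic_auc = auc(logistic_scores, labels)
```

    An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why the reported values around 0.72-0.73 indicate a moderately informative classifier.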

  13. Model Predictive Control of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard

    2015-01-01

    In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states combined with a model...... connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth. MPC...

  14. Three-model ensemble wind prediction in southern Italy

    Science.gov (United States)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.

  15. QSAR Modeling and Prediction of Drug-Drug Interactions.

    Science.gov (United States)

    Zakharov, Alexey V; Varlamova, Ekaterina V; Lagunin, Alexey A; Dmitriev, Alexander V; Muratov, Eugene N; Fourches, Denis; Kuz'min, Victor E; Poroikov, Vladimir V; Tropsha, Alexander; Nicklaus, Marc C

    2016-02-01

    Severe adverse drug reactions (ADRs) are the fourth leading cause of fatality in the U.S. with more than 100,000 deaths per year. As up to 30% of all ADRs are believed to be caused by drug-drug interactions (DDIs), typically mediated by cytochrome P450s, possibilities to predict DDIs from existing knowledge are important. We collected data from public sources on 1485, 2628, 4371, and 27,966 possible DDIs mediated by four cytochrome P450 isoforms 1A2, 2C9, 2D6, and 3A4 for 55, 73, 94, and 237 drugs, respectively. For each of these data sets, we developed and validated QSAR models for the prediction of DDIs. As a unique feature of our approach, the interacting drug pairs were represented as binary chemical mixtures in a 1:1 ratio. We used two types of chemical descriptors: quantitative neighborhoods of atoms (QNA) and simplex descriptors. Radial basis functions with self-consistent regression (RBF-SCR) and random forest (RF) were utilized to build QSAR models predicting the likelihood of DDIs for any pair of drug molecules. Our models showed balanced accuracy of 72-79% for the external test sets with a coverage of 81.36-100% when a conservative threshold for the model's applicability domain was applied. We generated virtually all possible binary combinations of marketed drugs and employed our models to identify drug pairs predicted to be instances of DDI. More than 4500 of these predicted DDIs that were not found in our training sets were confirmed by data from the DrugBank database.
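
    The key representational trick above, treating a drug pair as a 1:1 binary mixture, requires an encoding that is invariant to the order of the two drugs. A hedged sketch follows; the mean-plus-absolute-difference encoding is one common symmetric scheme, not necessarily the exact one used with the QNA and simplex descriptors:

```python
def mixture_descriptor(desc_a, desc_b):
    """Encode a 1:1 binary mixture so the representation does not depend
    on which drug is listed first: per-feature mean concatenated with
    per-feature absolute difference."""
    mean = [(a + b) / 2 for a, b in zip(desc_a, desc_b)]
    diff = [abs(a - b) for a, b in zip(desc_a, desc_b)]
    return mean + diff

drug_x = [1.0, 0.0, 3.5]   # illustrative descriptor vectors
drug_y = [2.0, 4.0, 0.5]
pair = mixture_descriptor(drug_x, drug_y)
```

    Order invariance matters because a DDI label applies to the pair, not to an ordered tuple; an asymmetric encoding would force the model to learn the same interaction twice.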

  16. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary costs directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  17. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex does a model need to be to provide useful predictions is a matter of continuous debate across environmental sciences. In the species distributions modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown...... that predictive performance does not always increase with complexity. Testing of species distributions models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

  18. Predictive modelling using neuroimaging data in the presence of confounds.

    Science.gov (United States)

    Rao, Anil; Monteiro, Joao M; Mourao-Miranda, Janaina

    2017-04-15

    When training predictive models from neuroimaging data, we typically have available non-imaging variables such as age and gender that affect the imaging data but which we may be uninterested in from a clinical perspective. Such variables are commonly referred to as 'confounds'. In this work, we firstly give a working definition for confound in the context of training predictive models from samples of neuroimaging data. We define a confound as a variable which affects the imaging data and has an association with the target variable in the sample that differs from that in the population-of-interest, i.e., the population over which we intend to apply the estimated predictive model. The focus of this paper is the scenario in which the confound and target variable are independent in the population-of-interest, but the training sample is biased due to a sample association between the target and confound. We then discuss standard approaches for dealing with confounds in predictive modelling such as image adjustment and including the confound as a predictor, before deriving and motivating an Instance Weighting scheme that attempts to account for confounds by focusing model training so that it is optimal for the population-of-interest. We evaluate the standard approaches and Instance Weighting in two regression problems with neuroimaging data in which we train models in the presence of confounding, and predict samples that are representative of the population-of-interest. For comparison, these models are also evaluated when there is no confounding present. In the first experiment we predict the MMSE score using structural MRI from the ADNI database with gender as the confound, while in the second we predict age using structural MRI from the IXI database with acquisition site as the confound. Considered over both datasets we find that none of the methods for dealing with confounding gives more accurate predictions than a baseline model which ignores confounding, although
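
    For discrete targets and confounds, an instance-weighting scheme of the kind motivated above can be sketched as follows. This is an illustrative construction, not the authors' exact derivation: each sample is weighted by how much its target-confound combination is over- or under-represented relative to independence.

```python
from collections import Counter

def instance_weights(targets, confounds):
    """Weight each sample by P(y) * P(c) / P(y, c), with probabilities
    estimated from sample frequencies, so that target and confound look
    independent in the reweighted sample (discrete case)."""
    n = len(targets)
    p_y = Counter(targets)
    p_c = Counter(confounds)
    p_yc = Counter(zip(targets, confounds))
    return [(p_y[y] / n) * (p_c[c] / n) / (p_yc[(y, c)] / n)
            for y, c in zip(targets, confounds)]

# biased sample: the confound perfectly tracks the target,
# so every over-represented combination is down-weighted
weights = instance_weights([0, 0, 1, 1], [0, 0, 1, 1])
```

    When target and confound are already independent in the sample, every weight is 1 and the weighted estimator reduces to ordinary training, which is the behaviour one wants from such a correction.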

  19. A deep auto-encoder model for gene expression prediction.

    Science.gov (United States)

    Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua

    2017-11-17

    Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors, including genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we build a deep auto-encoder model to assess how well genetic variants predict gene expression changes. This new deep learning model is a regression-based predictive model based on the MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the usage of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models, including Lasso, Random Forests, and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely from genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, namely building predictive models to understand the contribution of genotypes to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics.

  20. The Pindiro Group (Triassic to Early Jurassic Mandawa Basin, southern coastal Tanzania): Definition, palaeoenvironment, and stratigraphy

    Science.gov (United States)

    Hudson, W. E.; Nicholas, C. J.

    2014-04-01

    This paper defines the Pindiro Group of the Mandawa Basin, southern coastal Tanzania based on studies conducted between 2006 and 2009 with the objective of understanding the evolution of this basin. This work draws upon field data, hydrocarbon exploration data, unconventional literature, and the scant published materials available. The paper focuses on the evolution, depositional environments, and definition of the lowermost sedimentary package, which overlies unconformably the metamorphic basement of Precambrian age. The package is described here as the Pindiro Group and it forms the basal group of the Mandawa Basin stratigraphy.

  1. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    CR: cultural resource; CRM: cultural resource management; CRPM: Cultural Resource Predictive Modeling; DoD: Department of Defense; ESTCP: Environmental... To meet cultural resource management ( CRM ) legal obligations under NEPA and the NHPA, military installations need to demonstrate that CRM decisions are based on objective... maxim "one size does not fit all," and demonstrate that DoD installations have many different CRM needs that can and should be met through a variety

  2. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  3. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned in a way that enhances astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
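
    The AR-pole parameter central to this model can be illustrated with a second-order example (the study uses order 5; order 2 keeps the root-finding analytic via the quadratic formula). The fitting routine and the simulated signal below are assumptions for illustration, not the study's pipeline:

```python
import cmath
import random

def ar2_fit(x):
    """Ordinary least squares for x_t = a1*x_{t-1} + a2*x_{t-2},
    solved via the 2x2 normal equations."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t - 1] * x[t - 1]
        s12 += x[t - 1] * x[t - 2]
        s22 += x[t - 2] * x[t - 2]
        b1 += x[t] * x[t - 1]
        b2 += x[t] * x[t - 2]
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

def pole_magnitudes(a1, a2):
    """Magnitudes of the roots of z^2 - a1*z - a2 = 0; values close to 1
    indicate a slowly decaying (near-unstable) process."""
    d = cmath.sqrt(a1 * a1 + 4.0 * a2)
    return sorted(abs((a1 + s * d) / 2.0) for s in (1.0, -1.0))

# simulate an AR(2) signal whose true poles have magnitude sqrt(0.5) ~ 0.707
random.seed(1)
x = [0.0, 0.0]
for _ in range(3000):
    x.append(1.0 * x[-1] - 0.5 * x[-2] + random.gauss(0.0, 1.0))
a1, a2 = ar2_fit(x)
mags = pole_magnitudes(a1, a2)
```

    The "mean average magnitude of AR poles" tracked in the study is the average of such magnitudes, recomputed per repetition as the SEMG spectrum shifts with fatigue.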

  4. Longitudinal modeling to predict vital capacity in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Jahandideh, Samad; Taylor, Albert A; Beaulieu, Danielle; Keymer, Mike; Meng, Lisa; Bian, Amy; Atassi, Nazem; Andrews, Jinsy; Ennist, David L

    2018-05-01

    Death in amyotrophic lateral sclerosis (ALS) patients is related to respiratory failure, which is assessed in clinical settings by measuring vital capacity. We developed ALS-VC, a modeling tool for longitudinal prediction of vital capacity in ALS patients. A gradient boosting machine (GBM) model was trained using the PRO-ACT (Pooled Resource Open-access ALS Clinical Trials) database of over 10,000 ALS patient records. We hypothesized that a reliable vital capacity predictive model could be developed using PRO-ACT. The model was used to compare FVC predictions with a 30-day run-in period to predictions made from just baseline. The internal root mean square deviations (RMSD) of the run-in and baseline models were 0.534 and 0.539, respectively, across the 7L FVC range captured in PRO-ACT. The RMSDs of the run-in and baseline models using an unrelated, contemporary external validation dataset (0.553 and 0.538, respectively) were comparable to the internal validation. The model was shown to have similar accuracy for predicting SVC (RMSD = 0.562). The most important features for both run-in and baseline models were "Baseline forced vital capacity" and "Days since baseline." We developed ALS-VC, a GBM model trained with the PRO-ACT ALS dataset that provides vital capacity predictions generalizable to external datasets. The ALS-VC model could be helpful in advising and counseling patients, and, in clinical trials, it could be used to generate virtual control arms against which observed outcomes could be compared, or used to stratify patients into slowly, average, and rapidly progressing subgroups.

  5. Preoperative prediction model of outcome after cholecystectomy for symptomatic gallstones

    DEFF Research Database (Denmark)

    Borly, L; Anderson, I B; Bardram, L

    1999-01-01

    and sonography evaluated gallbladder motility, gallstones, and gallbladder volume. Preoperative variables in patients with or without postcholecystectomy pain were compared statistically, and significant variables were combined in a logistic regression model to predict the postoperative outcome. RESULTS: Eighty...... and by the absence of 'agonizing' pain and of symptoms coinciding with pain (P model 15 of 18 predicted patients had postoperative pain (PVpos = 0.83). Of 62 patients predicted as having no pain postoperatively, 56 were pain-free (PVneg = 0.90). Overall accuracy...... was 89%. CONCLUSION: From this prospective study a model based on preoperative symptoms was developed to predict postcholecystectomy pain. Since intrastudy reclassification may give too optimistic results, the model should be validated in future studies....
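
    The reported predictive values follow directly from the confusion counts given in the abstract (15 of 18 predicted-pain patients and 56 of 62 predicted-pain-free patients were classified correctly); a minimal sketch:

```python
def predictive_values(tp, fp, tn, fn):
    """Positive predictive value, negative predictive value, and overall
    accuracy from a 2x2 confusion table."""
    pv_pos = tp / (tp + fp)
    pv_neg = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return pv_pos, pv_neg, accuracy

# counts implied by the abstract: 15/18 predicted-pain and
# 56/62 predicted-pain-free patients were correct
pv_pos, pv_neg, accuracy = predictive_values(tp=15, fp=3, tn=56, fn=6)
```

    Note that predictive values, unlike sensitivity and specificity, depend on the prevalence of postoperative pain in the study population, which is one reason external validation is emphasized.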

  6. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...... and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC....
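
    The prediction step inside an FIR-based predictive controller is a convolution of the impulse response coefficients with recent inputs; a minimal sketch with coefficients invented for illustration:

```python
def fir_output(impulse_response, recent_inputs):
    """FIR model output at the current sample: y_k = sum_i h_i * u_{k-i},
    with inputs ordered newest first."""
    return sum(h * u for h, u in zip(impulse_response, recent_inputs))

# illustrative truncated impulse-response coefficients
h = [0.5, 0.25, 0.125, 0.0625]
y = fir_output(h, [1.0, 1.0, 1.0, 1.0])   # steady unit input
```

    Because the predicted output is linear in the impulse response coefficients, uncertainty in those coefficients maps directly into output prediction error, which is what the paper's plant-model mismatch simulations probe.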

  7. Including model uncertainty in the model predictive control with output feedback

    Directory of Open Access Journals (Sweden)

    Rodrigues M.A.

    2002-01-01

    Full Text Available This paper addresses the development of an efficient numerical output feedback robust model predictive controller for open-loop stable systems. Stability of the closed loop is guaranteed by using an infinite horizon predictive controller and a stable state observer. The performance and the computational burden of this approach are compared to a robust predictive controller from the literature. The case used for this study is based on an industrial gasoline debutanizer column.

  8. Global vegetation change predicted by the modified Budyko model

    Energy Technology Data Exchange (ETDEWEB)

    Monserud, R.A.; Tchebakova, N.M.; Leemans, R. (US Department of Agriculture, Moscow, ID (United States). Intermountain Research Station, Forest Service)

    1993-09-01

    A modified Budyko global vegetation model is used to predict changes in global vegetation patterns resulting from climate change (CO[sub 2] doubling). Vegetation patterns are predicted using a model based on a dryness index and potential evaporation determined by solving radiation balance equations. Climate change scenarios are derived from predictions from four General Circulation Models (GCMs) of the atmosphere (GFDL, GISS, OSU, and UKMO). All four GCM scenarios show similar trends in vegetation shifts and in areas that remain stable, although the UKMO scenario predicts greater warming than the others. Climate change maps produced by all four GCM scenarios show good agreement with the current climate vegetation map for the globe as a whole, although over half of the vegetation classes show only poor to fair agreement. The most stable areas are Desert and Ice/Polar Desert. Because most of the predicted warming is concentrated in the Boreal and Temperate zones, vegetation there is predicted to undergo the greatest change. Most vegetation classes in the Subtropics and Tropics are predicted to expand. Any shift in the Tropics favouring either Forest over Savanna, or vice versa, will be determined by the magnitude of the increased precipitation accompanying global warming. Although the model predicts equilibrium conditions to which many plant species cannot adjust (through migration or microevolution) in the 50-100 y needed for CO[sub 2] doubling, it is not clear if projected global warming will result in drastic or benign vegetation change. 72 refs., 3 figs., 3 tabs.
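
    The dryness-index logic of a Budyko-type scheme can be sketched compactly. The zone thresholds and the latent-heat constant below are illustrative assumptions, not the calibration used by Monserud et al.:

```python
def dryness_index(net_radiation_mj, precip_mm, latent_heat_mj_per_mm=2.45):
    """Budyko radiative dryness index: annual net radiation divided by the
    energy needed to evaporate the annual precipitation (per unit area)."""
    return net_radiation_mj / (latent_heat_mj_per_mm * precip_mm)

def vegetation_zone(d):
    """Map a dryness index to a broad zone (illustrative thresholds)."""
    if d < 0.35:
        return "tundra"
    if d < 1.1:
        return "forest"
    if d < 2.3:
        return "steppe"
    if d < 3.4:
        return "semidesert"
    return "desert"

zone = vegetation_zone(dryness_index(1500.0, 800.0))
```

    Under a warming scenario, net radiation rises faster than precipitation in many grid cells, pushing the index upward and shifting cells toward drier classes, which is the mechanism behind the predicted Boreal and Temperate zone changes.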

  9. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

Landberg, L. [Risø National Lab., Roskilde (Denmark)]

    1997-12-31

This paper will describe a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given here; however, similar results from Europe will be given.

  10. Predicting birth weight with conditionally linear transformation models.

    Science.gov (United States)

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.

  11. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

Full Text Available. As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain in competition with competitors. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models by combining four different data mining techniques for churn prediction, which are backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of the models aims to cluster data into churner and nonchurner groups and also filter out unrepresentative data or outliers. Then, the clustered data as the outputs are used to assign customers to churner and nonchurner groups by the second technique. Finally, the correctly classified data are used to create the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model significantly performs better than the two other hierarchical models.

  12. Discrete fracture modelling for the Stripa tracer validation experiment predictions

    International Nuclear Information System (INIS)

    Dershowitz, W.; Wallmann, P.

    1992-02-01

Groundwater flow and transport through three-dimensional networks of discrete fractures was modeled to predict the recovery of tracer from tracer injection experiments conducted during phase 3 of the Stripa site characterization and validation project. Predictions were made on the basis of an updated version of the site-scale discrete fracture conceptual model used for flow predictions and preliminary transport modelling. In this model, individual fractures were treated as stochastic features described by probability distributions of geometric and hydrologic properties. Fractures were divided into three populations: fractures in fracture zones near the drift, non-fracture-zone fractures within 31 m of the drift, and fractures in fracture zones more than 31 m from the drift axis. Fractures outside fracture zones are not modelled beyond 31 m from the drift axis. Transport predictions were produced using the FracMan discrete fracture modelling package for each of five tracer experiments. Output was produced in the seven formats specified by the Stripa task force on fracture flow modelling. (au)

  13. Multivariate statistical models for disruption prediction at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Aledda, R.; Cannas, B.; Fanni, A.; Sias, G.; Pautasso, G.

    2013-01-01

In this paper, a disruption prediction system for ASDEX Upgrade has been proposed that does not require disruption-terminated experiments to be implemented. The system consists of a data-based model, which is built using only a few input signals coming from successfully terminated pulses. A fault detection and isolation approach has been used, where the prediction is based on the analysis of the residuals of an auto-regressive exogenous input model. The prediction performance of the proposed system is encouraging when it is applied to the same set of campaigns used to implement the model. However, the false alarms significantly increase when we tested the system on discharges coming from experimental campaigns temporally far from those used to train the model. This is due to the well-known aging effect inherent in data-based models. The main advantage of the proposed method, with respect to other data-based approaches in the literature, is that it does not need data on experiments terminated with a disruption, as it uses a normal operating conditions model. This is a big advantage in the perspective of a prediction system for ITER, where only a limited number of disruptions can be allowed.
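The residual-based fault detection idea can be sketched in a few lines: fit a model on normal-operation data, then flag samples whose one-step prediction error exceeds a threshold derived from the training residuals. This is a toy first-order ARX model with synthetic data; the actual system uses more signals and a richer model.

```python
import numpy as np

# Toy residual-based anomaly detection with a first-order ARX model
# (assumption: y[t] = a*y[t-1] + b*u[t-1] + noise; true a=0.8, b=0.5).
rng = np.random.default_rng(0)

# Synthetic "normal operating conditions" training data
n = 500
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.05 * rng.normal()

# Least-squares fit of the ARX parameters [a, b]
X = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# Residuals on normal data set the alarm threshold
resid = y[1:] - X @ theta
threshold = 4.0 * resid.std()

def is_anomalous(y_prev, u_prev, y_now):
    """Flag a sample whose one-step prediction error exceeds the threshold."""
    return abs(y_now - (theta[0] * y_prev + theta[1] * u_prev)) > threshold

print(np.round(theta, 2))            # close to the true [0.8, 0.5]
print(is_anomalous(1.0, 0.0, 0.82))  # near the prediction -> normal
print(is_anomalous(1.0, 0.0, 3.0))   # far from the prediction -> anomaly
```

Because the threshold comes only from successfully terminated pulses, no disruption-terminated data is needed to build the detector, which is the point the abstract makes.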

  14. Modelling Chemical Reasoning to Predict and Invent Reactions.

    Science.gov (United States)

    Segler, Marwin H S; Waller, Mark P

    2017-05-02

    The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning, and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph and because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
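The local-structure intuition behind knowledge-graph link prediction can be illustrated with a toy score: rank a candidate missing link by how many neighbours its endpoints share. The graph below is hypothetical and the common-neighbour count is a deliberately simple stand-in; the paper's actual model is considerably more elaborate.

```python
from collections import defaultdict

# Toy link prediction on a tiny undirected graph (hypothetical nodes;
# a plain common-neighbour score as a stand-in for the paper's model).
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
nbrs = defaultdict(set)
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

def common_neighbour_score(u, v):
    """Score a candidate missing edge u-v by the number of shared neighbours."""
    return len(nbrs[u] & nbrs[v])

print(common_neighbour_score("A", "D"))  # A and D share neighbours B and C
```

Scoring every non-edge this way and ranking the results is the simplest form of "finding missing links by local structure"; each score is computed in constant time per candidate, which echoes the sub-second prediction times reported above.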

  15. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....

  16. Verification of some numerical models for operationally predicting mesoscale winds aloft

    International Nuclear Information System (INIS)

    Cornett, J.S.; Randerson, D.

    1977-01-01

Four numerical models are described for predicting mesoscale winds aloft for a 6 h period. These models are all tested statistically against persistence as the control forecast and against predictions made by operational forecasters. Mesoscale winds aloft data were used to initialize the models and to verify the predictions on an hourly basis. The model yielding the smallest root-mean-square vector errors (RMSVEs) was the one based on the most physics, which included advection, ageostrophic acceleration, vertical mixing and friction. Horizontal advection was found to be the most important term in reducing the RMSVEs, followed by ageostrophic acceleration, vertical advection, surface friction and vertical mixing. From a comparison of the mean absolute errors based on up to 72 independent wind-profile predictions made by operational forecasters, by the most complete model, and by persistence, we conclude that the model is the best wind predictor in the free air. In the boundary layer, the results tend to favor the forecaster for direction predictions. The speed predictions showed no overall superiority for any of these three approaches.
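The verification score used above, the root-mean-square vector error, is the RMS magnitude of the vector difference between predicted and observed horizontal wind components. A direct sketch (with hypothetical wind vectors in m/s):

```python
import math

# Root-mean-square vector error (RMSVE) between predicted and observed
# horizontal winds, each given as (u, v) component pairs in m/s.
def rmsve(pred, obs):
    sq_errors = [(pu - ou) ** 2 + (pv - ov) ** 2
                 for (pu, pv), (ou, ov) in zip(pred, obs)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

predicted = [(5.0, 0.0), (3.0, 4.0)]  # hypothetical forecasts
observed  = [(4.0, 0.0), (0.0, 0.0)]  # hypothetical verifications
print(rmsve(predicted, observed))     # sqrt((1 + 25) / 2)
```

Unlike separate speed and direction errors, the RMSVE penalizes both at once, which is why it is a natural single score for comparing the four models against persistence.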

  17. Modeling and prediction of Turkey's electricity consumption using Support Vector Regression

    International Nuclear Information System (INIS)

    Kavaklioglu, Kadir

    2011-01-01

    Support Vector Regression (SVR) methodology is used to model and predict Turkey's electricity consumption. Among various SVR formalisms, ε-SVR method was used since the training pattern set was relatively small. Electricity consumption is modeled as a function of socio-economic indicators such as population, Gross National Product, imports and exports. In order to facilitate future predictions of electricity consumption, a separate SVR model was created for each of the input variables using their current and past values; and these models were combined to yield consumption prediction values. A grid search for the model parameters was performed to find the best ε-SVR model for each variable based on Root Mean Square Error. Electricity consumption of Turkey is predicted until 2026 using data from 1975 to 2006. The results show that electricity consumption can be modeled using Support Vector Regression and the models can be used to predict future electricity consumption. (author)
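The two-stage architecture described above, one model per socio-economic indicator combined into a consumption prediction, can be sketched with simple linear models standing in for ε-SVR. All data below is made up; the point is the structure, not the numbers.

```python
import numpy as np

# Two-stage sketch: extrapolate each indicator with its own model, then
# map indicators to consumption. Hypothetical data; plain linear and
# log-linear fits stand in for the paper's epsilon-SVR models.
years = np.arange(1975, 2007)
population = 40.0 + 0.9 * (years - 1975)        # millions (hypothetical)
gnp = 100.0 * 1.04 ** (years - 1975)            # index (hypothetical)
consumption = 0.5 * population + 0.2 * gnp      # TWh (hypothetical)

# Stage 1: one model per indicator (year -> indicator)
pop_fit = np.polyfit(years, population, deg=1)
gnp_fit = np.polyfit(years, np.log(gnp), deg=1)  # log-linear for growth

# Stage 2: consumption as a function of the indicators
X = np.column_stack([population, gnp])
coef, *_ = np.linalg.lstsq(X, consumption, rcond=None)

def predict_consumption(year):
    pop = np.polyval(pop_fit, year)
    g = np.exp(np.polyval(gnp_fit, year))
    return coef[0] * pop + coef[1] * g

print(round(float(predict_consumption(2026)), 1))
```

Separating stage 1 from stage 2 is what lets the approach forecast to 2026: future values of each input variable are generated by that variable's own model before being fed to the consumption model.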

  18. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
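A common concrete recipe for Bayesian model averaging approximates each posterior model probability from the model's BIC. The sketch below uses that approximation with hypothetical submodels; the article's own software and priors may differ.

```python
import math

# BIC-based Bayesian model averaging (assumption: PMP_m proportional to
# exp(-BIC_m / 2), a standard large-sample approximation).
def bic_weights(bics):
    """Convert per-model BIC values into approximate posterior model
    probabilities, subtracting the best BIC for numerical stability."""
    best = min(bics)
    w = [math.exp(-(b - best) / 2) for b in bics]
    total = sum(w)
    return [x / total for x in w]

# Three hypothetical submodels, each estimating the same coefficient
bics = [1002.3, 1000.1, 1005.9]
coefs = [0.42, 0.55, 0.31]

pmp = bic_weights(bics)
averaged = sum(p * c for p, c in zip(pmp, coefs))
print([round(p, 3) for p in pmp], round(averaged, 3))
```

The model-averaged coefficient leans toward the submodel with the lowest BIC (highest PMP) while still borrowing from the others, which is the mechanism behind the improved predictive coverage reported above.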

  19. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  20. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

Full Text Available. Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art parametric benchmark model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  1. Ground Motion Prediction Model Using Artificial Neural Network

    Science.gov (United States)

    Dhanya, J.; Raghukanth, S. T. G.

    2018-03-01

This article focuses on developing a ground motion prediction equation based on the artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to rupture plane (Rrup), shear wave velocity in the region (Vs30) and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Earthquake Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns including weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with the existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city located in the Himalayan region.

  2. A novel Bayesian hierarchical model for road safety hotspot prediction.

    Science.gov (United States)

    Fawcett, Lee; Thorpe, Neil; Matthews, Joseph; Kremer, Karsten

    2017-02-01

    In this paper, we propose a Bayesian hierarchical model for predicting accident counts in future years at sites within a pool of potential road safety hotspots. The aim is to inform road safety practitioners of the location of likely future hotspots to enable a proactive, rather than reactive, approach to road safety scheme implementation. A feature of our model is the ability to rank sites according to their potential to exceed, in some future time period, a threshold accident count which may be used as a criterion for scheme implementation. Our model specification enables the classical empirical Bayes formulation - commonly used in before-and-after studies, wherein accident counts from a single before period are used to estimate counterfactual counts in the after period - to be extended to incorporate counts from multiple time periods. This allows site-specific variations in historical accident counts (e.g. locally-observed trends) to offset estimates of safety generated by a global accident prediction model (APM), which itself is used to help account for the effects of global trend and regression-to-mean (RTM). The Bayesian posterior predictive distribution is exploited to formulate predictions and to properly quantify our uncertainty in these predictions. The main contributions of our model include (i) the ability to allow accident counts from multiple time-points to inform predictions, with counts in more recent years lending more weight to predictions than counts from time-points further in the past; (ii) where appropriate, the ability to offset global estimates of trend by variations in accident counts observed locally, at a site-specific level; and (iii) the ability to account for unknown/unobserved site-specific factors which may affect accident counts. We illustrate our model with an application to accident counts at 734 potential hotspots in the German city of Halle; we also propose some simple diagnostics to validate the predictive capability of our
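The classical empirical Bayes estimate that this model extends can be sketched as a weighted blend of the accident prediction model (APM) estimate and the locally observed count, which is how regression-to-the-mean is corrected. The sketch assumes the standard negative-binomial formulation with overdispersion parameter k; the paper's hierarchical model generalizes this to multiple time periods.

```python
# Classical empirical Bayes safety estimate (assumption: negative-binomial
# APM with overdispersion parameter k, the standard Hauer-style formula).
def eb_estimate(apm_mu, observed, years, k):
    """Blend the APM rate (accidents/yr) with the observed count.
    The weight on the APM falls as its uncertainty grows."""
    w = 1.0 / (1.0 + years * apm_mu / k)
    return w * apm_mu + (1.0 - w) * observed / years

# Site where the APM predicts 2 accidents/yr but 5 were observed in 1 year
print(eb_estimate(apm_mu=2.0, observed=5, years=1, k=2.0))  # shrinks 5 toward 2
```

A raw count of 5 would overstate the site's risk if part of it is chance; the EB estimate of 3.5 discounts the excess, and ranking sites by such estimates (rather than raw counts) is what makes hotspot selection proactive rather than reactive.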

  3. A neighborhood statistics model for predicting stream pathogen indicator levels.

    Science.gov (United States)

    Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S

    2015-03-01

Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty or simpler statistical models. To assess spatio-temporal patterns of instream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and subsequently, the Markov Random Field model was exploited to develop a neighborhood statistics model for predicting instream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66% of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one event, the difference between prediction and observation was beyond one order of magnitude. The mean of all predicted values at 16 locations was approximately 1% higher than the mean of the observed values. The approach presented here will be useful while assessing instream contaminations such as pathogen/pathogen indicator levels at the watershed scale.

  4. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    Science.gov (United States)

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Outcome Prediction in Mathematical Models of Immune Response to Infection.

    Directory of Open Access Journals (Sweden)

    Manuel Mai

Full Text Available. Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.

  6. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generator (UTSG) used for electricity generation. (author)

  7. Probability-based collaborative filtering model for predicting gene-disease associations.

    Science.gov (United States)

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-12-28

Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned models. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with the results of four state-of-the-art approaches. The results show that PCFM performs better than other advanced approaches. PCFM can be leveraged for predictions of disease genes, especially for new human genes or diseases with no known relationships.
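The latent-factorization core of such a model can be sketched with plain SGD matrix factorization on a toy gene-disease matrix. PCFM adds probabilistic and heterogeneous regularization (and cross-species data) on top of this basic mechanism; everything below is a hypothetical miniature.

```python
import numpy as np

# Toy latent-factor model for a gene x disease association matrix R
# (1 = known association). Missing links are scored by the dot product
# of learned gene and disease factor vectors.
rng = np.random.default_rng(1)
R = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 1]], dtype=float)

k = 2                                   # latent dimension
G = rng.normal(scale=0.1, size=(3, k))  # gene factors
D = rng.normal(scale=0.1, size=(3, k))  # disease factors

lr, reg = 0.1, 0.01
for _ in range(2000):                   # SGD over all cells
    for i in range(3):
        for j in range(3):
            err = R[i, j] - G[i] @ D[j]
            G[i] += lr * (err * D[j] - reg * G[i])
            D[j] += lr * (err * G[i] - reg * D[j])

scores = G @ D.T
print(np.round(scores, 1))  # approximates R; large entries = likely links
```

Because the factors are shared across rows and columns, a gene with no individually known associations still receives scores through its similarity to other genes, which is why such models can handle new genes or diseases with few known relationships.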

  8. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction; implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models; the MPC algorithms based on neural multi-models (inspired by the idea of predictive control); the MPC algorithms with neural approximation with no on-line linearization; the MPC algorithms with guaranteed stability and robustness; and cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  9. Catalytic cracking models developed for predictive control purposes

    Directory of Open Access Journals (Sweden)

    Dag Ljungqvist

    1993-04-01

Full Text Available. The paper deals with state-space modeling issues in the context of model-predictive control, with application to catalytic cracking. Emphasis is placed on model establishment, verification and online adjustment. Both the Fluid Catalytic Cracking (FCC) and the Residual Catalytic Cracking (RCC) units are discussed. Catalytic cracking units involve complex interactive processes which are difficult to operate and control in an economically optimal way. The strong nonlinearities of the FCC process mean that the control calculation should be based on a nonlinear model with the relevant constraints included. However, the model can be simple compared to the complexity of the catalytic cracking plant. Model validity is ensured by a robust online model adjustment strategy. Model-predictive control schemes based on linear convolution models have been successfully applied to the supervisory dynamic control of catalytic cracking units, and the control can be further improved by the SSPC scheme.

  10. Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

    OpenAIRE

    Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos

    2016-01-01

    At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to ma...

  11. Factors Influencing the Predictive Power of Models for Predicting Mortality and/or Heart Failure Hospitalization in Patients With Heart Failure

    NARCIS (Netherlands)

    Ouwerkerk, Wouter; Voors, Adriaan A.; Zwinderman, Aeilko H.

    2014-01-01

The present paper systematically reviews and compares existing prediction models in order to establish the strongest variables, models, and model characteristics for predicting outcome in patients with heart failure. To improve decision making accurately predicting mortality and heart-failure

  12. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    Science.gov (United States)

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One Mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ) using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 ml·kg⁻¹·min⁻¹ to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones.
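The criterion-referenced agreement statistic reported above, Cohen's kappa, compares observed classification agreement against the agreement expected by chance. A small sketch with hypothetical Healthy Fitness Zone labels ("HFZ" and "NI" for needs improvement are illustrative):

```python
# Unweighted Cohen's kappa for two raters/classifications over the same
# subjects (hypothetical labels; the study compares predicted vs measured
# Healthy Fitness Zone placements).
def cohens_kappa(a, b):
    labels = sorted(set(a) | set(b))
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n                  # observed agreement
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

measured  = ["HFZ", "HFZ", "NI", "HFZ", "NI", "HFZ", "NI", "HFZ"]
predicted = ["HFZ", "NI",  "NI", "HFZ", "NI", "HFZ", "HFZ", "HFZ"]
print(round(cohens_kappa(measured, predicted), 2))
```

Kappa discounts the raw agreement percentage (here 75%) by the agreement expected from the marginal label frequencies alone, which is why the study reports kappa alongside percent agreement.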

  13. The prediction of epidemics through mathematical modeling.

    Science.gov (United States)

    Schaus, Catherine

    2014-01-01

    Mathematical models can be used in an attempt to predict the development of epidemics; the SIR model is one such application. Still too approximate, these statistical approaches await more data in order to come closer to reality.
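
    The SIR model mentioned here splits a population into susceptible (S), infectious (I) and recovered (R) compartments coupled by two rates. A minimal forward-Euler sketch with illustrative parameter values (not calibrated to any real epidemic):

```python
# Minimal SIR epidemic model integrated with forward-Euler steps.
# beta (transmission) and gamma (recovery) are illustrative values only.
beta, gamma = 0.3, 0.1            # per day; basic reproduction number R0 = 3
S, I, R = 0.99, 0.01, 0.0         # population fractions
dt, days = 0.1, 160

peak_I = I
for _ in range(int(days / dt)):
    new_infections = beta * S * I   # dS/dt = -beta*S*I
    recoveries = gamma * I          # dR/dt = gamma*I
    S -= new_infections * dt
    I += (new_infections - recoveries) * dt
    R += recoveries * dt
    peak_I = max(peak_I, I)
```

    With R0 = 3 the sketch produces the characteristic epidemic curve: infections peak at roughly a third of the population and the susceptible fraction ends well below 10%.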

  14. Model-free prediction and regression a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  15. A prediction model for assessing residential radon concentration in Switzerland

    International Nuclear Information System (INIS)

    Hauri, Dimitri D.; Huss, Anke; Zimmermann, Frank; Kuehni, Claudia E.; Röösli, Martin

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. Of these, a randomly selected 80% of measurements were used for model development and the remaining 20% for independent model validation. A multivariable log-linear regression model was fitted and relevant predictors selected according to evidence from the literature, the adjusted R², the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating the Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three categories (<50th, 50th–90th and >90th percentile) and compared with measured categories using a weighted Kappa statistic. The most relevant predictors for indoor radon levels were tectonic units and year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken and housing type (P-values <0.001 for all). Mean predicted radon values (geometric mean) were 66 Bq/m³ (interquartile range 40–111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69–215 Bq/m³) in the medium category, and 219 Bq/m³ (108–427 Bq/m³) in the highest category. Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. Kappa coefficients were 0.31 for the development and 0.30 for the validation dataset, respectively. The model explained 20% of overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was demonstrated to be...

  16. Optimizing Blasting’s Air Overpressure Prediction Model using Swarm Intelligence

    Science.gov (United States)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    Air overpressure (AOp) resulting from blasting can cause damage and nuisance to nearby civilians. Thus, it is important to be able to predict AOp accurately. In this study, 8 different Artificial Neural Network (ANN) models were developed for the prediction of AOp. The ANN models were trained using different variants of the Particle Swarm Optimization (PSO) algorithm. AOp predictions were also made using an empirical equation, suggested by the United States Bureau of Mines (USBM), to serve as a benchmark. In order to develop the models, 76 blasting operations in Hulu Langat were investigated. All the ANN models were found to outperform the USBM equation on three performance metrics: root mean square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination (R²). Using a performance ranking method, MSO-Rand-Mut was determined to be the best prediction model for AOp, with RMSE = 2.18, MAPE = 1.73% and R² = 0.97. The result shows that ANN models trained using PSO are capable of predicting AOp with great accuracy.
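
    PSO, used above to train the ANN weights, is simple to sketch: each particle's velocity is pulled toward its own best position and the swarm's global best. A minimal, self-contained version minimizing a quadratic stand-in for the network's training error (illustrative only; the coefficients are common defaults, not those of the MSO-Rand-Mut variant):

```python
import random

random.seed(1)  # reproducible toy run

def loss(w):
    # Stand-in objective for the ANN training error; a real application would
    # evaluate the network's AOp prediction error over the blasting records.
    return sum((x - 3.0) ** 2 for x in w)

dim, n_particles, iters = 4, 20, 200
w_inertia, c_cog, c_soc = 0.7, 1.5, 1.5   # inertia, cognitive and social weights

pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]               # personal best positions
pbest_val = [loss(p) for p in pos]
gbest = min(pbest, key=loss)[:]           # global best position

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w_inertia * vel[i][d]
                         + c_cog * r1 * (pbest[i][d] - pos[i][d])
                         + c_soc * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        val = loss(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i][:], val
            if val < loss(gbest):
                gbest = pos[i][:]
```

    In an ANN application the position vector would hold all connection weights, and `loss` would be the training-set prediction error.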

  17. Predicting carcinogenicity of diverse chemicals using probabilistic neural network modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India); Gupta, Shikha; Rai, Premanjali [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India)

    2013-10-15

    Robust global models capable of discriminating positive and non-positive carcinogens, and predicting the carcinogenic potency of chemicals in rodents, were developed. A dataset of 834 structurally diverse chemicals, extracted from the Carcinogenic Potency Database (CPDB), was used, which contained 466 positive and 368 non-positive carcinogens. Twelve non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals and nonlinearity in the data were evaluated using the Tanimoto similarity index and Brock–Dechert–Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) models were constructed for classification and function optimization problems using the carcinogenicity end point in rat. Validation of the models was performed using internal and external procedures employing a wide series of statistical checks. The PNN constructed using five descriptors rendered a classification accuracy of 92.09% in the complete rat data. The PNN model rendered classification accuracies of 91.77%, 80.70% and 92.08% in mouse, hamster and pesticide data, respectively. The GRNN constructed with nine descriptors yielded a correlation coefficient of 0.896 between the measured and predicted carcinogenic potency, with a mean squared error (MSE) of 0.44 in the complete rat data. The rat carcinogenicity model (GRNN) applied to the mouse and hamster data yielded correlation coefficients and MSEs of 0.758, 0.71 and 0.760, 0.46, respectively. The results suggest wide applicability of the inter-species models in predicting the carcinogenic potency of chemicals. Both the PNN and GRNN (inter-species) models constructed here can be useful tools in predicting the carcinogenicity of new chemicals for regulatory purposes. - Graphical abstract: Figure (a) shows classification accuracies (positive and non-positive carcinogens) in rat, mouse, hamster, and pesticide data yielded by the optimal PNN model. Figure (b) shows generalization and predictive...
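
    A probabilistic neural network is, at its core, a Parzen-window classifier: each class's likelihood at a query point is an average of Gaussian kernels centred on that class's training exemplars, and the class with the highest density wins. A toy sketch with hypothetical two-descriptor chemicals (not the CPDB data, and far fewer than the twelve descriptors used above):

```python
import math

# Toy training set: two descriptor values per chemical, labelled
# 1 = positive carcinogen, 0 = non-positive (hypothetical numbers).
train = [((0.2, 1.1), 1), ((0.3, 0.9), 1), ((0.1, 1.3), 1),
         ((1.5, 0.2), 0), ((1.7, 0.4), 0), ((1.4, 0.1), 0)]
sigma = 0.5  # kernel smoothing parameter (the PNN's only free parameter)

def pnn_classify(x):
    # Pattern layer: sum a Gaussian kernel over each class's exemplars.
    # Decision layer: pick the class with the larger average density.
    score = {0: 0.0, 1: 0.0}
    count = {0: 0, 1: 0}
    for point, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, point))
        score[label] += math.exp(-d2 / (2 * sigma ** 2))
        count[label] += 1
    return max((score[c] / count[c], c) for c in score)[1]
```

    A query near the first cluster, e.g. `pnn_classify((0.25, 1.0))`, is assigned class 1, while one near the second cluster falls to class 0; training is instantaneous because the network simply stores the exemplars.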

  18. Aero-acoustic noise of wind turbines. Noise prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Maribo Pedersen, B. [ed.]

    1997-12-31

    Semi-empirical and CAA (Computational AeroAcoustics) noise prediction techniques are the subject of this expert meeting. The meeting presents and discusses models and methods. The meeting may provide answers to the following questions: Which noise sources are the most important? How are the sources best modeled? What needs to be done to make better predictions? Does it boil down to correct prediction of the unsteady aerodynamics around the rotor? Or is the difficult part converting the aerodynamics into acoustics? (LN)

  19. Predictive assessment of models for dynamic functional connectivity

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Schmidt, Mikkel Nørgaard; Madsen, Kristoffer Hougaard

    2018-01-01

    In neuroimaging, it has become evident that models of dynamic functional connectivity (dFC), which characterize how intrinsic brain organization changes over time, can provide a more detailed representation of brain function than traditional static analyses. Many dFC models in the literature represent functional brain networks as a meta-stable process with a discrete number of states; however, there is a lack of consensus on how to perform model selection and learn the number of states, as well as a lack of understanding of how different modeling assumptions influence the estimated state dynamics. To address these issues, we consider a predictive likelihood approach to model assessment, where models are evaluated based on their predictive performance on held-out test data. Examining several prominent models of dFC (in their probabilistic formulations) we demonstrate our framework...

  20. Predicting turns in proteins with a unified model.

    Directory of Open Access Journals (Sweden)

    Qi Song

    Full Text Available MOTIVATION: Turns are a critical element of protein structure; they play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for individual turn types, including α-turns, β-turns, and γ-turns. However, for further protein structure and function prediction it is necessary to develop a unified model that can accurately predict all types of turns simultaneously. RESULTS: In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) the use of newly exploited features of structural evolution information (secondary structure and shape string of the protein, based on structure homologies); (ii) consideration of all types of turns in a unified model; and (iii) the practical capability of accurately predicting all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both of which have high accuracy, based on technologies developed by our group. Sequence and structural evolution features, namely the profile of the sequence, the profile of secondary structures and the profile of shape strings, are then generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, which exceeded the best state-of-the-art predictors of individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests, and the results we achieved were outstanding for turn prediction, confirming the good performance of TurnP for practical applications.

  1. Modeling Seizure Self-Prediction: An E-Diary Study

    Science.gov (United States)

    Haut, Sheryl R.; Hall, Charles B.; Borkowski, Thomas; Tennen, Howard; Lipton, Richard B.

    2013-01-01

    Purpose A subset of patients with epilepsy successfully self-predicted seizures in a paper diary study. We conducted an e-diary study to ensure that prediction precedes seizures, and to characterize the prodromal features and time windows that underlie self-prediction. Methods Subjects 18 or older with localization-related epilepsy (LRE) and ≥3 seizures/month maintained an e-diary, reporting AM/PM data daily, including mood, premonitory symptoms, and all seizures. Self-prediction was rated by asking, “How likely are you to experience a seizure [time frame]?” Five choices ranged from almost certain (>95% chance) to very unlikely. Relative odds of seizure (OR) within time frames were examined using Poisson models with log-normal random effects to adjust for multiple observations. Key Findings Nineteen subjects reported 244 eligible seizures. The OR for prediction choices within 6 hrs was as high as 9.31 (1.92, 45.23) for “almost certain”. Prediction was most robust within 6 hrs of diary entry, and remained significant up to 12 hrs. For the 9 best predictors, average sensitivity was 50%. Older age contributed to successful self-prediction, and self-prediction appeared to be driven by mood and premonitory symptoms. In multivariate modeling of seizure occurrence, self-prediction (2.84; 1.68, 4.81), favorable change in mood (0.82; 0.67, 0.99) and number of premonitory symptoms (1.11; 1.00, 1.24) were significant. Significance Some persons with epilepsy can self-predict seizures. In these individuals, the odds of a seizure following a positive prediction are high. Predictions were robust, not attributable to recall bias, and were related to self-awareness of mood and premonitory features. The 6-hour prediction window is suitable for the development of pre-emptive therapy. PMID:24111898

  2. Synchronisation of palaeoenvironmental records over the last 60,000 years, and an extended INTIMATE event stratigraphy to 48,000 b2k

    DEFF Research Database (Denmark)

    Blockley, S.P.E.; Lane, C.S.; Hardiman, M.

    2012-01-01

    study period back to 60,000 years. As a first step, the INTIMATE event stratigraphy has now been extended to include 8000-48,000 b2k based on a combined NGRIP and GRIP isotope profile against a GICC05 chronology and key tephra horizons from Iceland and continental European volcanic sources. In this lead...

  3. A disaggregate model to predict the intercity travel demand

    Energy Technology Data Exchange (ETDEWEB)

    Damodaran, S.

    1988-01-01

    This study was directed towards developing disaggregate models to predict intercity travel demand in Canada. A conceptual framework for intercity travel behavior was proposed; under this framework, a nested multinomial model structure that combined mode choice and trip generation was developed. The CTS (Canadian Travel Survey) data base was used for testing the structure and to determine the viability of using this data base for intercity travel-demand prediction. Mode-choice and trip-generation models were calibrated for four modes (auto, bus, rail and air) for both business and non-business trips. The models were linked through the inclusive value variable, also referred to as the log sum of the denominator in the literature. Results of the study indicated that the structure used in this study could be applied for intercity travel-demand modeling. However, some limitations of the data base were identified. It is believed that, with some modifications, the CTS data could be used for predicting intercity travel demand. Future research can identify the factors affecting intercity travel behavior, which will facilitate collection of useful data for intercity travel prediction and policy analysis.
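
    The "inclusive value" (log sum) that links the mode-choice and trip-generation levels of a nested multinomial logit model is simply the log of the summed exponentiated mode utilities. A small sketch with illustrative utilities (hypothetical values, not the calibrated CTS coefficients):

```python
import math

# Illustrative systematic utilities for the four intercity modes.
utility = {"auto": -0.5, "bus": -1.8, "rail": -1.2, "air": -2.0}

# Lower level: multinomial logit mode-choice probabilities.
expu = {m: math.exp(u) for m, u in utility.items()}
total = sum(expu.values())
p_mode = {m: e / total for m, e in expu.items()}

# Inclusive value (log sum): the expected maximum utility across modes,
# carried up into the trip-generation level of the nested model.
inclusive_value = math.log(total)
```

    In a consistent nested structure, the inclusive value enters the trip-generation utility with a coefficient between 0 and 1, so that attractive mode options increase the number of trips generated.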

  4. Integrated predictive modelling simulations of burning plasma experiment designs

    International Nuclear Information System (INIS)

    Bateman, Glenn; Onjun, Thawatchai; Kritz, Arnold H

    2003-01-01

    Models for the height of the pedestal at the edge of H-mode plasmas (Onjun T et al 2002 Phys. Plasmas 9 5018) are used together with the Multi-Mode core transport model (Bateman G et al 1998 Phys. Plasmas 5 1793) in the BALDUR integrated predictive modelling code to predict the performance of the ITER (Aymar A et al 2002 Plasma Phys. Control. Fusion 44 519), FIRE (Meade D M et al 2001 Fusion Technol. 39 336), and IGNITOR (Coppi B et al 2001 Nucl. Fusion 41 1253) fusion reactor designs. The simulation protocol used in this paper is tested by comparing predicted temperature and density profiles against experimental data from 33 H-mode discharges in the JET (Rebut P H et al 1985 Nucl. Fusion 25 1011) and DIII-D (Luxon J L et al 1985 Fusion Technol. 8 441) tokamaks. The sensitivities of the predictions are evaluated for the burning plasma experimental designs by using variations of the pedestal temperature model that are one standard deviation above and below the standard model. Simulations of the fusion reactor designs are carried out for scans in which the plasma density and auxiliary heating power are varied.

  5. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    Science.gov (United States)

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades-off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.

  6. Coupled Model of Artificial Neural Network and Grey Model for Tendency Prediction of Labor Turnover

    Directory of Open Access Journals (Sweden)

    Yueru Ma

    2014-01-01

    Full Text Available The tendency of labor turnover in Chinese enterprises shows seasonal fluctuations and an irregular distribution of various factors, especially Chinese traditional social and cultural characteristics. In this paper, we present a coupled model for the tendency prediction of labor turnover. In the model, the time series of labor turnover is expressed as the sum of a trend item and a random item. The trend item is predicted using Grey theory; the random item is calculated by an artificial neural network (ANN) model. A case study is presented using 24 months of data from a mature Chinese enterprise. The model uses the advantages of the “accumulative generation” of the Grey prediction method, which weakens random disturbance factors in the original sequence and increases the regularity of the data. It also takes full advantage of the ANN model's approximation performance, which can solve economic problems rapidly, describes nonlinear relationships easily, and avoids the defects of Grey theory.
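
    The GM(1,1) Grey model referred to here works by "accumulative generation": cumulatively summing the raw series to smooth disturbances, fitting a first-order Grey differential equation by least squares, and differencing the fitted response back into raw-series predictions. A compact sketch on a made-up monthly turnover series (the ANN correction of the random item is omitted):

```python
import math

# Hypothetical monthly turnover figures (illustrative, not real enterprise data).
x0 = [12.0, 13.1, 14.5, 15.8, 17.4, 19.1]

# 1-AGO: accumulative generation weakens random disturbances in the raw series.
x1 = [sum(x0[:i + 1]) for i in range(len(x0))]

# Background values z and least-squares fit of the Grey equation x0[k] = -a*z[k] + b.
z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]
y = x0[1:]
n = len(z)
szz = sum(v * v for v in z); sz = sum(z); sy = sum(y); szy = sum(v * w for v, w in zip(z, y))
det = n * szz - sz * sz                      # normal equations for slope/intercept
a = -(n * szy - sz * sy) / det               # development coefficient (negative = growth)
b = (szz * sy - sz * szy) / det              # grey input

def predict(k):
    # Predicted raw value at index k (k >= 1) via the fitted response equation,
    # differenced back from the accumulated series.
    x1_hat = lambda j: (x0[0] - b / a) * math.exp(-a * j) + b / a
    return x1_hat(k) - x1_hat(k - 1)

next_month = predict(len(x0))  # one-step-ahead forecast
```

    On this near-geometric toy series the one-step forecast lands around 21, continuing the roughly 9-10% monthly growth; in the coupled model above, the ANN would then predict the residual between such trend forecasts and the observed values.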

  7. Validation of a predictive model for smart control of electrical energy storage

    NARCIS (Netherlands)

    Homan, Bart; van Leeuwen, Richard Pieter; Smit, Gerardus Johannes Maria; Zhu, Lei; de Wit, Jan B.

    2016-01-01

    The purpose of this paper is to investigate the applicability of a relatively simple model which is based on energy conservation for model predictions as part of smart control of thermal and electric storage. The paper reviews commonly used predictive models. Model predictions of charging and

  8. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project), where the protocol has been used to evaluate more than 10 prediction systems.
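
    The simplest reference model in such a protocol is persistence: the forecast for the next hour equals the last measured value. A sketch of how a candidate model's skill can be scored as RMSE improvement over persistence (all numbers hypothetical; the actual protocol defines further metrics and horizons):

```python
import math

# Hypothetical hourly wind power measurements (fractions of installed capacity).
power = [0.30, 0.34, 0.38, 0.43, 0.47, 0.52, 0.56, 0.61, 0.65, 0.70]

# Persistence reference: the 1-hour-ahead forecast is the last observed value.
persistence = power[:-1]

# A made-up "advanced" forecaster: persistence plus a damped local trend.
advanced = [p + 0.5 * (p - q) for p, q in zip(power[1:-1], power[:-2])]
advanced = [power[0]] + advanced   # first step falls back to persistence

actual = power[1:]

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Improvement over the reference model: positive means skill beyond persistence.
imp = 1.0 - rmse(advanced, actual) / rmse(persistence, actual)
```

    On this smoothly trending toy series the trend-following forecaster beats persistence by roughly 40% in RMSE; on real, noisier wind data the margin is far smaller, which is exactly why a standardized reference comparison is needed.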

  9. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.

  10. Evaluating Predictive Models of Software Quality

    Science.gov (United States)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.

  11. The magnetic polarity stratigraphy of the Mauch Chunk Formation, Pennsylvania.

    Science.gov (United States)

    Opdyke, Neil D; DiVenere, Victor J

    2004-09-14

    Three sections of Chesterian Mauch Chunk Formation in Pennsylvania have been studied paleomagnetically to determine a Late Mississippian magnetic polarity stratigraphy. The upper section at Lavelle includes a conglomerate with abundant red siltstone rip-up clasts that yielded a positive conglomerate test. All samples were subjected to progressive thermal demagnetization to temperatures as high as 700 degrees C. Two components of magnetization were isolated: a synfolding "B" component and the prefolding "C" component. The conglomerate test is positive, indicating that the C component was acquired very early in the history of the sediment. A coherent pattern of magnetic polarity reversals was identified. Five magnetozones were identified in the upper Lavelle section, which yields a pattern that is an excellent match with the pattern of reversals obtained from the upper Mauch Chunk at the original type section of the Mississippian/Pennsylvanian boundary at Pottsville, PA. The frequency of reversals in the upper Mississippian, as identified in the Mauch Chunk Formation, is approximately one to two per million years, which is an average for field reversal through time.

  12. Driver's mental workload prediction model based on physiological indices.

    Science.gov (United States)

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

    Developing an early warning model to predict a driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, measured by the number of errors. Additionally, the group method of data handling is used to establish a predictive model of the driver's MWL based on a subjective rating (NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX and the number of errors are positively correlated, and the predictive model demonstrates validity with an R² value of 0.745. The proposed model is expected to provide new drivers with a reference value for their MWL from the physiological indices, and driving lesson plans can be proposed to sustain an appropriate MWL as well as improve the driver's work performance.

  13. Lithostratigraphy, petrography, biostratigraphy, and strontium-isotope stratigraphy of the surficial aquifer system of western Collier County, Florida

    Science.gov (United States)

    Edwards, L.E.; Weedman, S.D.; Simmons, R.; Scott, T.M.; Brewster-Wingard, G. L.; Ishman, S.E.; Carlin, N.M.

    1998-01-01

    In 1996, seven cores were recovered in western Collier County, southwestern Florida, to acquire subsurface geologic and hydrologic data to support ground-water modeling efforts. This report presents the lithostratigraphy, X-ray diffraction analyses, petrography, biostratigraphy, and strontium-isotope stratigraphy of these cores. The oldest unit encountered in the study cores is an unnamed formation that is late Miocene. At least four depositional sequences are present within this formation. Calculated age of the formation, based on strontium-isotope stratigraphy, ranges from 9.5 to 5.7 Ma (million years ago). An unconformity within this formation that represents a hiatus of at least 2 million years is indicated in the Old Pump Road core. In two cores, Collier-Seminole and Old Pump Road, the uppermost sediments of the unnamed formation are not dated by strontium isotopes, and, based on the fossils present, these sediments could be as young as Pliocene. In another core (Fakahatchee Strand-Ranger Station), the upper part of the unnamed formation is dated by mollusks as Pliocene. The Tamiami Formation overlies the unnamed formation throughout the study area and is represented by the Ochopee Limestone Member. The unit is Pliocene and probably includes the interval of time near the early/late Pliocene boundary. Strontium-isotope analysis indicates an early Pliocene age (calculated ages range from 5.1 to 3.5 Ma), but the margin of error includes the latest Miocene and the late Pliocene. The dinocyst assemblages in the Ochopee typically are not age-diagnostic, but, near the base of the unit in the Collier-Seminole, Jones Grade, and Fakahatchee Strand State Forest cores, they indicate an age of late Miocene or Pliocene. The molluscan assemblages indicate a Pliocene age for the Ochopee, and a distinctive assemblage of Carditimera arata and Chione cortinaria in several of the cores specifically indicates an age near the early/late Pliocene boundary. Undifferentiated sands

  14. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo

    2016-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have recently been developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970

  15. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Directory of Open Access Journals (Sweden)

    Jaime Cuevas

    2017-01-01

    Full Text Available The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have recently been developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u.
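
    The two kernels compared in these studies can be built directly from a marker matrix X: the linear (GBLUP-style) kernel is proportional to XX', while the Gaussian kernel applies exp(-h·d²) to squared Euclidean distances between lines, here scaled by their median. A toy sketch with a hypothetical 4 × 3 marker matrix (not the CIMMYT data; bandwidth h = 1 is assumed):

```python
import math

# Hypothetical marker matrix: 4 lines x 3 markers, coded -1/0/1 (illustrative).
X = [[1, 0, -1],
     [1, 1, 0],
     [-1, 0, 1],
     [0, -1, 1]]
n, p = len(X), len(X[0])

# Linear (GBLUP-style) genomic relationship kernel: G = X X' / p.
G = [[sum(X[i][k] * X[j][k] for k in range(p)) / p for j in range(n)]
     for i in range(n)]

# Gaussian kernel: K[i][j] = exp(-h * d2[i][j] / median(d2)), h assumed 1.
d2 = [[sum((X[i][k] - X[j][k]) ** 2 for k in range(p)) for j in range(n)]
      for i in range(n)]
offdiag = sorted(d2[i][j] for i in range(n) for j in range(n) if i != j)
med = offdiag[len(offdiag) // 2]
h = 1.0
K = [[math.exp(-h * d2[i][j] / med) for j in range(n)] for i in range(n)]
```

    Both matrices are symmetric similarity kernels; G captures only linear marker effects, while K, as discussed above, can capture more complex relationships between lines.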

  16. Statistical models for expert judgement and wear prediction

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1994-01-01

    This thesis studies the statistical analysis of expert judgements and the prediction of wear. The point of view adopted is that of information theory and Bayesian statistics. A general Bayesian framework for analyzing both expert judgements and wear prediction is presented. Information-theoretic interpretations are given for some averaging techniques used in the determination of consensus distributions. Further, information-theoretic models are compared with a Bayesian model. The general Bayesian framework is then applied in analyzing expert judgements based on ordinal comparisons. In this context, the value of information lost in the ordinal comparison process is analyzed by applying decision-theoretic concepts. As a generalization of the Bayesian framework, stochastic filtering models for wear prediction are formulated. These models utilize the information from condition monitoring measurements in updating the residual life distribution of mechanical components. Finally, the application of stochastic control models in optimizing operational strategies for inspected components is studied. Monte Carlo simulation methods, such as the Gibbs sampler and the stochastic quasi-gradient method, are applied in the determination of posterior distributions and in the solution of stochastic optimization problems. (orig.) (57 refs., 7 figs., 1 tab.)

  17. Extensions of the Rosner-Colditz breast cancer prediction model to include older women and type-specific predicted risk.

    Science.gov (United States)

    Glynn, Robert J; Colditz, Graham A; Tamimi, Rulla M; Chen, Wendy Y; Hankinson, Susan E; Willett, Walter W; Rosner, Bernard

    2017-08-01

    A breast cancer risk prediction rule previously developed by Rosner and Colditz has reasonable predictive ability. We developed a re-fitted version of this model, based on more than twice as many cases and now including women up to age 85, and further extended it to a model that distinguishes risk factor prediction of tumors with different estrogen/progesterone receptor status. We compared the calibration and discriminatory ability of the original, the re-fitted, and the type-specific models. Evaluation used data from the Nurses' Health Study during the period 1980-2008, when 4384 incident invasive breast cancers occurred over 1.5 million person-years. Model development used two-thirds of study subjects and validation used one-third. Predicted risks in the validation sample from the original and re-fitted models were highly correlated (ρ = 0.93), but several parameters, notably those related to use of menopausal hormone therapy and age, had different estimates. The re-fitted model was well-calibrated and had an overall C-statistic of 0.65. The extended, type-specific model identified several risk factors with varying associations with the occurrence of tumors of different receptor status. However, relative to the prediction of any breast cancer, this extended model did not meaningfully reclassify women who developed breast cancer to higher risk categories, nor women remaining cancer-free to lower risk categories. The re-fitted Rosner-Colditz model is applicable to risk prediction in women up to age 85, and its discrimination is not improved by consideration of varying associations across tumor subtypes.
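    The C-statistic of 0.65 reported in this abstract is the probability that a randomly chosen case received a higher predicted risk than a randomly chosen non-case. A minimal sketch with invented predicted risks (not the Nurses' Health Study data):

```python
# Minimal C-statistic (concordance / AUC) computation on made-up predicted risks.

def c_statistic(risk_cases, risk_controls):
    """P(risk_case > risk_control), with ties counted as 1/2."""
    wins = ties = 0
    for rc in risk_cases:
        for rn in risk_controls:
            if rc > rn:
                wins += 1
            elif rc == rn:
                ties += 1
    n_pairs = len(risk_cases) * len(risk_controls)
    return (wins + 0.5 * ties) / n_pairs

# Hypothetical predicted risks for 4 cases and 5 controls
cases = [0.08, 0.12, 0.05, 0.20]
controls = [0.03, 0.07, 0.04, 0.10, 0.02]

print(round(c_statistic(cases, controls), 3))   # → 0.85
```

A value of 0.5 means no discrimination (risks of cases and non-cases are interchangeable) and 1.0 means perfect ranking; 0.65 sits between the two.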

  18. Improved Modeling and Prediction of Surface Wave Amplitudes

    Science.gov (United States)

    2017-05-31

    Air Force Research Laboratory report AFRL-RV-PS-TR-2017-0162, "Improved Modeling and Prediction of Surface Wave Amplitudes," Jeffry L. Stevens, et al., Leidos; performed under contract FA9453-14-C-0225.

  19. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    Science.gov (United States)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  20. Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models

    Science.gov (United States)

    Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng

    2014-03-01

    Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between the data-fitting capability and the sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
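    The distinction between a homogeneous and a heterogeneous prediction-error variance can be illustrated on synthetic records (explicitly not the Tangshan database). This sketch fits a Boore-Joyner-Fumal-style formula by least squares and then compares one global error standard deviation against magnitude-binned ones; all coefficients and noise levels below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic records: magnitude M and site-to-fault distance r (km)
n = 400
M = rng.uniform(4.0, 7.5, n)
r = rng.uniform(5.0, 150.0, n)

# Assumed "true" generating formula with magnitude-dependent scatter,
# mimicking a heterogeneous prediction-error variance
ln_pga = 1.2 + 0.8 * (M - 6.0) - 1.1 * np.log(r) + rng.normal(0, 0.2 + 0.05 * M, n)

# Least-squares fit of the predictive formula ln PGA = b1 + b2 (M - 6) + b3 ln r
A = np.column_stack([np.ones(n), M - 6.0, np.log(r)])
b, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
resid = ln_pga - A @ b

def gauss_loglik(res, sigma):
    """Average Gaussian log-likelihood of residuals under error s.d. sigma."""
    return np.mean(-0.5 * np.log(2 * np.pi * sigma**2) - res**2 / (2 * sigma**2))

# Homogeneous error model: one sigma for all records
sigma_hom = resid.std()
# Heterogeneous error model: one sigma per magnitude bin
bins = np.digitize(M, [5.0, 6.0, 7.0])
sigma_het = np.array([resid[bins == k].std() for k in range(4)])[bins]

print(b.round(2))
print(round(gauss_loglik(resid, sigma_hom), 3), round(gauss_loglik(resid, sigma_het), 3))
```

In a full Bayesian model class selection these likelihoods would be integrated over the parameters and penalized for complexity; the sketch only shows why a magnitude-dependent sigma can fit heteroscedastic residuals better.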

  1. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. a high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  2. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit; Dave, Akshat; Ghanem, Bernard

    2015-01-01

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. a high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  3. An Intelligent Model for Stock Market Prediction

    Directory of Open Access Journals (Sweden)

    IbrahimM. Hamed

    2012-08-01

    Full Text Available This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANN). A blind source separation technique, from signal processing, is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler Divergence (KLD) is used as a learning algorithm, because it converges fast and provides generalization in the learning mechanism. Both accuracy and efficiency of the proposed model were confirmed through the Microsoft stock, from the Wall Street market, and various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.

  4. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
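    The two mixing rules differ in what is held common across components: Dalton sums partial pressures of components each filling the full volume, while Amagat sums partial volumes of components each at the full mixture pressure. The sketch below contrasts them for a static (unshocked) 1:1 He/SF6 mixture under the van der Waals equation of state; the a, b constants are standard textbook values, but the scenario (1 mol of each gas in 10 L at 300 K) and the bisection tolerances are invented, and this is far simpler than the EOS treatment inside CTH.

```python
# Dalton vs Amagat mixture pressure for a 1:1 He/SF6 mixture (van der Waals EOS).
R = 8.314                                        # J/(mol K)
VDW = {"He": (0.00346, 2.38e-5),                 # a [Pa m^6/mol^2], b [m^3/mol]
       "SF6": (0.786, 8.79e-5)}

def p_vdw(n, V, T, a, b):
    return n * R * T / (V - n * b) - a * n * n / (V * V)

def dalton_pressure(moles, V, T):
    # Dalton: each component fills the whole volume; partial pressures add.
    return sum(p_vdw(n, V, T, *VDW[g]) for g, n in moles.items())

def amagat_pressure(moles, V, T, lo=1e3, hi=1e8):
    # Amagat: find p such that the component volumes at (p, T) add up to V.
    def vol(g, n, p):                            # invert the EOS for V by bisection
        a, b = VDW[g]
        v_lo, v_hi = n * b * 1.0001, 10.0
        for _ in range(200):
            mid = 0.5 * (v_lo + v_hi)
            if p_vdw(n, mid, T, a, b) > p:
                v_lo = mid
            else:
                v_hi = mid
        return mid
    for _ in range(100):                         # outer bisection on pressure
        p = 0.5 * (lo + hi)
        if sum(vol(g, n, p) for g, n in moles.items()) > V:
            lo = p
        else:
            hi = p
    return p

moles, V, T = {"He": 1.0, "SF6": 1.0}, 0.01, 300.0
print(round(dalton_pressure(moles, V, T)), round(amagat_pressure(moles, V, T)))
```

For ideal gases the two rules give identical pressures; here they differ by roughly one percent because SF6 is strongly non-ideal, and such differences are amplified at shock conditions.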

  5. Prediction Model for Relativistic Electrons at Geostationary Orbit

    Science.gov (United States)

    Khazanov, George V.; Lyatsky, Wladislaw

    2008-01-01

    We developed a new prediction model for forecasting relativistic (greater than 2 MeV) electrons, which provides a very high correlation between predicted and actually measured electron fluxes at geostationary orbit. The model implies multi-step particle acceleration and is based on numerically integrating two linked continuity equations, for primarily accelerated particles and for relativistic electrons. The model includes a source and losses, and uses solar wind data as its only input parameters. As the source we used a coupling function that is a best-fit combination of solar wind/interplanetary magnetic field parameters responsible for the generation of geomagnetic activity. The loss function was derived from experimental data. We tested the model for the four-year period 2004-2007. The correlation coefficient between predicted and actual values of the electron fluxes, for the whole four-year period as well as for each of these years, is stable and high (about 0.9). The high and stable correlation between the computed and actual electron fluxes shows that reliable forecasting of these electrons at geostationary orbit is possible.
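    The two-population structure described here can be sketched as a pair of coupled continuity equations stepped forward in time. Everything numeric below (source, loss times, acceleration rate) is an invented placeholder; the actual model derives the source from solar wind data and the loss function from experiment.

```python
# Minimal sketch of the multi-step acceleration idea: two linked continuity
# equations, one for seed particles (n1) and one for relativistic electrons
# (n2), integrated with forward Euler.

def integrate(source, dt=0.1, steps=1000, tau1=5.0, tau2=20.0, accel=0.3):
    n1 = n2 = 0.0
    flux = []
    for k in range(steps):
        s = source(k * dt)
        dn1 = s - n1 / tau1 - accel * n1   # seeds: source, losses, acceleration out
        dn2 = accel * n1 - n2 / tau2       # relativistic: acceleration in, losses
        n1 += dt * dn1
        n2 += dt * dn2
        flux.append(n2)
    return flux

# With constant driving the system approaches a steady state:
# n1* = s / (1/tau1 + accel) = 2.0 and n2* = accel * tau2 * n1* = 12.0
flux = integrate(lambda t: 1.0)
print(round(flux[-1], 2))
```

Replacing the constant source with a time series built from a solar wind coupling function is what turns this toy integrator into a forecast driven purely by upstream measurements.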

  6. Miocene uplift of the NE Greenland margin linked to plate tectonics: Seismic evidence from the Greenland Fracture Zone, NE Atlantic

    DEFF Research Database (Denmark)

    Døssing Andreasen, Arne; Japsen, Peter; Watts, Anthony B.

    2016-01-01

    Tectonic models predict that, following breakup, rift margins undergo only decaying thermal subsidence during their post-rift evolution. However, post-breakup stratigraphy beneath the NE Atlantic shelves shows evidence of regional-scale unconformities, commonly cited as outer margin responses to … by plate tectonic forces, induced perhaps by a change in the Iceland plume (a hot pulse) and/or by changes in intra-plate stresses related to global tectonics. … backstripping. We explain the thermo-mechanical coupling and the deposition of contourites by the formation of a continuous plate boundary along the Mohns and Knipovich ridges, leading to an accelerated widening of the Fram Strait. We demonstrate that the IMU event is linked to onset of uplift and massive shelf …

  7. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world's population and the change in the climate conditions. How a sewer network is structured, monitored and controlled … benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  8. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    Science.gov (United States)

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14% with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
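    The repeated 5-fold cross-validation protocol used above can be sketched on synthetic data. This is a stand-in, not the authors' pipeline: the "chemicals" and "descriptors" are random numbers, a simple nearest-centroid classifier replaces Decision Forest, and 50 iterations stand in for 1000; only the resampling protocol itself is the point.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in data: 200 "chemicals" with 10 descriptors; binders
# (class 1) are shifted away from non-binders (class 0).
X = rng.standard_normal((200, 10))
y = np.repeat([0, 1], 100)
X[y == 1] += 1.0

def repeated_cv_accuracy(X, y, folds=5, iterations=50):
    accs = []
    for _ in range(iterations):
        idx = rng.permutation(len(y))            # fresh shuffle per iteration
        for f in range(folds):
            test = idx[f::folds]
            train = np.setdiff1d(idx, test)
            c0 = X[train][y[train] == 0].mean(axis=0)
            c1 = X[train][y[train] == 1].mean(axis=0)
            pred = (((X[test] - c1) ** 2).sum(1) <
                    ((X[test] - c0) ** 2).sum(1)).astype(int)
            accs.append((pred == y[test]).mean())
    return float(np.mean(accs)), float(np.std(accs))

mean_acc, sd_acc = repeated_cv_accuracy(X, y)
print(f"{mean_acc:.3f} +/- {sd_acc:.3f}")
```

Averaging over many shuffled fold assignments, as in the paper's 1000 iterations, reduces the dependence of the accuracy estimate on any single random partition.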

  9. Robust Model Predictive Control of a Wind Turbine

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    In this work the problem of robust model predictive control (robust MPC) of a wind turbine in the full load region is considered. A minimax robust MPC approach is used to tackle the problem. Nonlinear dynamics of the wind turbine are derived by combining blade element momentum (BEM) theory and first principle modeling of the turbine flexible structure. Thereafter the nonlinear model is linearized using Taylor series expansion around system operating points. Operating points are determined by effective wind speed and an extended Kalman filter (EKF) is employed to estimate this. In addition … of the uncertain system is employed and a norm-bounded uncertainty model is used to formulate a minimax model predictive control. The resulting optimization problem is simplified by semidefinite relaxation and the controller obtained is applied on a full complexity, high fidelity wind turbine model. Finally …

  10. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
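    The two-stage scheme described here (preselect the informative drivers, then forecast model-free from them alone) can be illustrated on a toy multivariate series. In this sketch a plain correlation ranking stands in for the paper's causal preselection, and a one-nearest-neighbor lookup stands in for the general model-free predictor; all series and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic multivariate series: y depends on x1 lagged by 2 steps;
# x2 and x3 are irrelevant noise drivers.
T = 600
x1 = rng.standard_normal(T)
x2, x3 = rng.standard_normal((2, T))
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.9 * x1[t - 2] + 0.3 * rng.standard_normal()

# Step 1 -- preselection: rank lagged candidates by |correlation| with the target.
target = y[2:]
candidates = {"x1_lag1": x1[1:-1], "x1_lag2": x1[:-2],
              "x2_lag1": x2[1:-1], "x3_lag1": x3[1:-1]}
scores = {k: abs(np.corrcoef(v, target)[0, 1]) for k, v in candidates.items()}
best = max(scores, key=scores.get)

# Step 2 -- model-free forecast: nearest neighbor in the selected predictor only.
z = candidates[best]
train, test = slice(0, 500), slice(500, len(target))
preds = [target[train][np.argmin(abs(z[train] - v))] for v in z[test]]
rmse = float(np.sqrt(np.mean((np.array(preds) - target[test]) ** 2)))
print(best, round(rmse, 2))
```

Because the search is restricted to the preselected driver, the nearest-neighbor step avoids the curse of dimensionality that a search over all lagged variables would incur.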

  11. The prediction of surface temperature in the new seasonal prediction system based on the MPI-ESM coupled climate model

    Science.gov (United States)

    Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.

    2015-05-01

    A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.

  12. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375 MPa ... the finite element method are in fair agreement with the experimental results.

  13. A stepwise model to predict monthly streamflow

    Science.gov (United States)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

    In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison, based on five different statistical measures, shows that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
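    The model structure Q_t = K_m · Q_{t-1}, with one multiplier K_m per calendar month, is simple enough to sketch directly. The paper tunes K with GEP and NGRGO; here K_m is just the least-squares ratio on invented seasonal flows, to show the structure only.

```python
import math

# Hypothetical monthly flows (m^3/s) for 3 years: seasonal cycle plus mild noise
flows = [30 + 20 * math.sin(2 * math.pi * m / 12) + (m % 5) for m in range(36)]

def fit_k(flows):
    """Least-squares K_m per month: minimizes sum over t of (Q_t - K_m Q_{t-1})^2."""
    num = [0.0] * 12
    den = [0.0] * 12
    for t in range(1, len(flows)):
        m = t % 12
        num[m] += flows[t] * flows[t - 1]
        den[m] += flows[t - 1] ** 2
    return [n / d for n, d in zip(num, den)]

K = fit_k(flows)
preds = [K[t % 12] * flows[t - 1] for t in range(1, len(flows))]
rmse = math.sqrt(sum((p - q) ** 2 for p, q in zip(preds, flows[1:])) / len(preds))
print([round(k, 2) for k in K[:4]], round(rmse, 2))
```

The stepwise procedure in the paper effectively searches for better month-wise multipliers (and functional forms) than this closed-form least-squares ratio.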

  14. Updated climatological model predictions of ionospheric and HF propagation parameters

    International Nuclear Information System (INIS)

    Reilly, M.H.; Rhoads, F.J.; Goodman, J.M.; Singh, M.

    1991-01-01

    The prediction performances of several climatological models, including the Ionospheric Conductivity and Electron Density model, RADAR C, and the Ionospheric Communications Analysis and Predictions Program, are evaluated for different regions and sunspot number inputs. Particular attention is given to the near-real-time (NRT) predictions associated with single-station updates. It is shown that a dramatic improvement can be obtained by using single-station ionospheric data to update the driving parameters of an ionospheric model for NRT predictions of foF2 and other ionospheric and HF circuit parameters. For middle latitudes, the improvement extends out thousands of kilometers from the update point to points of comparable corrected geomagnetic latitude. 10 refs

  15. Spectral Neugebauer-based color halftone prediction model accounting for paper fluorescence.

    Science.gov (United States)

    Hersch, Roger David

    2014-08-20

    We present a spectral model for predicting the fluorescent emission and the total reflectance of color halftones printed on optically brightened paper. By relying on extended Neugebauer models, the proposed model accounts for the attenuation by the ink halftones of both the incident exciting light in the UV wavelength range and the emerging fluorescent emission in the visible wavelength range. The total reflectance is predicted by adding the predicted fluorescent emission relative to the incident light and the pure reflectance predicted with an ink-spreading enhanced Yule-Nielsen modified Neugebauer reflectance prediction model. The predicted fluorescent emission spectrum as a function of the amounts of cyan, magenta, and yellow inks is very accurate. It can be useful to paper and ink manufacturers who would like to study in detail the contribution of the fluorescent brighteners and the attenuation of the fluorescent emission by ink halftones.
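    The additive structure of the model (Neugebauer-weighted pure reflectance plus fluorescent emission attenuated by the ink halftone) can be sketched for a single wavelength band. All spectra, UV transmittances, and the Yule-Nielsen n-value below are invented placeholder numbers, not measured data from the paper.

```python
def demichel_weights(c, m, y):
    """Fractional areas of the 8 Neugebauer primaries for coverages c, m, y."""
    return {
        "white": (1-c)*(1-m)*(1-y), "c": c*(1-m)*(1-y),
        "m": (1-c)*m*(1-y),         "y": (1-c)*(1-m)*y,
        "cm": c*m*(1-y),            "cy": c*(1-m)*y,
        "my": (1-c)*m*y,            "cmy": c*m*y,
    }

# Placeholder per-primary reflectance at one visible wavelength
R = {"white": 0.9, "c": 0.3, "m": 0.4, "y": 0.8,
     "cm": 0.15, "cy": 0.25, "my": 0.35, "cmy": 0.08}
# Placeholder UV transmittance of each primary (inks absorb the exciting UV)
T_UV = {"white": 1.0, "c": 0.5, "m": 0.4, "y": 0.2,
        "cm": 0.2, "cy": 0.1, "my": 0.1, "cmy": 0.05}

def predict_total(c, m, y, emission_white=0.05, n=2.0):
    w = demichel_weights(c, m, y)
    # Yule-Nielsen modified Neugebauer pure reflectance
    refl = sum(wi * R[k] ** (1 / n) for k, wi in w.items()) ** n
    # Fluorescence: paper emission excited by the UV that survives the ink layer
    fluo = emission_white * sum(wi * T_UV[k] for k, wi in w.items())
    return refl + fluo

print(round(predict_total(0.2, 0.1, 0.0), 3))
```

In the full model both the exciting UV attenuation and the emerging emission are spectral and include ink-spreading corrections; this scalar version only shows how the two contributions are combined.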

  16. Key Questions in Building Defect Prediction Models in Practice

    Science.gov (United States)

    Ramler, Rudolf; Wolfmaier, Klaus; Stauder, Erwin; Kossak, Felix; Natschläger, Thomas

    The information about which modules of a future version of a software system are defect-prone is a valuable planning aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. However, constructing effective defect prediction models in an industrial setting involves a number of key questions. In this paper we discuss ten key questions identified in context of establishing defect prediction in a large software development project. Seven consecutive versions of the software system have been used to construct and validate defect prediction models for system test planning. Furthermore, the paper presents initial empirical results from the studied project and, by this means, contributes answers to the identified questions.

  17. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

    Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based parameter fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator, including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1×1 to 20×20 cm²) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases. The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all

  18. Predictive model for survival in patients with gastric cancer.

    Science.gov (United States)

    Goshayeshi, Ladan; Hoseini, Benyamin; Yousefli, Zahra; Khooie, Alireza; Etminani, Kobra; Esmaeilzadeh, Abbas; Golabpour, Amin

    2017-12-01

    Gastric cancer is one of the most prevalent cancers in the world. Characterized by poor prognosis, it is a frequent cause of cancer in Iran. The aim of the study was to design a predictive model of survival time for patients suffering from gastric cancer. This was a historical cohort study conducted between 2011 and 2016. The study population was 277 patients suffering from gastric cancer. Data were gathered from the Iranian Cancer Registry and the laboratory of Emam Reza Hospital in Mashhad, Iran. Patients or their relatives underwent interviews where needed. Missing values were imputed by data mining techniques. Fifteen factors were analyzed. Survival was addressed as a dependent variable. Then, the predictive model was designed by combining both a genetic algorithm and logistic regression. Matlab 2014 software was used to combine them. Of the 277 patients, survival was available for only 80 patients, whose data were used for designing the predictive model. The mean ± SD of missing values for each patient was 4.43 ± 0.41. The combined predictive model achieved 72.57% accuracy. Sex, birth year, age at diagnosis, age at diagnosis of patients' family members, family history of gastric cancer, and family history of other gastrointestinal cancers were the six parameters associated with patient survival. The study revealed that imputing missing values by data mining techniques achieves good accuracy. It also revealed six parameters, extracted by the genetic algorithm, that affect the survival of patients with gastric cancer. Our combined predictive model, with good accuracy, is appropriate to forecast the survival of patients suffering from gastric cancer. We therefore suggest that policy makers and specialists apply it for prediction of patients' survival.
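    The combination of a genetic algorithm with logistic regression typically means using the GA to search over feature subsets while logistic regression supplies the fitness score. The following is an invented stand-in for that idea (the paper used MATLAB and real patient data): synthetic data with 15 candidate factors of which only the first 4 matter, a GA over feature bitmasks, and a gradient-descent logistic fit.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in data: 15 candidate factors, only the first 4 informative
n, p = 300, 15
X = rng.standard_normal((n, p))
logit = X[:, :4] @ np.array([1.5, -1.0, 0.8, 1.2])
y = (logit + 0.5 * rng.standard_normal(n) > 0).astype(float)
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]

def logreg_fit(X, y, lr=0.1, steps=200):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        prob = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (prob - y) / len(y)      # gradient of the logistic loss
    return w

def fitness(mask):
    """Validation accuracy of logistic regression on the selected features."""
    if not mask.any():
        return 0.0
    w = logreg_fit(Xtr[:, mask], ytr)
    return float(((Xva[:, mask] @ w > 0) == yva).mean())

def tournament(pop, fit):
    i, j = rng.integers(0, len(pop), 2)
    return pop[i] if fit[i] >= fit[j] else pop[j]

pop = rng.random((16, p)) < 0.5                  # random feature bitmasks
for _ in range(10):
    fit = np.array([fitness(ind) for ind in pop])
    children = []
    for _ in range(len(pop)):
        a, b = tournament(pop, fit), tournament(pop, fit)
        child = np.where(rng.random(p) < 0.5, a, b)   # uniform crossover
        child ^= rng.random(p) < 0.05                 # bit-flip mutation
        children.append(child)
    pop = np.array(children)

fit = np.array([fitness(ind) for ind in pop])
best = pop[fit.argmax()]
print(np.flatnonzero(best), round(float(fit.max()), 2))
```

With the selection pressure of the tournament, subsets containing the informative factors tend to dominate the population within a few generations.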

  19. Radionuclides in fruit systems: Model prediction-experimental data intercomparison study

    International Nuclear Information System (INIS)

    Ould-Dada, Z.; Carini, F.; Eged, K.; Kis, Z.; Linkov, I.; Mitchell, N.G.; Mourlon, C.; Robles, B.; Sweeck, L.; Venter, A.

    2006-01-01

    This paper presents results from an international exercise undertaken to test model predictions against an independent data set for the transfer of radioactivity to fruit. Six models with various structures and complexity participated in this exercise. Predictions from these models were compared against independent experimental measurements of the transfer of 134Cs and 85Sr via leaf-to-fruit and soil-to-fruit pathways in strawberry plants after an acute release. Foliar contamination was carried out through wet deposition on the plant at two different growing stages, anthesis and ripening, while soil contamination was effected at anthesis only. In the case of foliar contamination, predicted values are within the same order of magnitude as the measured values for both radionuclides, while in the case of soil contamination models tend to under-predict by up to three orders of magnitude for 134Cs, while differences for 85Sr are lower. The performance of the models against experimental data is discussed, together with the lessons learned from this exercise.

  20. A Subsurface Soil Composition and Physical Properties Experiment to Address Mars Regolith Stratigraphy

    Science.gov (United States)

    Richter, L.; Sims, M.; Economou, T.; Stoker, C.; Wright, I.; Tokano, T.

    2004-01-01

    Previous in-situ measurements of soil-like materials on the surface of Mars, in particular during the on-going Mars Exploration Rover missions, have shown complex relationships between composition, exposure to the surface environment, texture, and local rocks. In particular, a diversity in both compositional and physical properties could be established that is interpreted to be diagnostic of the complex geologic history of the martian surface layer. Physical and chemical properties vary laterally and vertically, providing insight into the composition of rocks from which soils derive, and the environmental conditions that led to soil formation. They are central to understanding whether habitable environments existed on Mars in the distant past. An instrument - the Mole for Soil Compositional Studies and Sampling (MOCSS) - is proposed to allow repeated access to subsurface regolith on Mars to depths of up to 1.5 meters for in-situ measurements of elemental composition and of physical and thermophysical properties, as well as for subsurface sample acquisition. MOCSS is based on the compact PLUTO (PLanetary Underground TOol) Mole system developed for the Beagle 2 lander and incorporates a small X-ray fluorescence spectrometer within the Mole, which is a new development. Overall MOCSS mass is approximately 1.4 kilograms. Taken together, the MOCSS science data help to decipher the geologic history at the landing site, as compositional and textural stratigraphy - if they exist - can be detected at a number of places if the MOCSS were accommodated on a rover such as MSL. Based on uncovered stratigraphy, the regional sequence of depositional and erosional styles can be constrained, which has an impact on understanding the ancient history of the Martian near-surface layer, considering estimates of Mars soil production rates of 0.5-1.0 meters per billion years on the one hand and the Mole subsurface access capability of approximately 1.5 meters on the other. An overview of the MOCSS, XRS

  1. Prediction models in in vitro fertilization; where are we? A mini review

    Directory of Open Access Journals (Sweden)

    Laura van Loendersloot

    2014-05-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) in 1978, over five million babies have been born worldwide using IVF. Contrary to the perception of many, IVF does not guarantee success. Almost 50% of couples that start IVF will remain childless, even if they undergo multiple IVF cycles. The decision to start or continue IVF is challenging due to the high cost, the burden of the treatment, and the uncertain outcome. In counseling on the chances of a pregnancy with IVF, prediction models may play a role, since doctors are not able to correctly predict pregnancy chances. There are three phases of prediction model development: model derivation, model validation, and impact analysis. This review provides an overview of predictive factors in IVF and of the available prediction models in IVF, and presents key principles that can be used to critically appraise the literature on prediction models in IVF. We will address these points along the three phases of model development.

  2. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.
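The abstract does not give the model's functional form, so the following is only an illustrative sketch of the general idea: scoring attack impact as a weighted combination of security factors. The factor names, scores, and weights are hypothetical, not those of the paper.

```python
# Hypothetical sketch: predict attack impact as a weighted sum of
# normalized risk factors (each scored on [0, 1]).

def attack_impact(factors, weights):
    """Weighted linear impact score over matching factor/weight keys."""
    assert factors.keys() == weights.keys()
    return sum(weights[k] * factors[k] for k in factors)

# Made-up organization profile and weights.
factors = {"vulnerability_density": 0.6, "exposure": 0.8, "patch_latency": 0.4}
weights = {"vulnerability_density": 0.5, "exposure": 0.3, "patch_latency": 0.2}
impact = attack_impact(factors, weights)  # 0.5*0.6 + 0.3*0.8 + 0.2*0.4 = 0.62
```

Customizing the model to an organization, as the abstract suggests, would amount to choosing the factor set and re-estimating the weights from that organization's incident data.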

  3. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  4. Predictions for mt and MW in minimal supersymmetric models

    International Nuclear Information System (INIS)

    Buchmueller, O.; Ellis, J.R.; Flaecher, H.; Isidori, G.

    2009-12-01

    Using a frequentist analysis of experimental constraints within two versions of the minimal supersymmetric extension of the Standard Model, we derive the predictions for the top quark mass, mt, and the W boson mass, mW. We find that the supersymmetric predictions for both mt and mW, obtained by incorporating all the relevant experimental information and state-of-the-art theoretical predictions, are highly compatible with the experimental values with small remaining uncertainties, yielding an improvement compared to the case of the Standard Model. (orig.)

  5. Using a Prediction Model to Manage Cyber Security Threats

    Science.gov (United States)

    Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization. PMID:26065024

  6. A new crack growth model for life prediction under random loading

    International Nuclear Information System (INIS)

    Lee, Ouk Sub; Chen, Zhi Wei

    1999-01-01

    The load interaction effect in variable amplitude fatigue tests is a very important issue for correctly predicting fatigue life. Some prediction methods for retardation are reviewed and their problems discussed. The so-called 'under-load' effect is also of importance for a prediction model to work properly under a random load spectrum. A new model that is simple in form but combines overload plastic zone and residual stress considerations together with Elber's closure concept is proposed to fully take account of the load-interaction effects, including both over-load and under-load effects. Applying this new model to complex load sequences is explored here. Simulations of tests show the improvement of the new model over other models. The best prediction (most closely resembling the test curves) is given by the newly proposed Chen-Lee model.
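The Chen-Lee model itself is not specified in the abstract, but the closure concept it builds on can be illustrated with a plain cycle-by-cycle Paris-law integration, where Elber's closure factor U reduces the effective stress intensity range. The constants C, m, U, and Y below are illustrative placeholders, not the paper's parameters.

```python
import math

# Hedged sketch: Paris-law crack growth with Elber's closure factor U,
# so that dK_eff = U * dK. Constants are illustrative only.

def grow_crack(a0, dsigma, cycles, C=1e-11, m=3.0, U=0.7, Y=1.0):
    """Return crack length (m) after `cycles` constant-amplitude cycles."""
    a = a0
    for _ in range(cycles):
        dK = Y * dsigma * math.sqrt(math.pi * a)  # stress intensity range
        a += C * (U * dK) ** m                    # Paris growth increment
    return a
```

A retardation model such as the one proposed would additionally modify U (or the residual stress term inside dK) after each overload, which is what constant-amplitude Paris integration cannot capture.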

  7. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    A reference test dataset was used to test the model. The sensitivity of gender prediction has been increased relative to the current “genotype composition in ChrX”-based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher quality of the sequenced data.
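The baseline “genotype composition in ChrX” idea that the abstract improves upon can be sketched as follows: a sample with one X chromosome should show almost no heterozygous calls on ChrX, while a sample with two should show many. The genotype encoding and threshold here are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of ChrX-heterozygosity gender calling.

def predict_gender(chrx_genotypes, het_threshold=0.1):
    """chrx_genotypes: genotype strings such as '0/0', '0/1', '1/1'."""
    het = sum(1 for g in chrx_genotypes if g in ("0/1", "1/0"))
    het_fraction = het / len(chrx_genotypes)
    label = "female" if het_fraction > het_threshold else "male"
    return label, het_fraction
```

A statistical model of the kind described would replace the hard threshold with a calibrated score, which is what also makes the score usable as a data-quality indicator.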

  8. Sculpting Mountains: Interactive Terrain Modeling Based on Subsurface Geology.

    Science.gov (United States)

    Cordonnier, Guillaume; Cani, Marie-Paule; Benes, Bedrich; Braun, Jean; Galin, Eric

    2018-05-01

    Most mountain ranges are formed by the compression and folding of colliding tectonic plates. Subduction of one plate causes large-scale asymmetry, while their layered composition (or stratigraphy) explains the multi-scale folded strata observed on real terrains. We introduce a novel interactive modeling technique to generate visually plausible, large-scale terrains that capture these phenomena. Our method draws on both geological knowledge for consistency and on sculpting systems for user interaction. The user is provided hands-on control over the shape and motion of tectonic plates, represented using a new geologically-inspired model for the Earth crust. The model captures their volume-preserving and complex folding behaviors under collision, causing mountains to grow. It generates a volumetric uplift map representing the growth rate of subsurface layers. Erosion and uplift movement are jointly simulated to generate the terrain. The stratigraphy allows us to render folded strata on eroded cliffs. We validated the usability of our sculpting interface through a user study, and compare the visual consistency of the earth crust model with geological simulation results and real terrains.

  9. Predictive modeling of coral disease distribution within a reef system.

    Directory of Open Access Journals (Sweden)

    Gareth J Williams

    2010-02-01

    Full Text Available Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: (1) coral diseases show distinct associations with multiple environmental factors, (2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns, and (3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. Environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend coral diseases to be modeled separately, unless known to have etiologies that respond in a similar manner to

  10. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    Science.gov (United States)

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9
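Two of the steps listed above (defining a time window, and computing time-series features as latent variables) can be sketched in a few lines. The window length, the choice of mean and slope as features, and the heart-rate series below are illustrative assumptions, not the study's actual variables.

```python
# Sketch: turn a trailing window of a vital-sign series into two
# deterioration-oriented features (window mean and least-squares slope).

def window_features(samples, window):
    """Return (mean, slope) for the trailing `window` samples."""
    w = samples[-window:]
    n = len(w)
    mean = sum(w) / n
    t_mean = (n - 1) / 2
    num = sum((i - t_mean) * (x - mean) for i, x in enumerate(w))
    den = sum((i - t_mean) ** 2 for i in range(n))
    return mean, num / den  # slope in units per time step

hr = [120, 122, 125, 131, 140, 152]    # hypothetical heart-rate trend
mean, slope = window_features(hr, 4)   # a rising slope captures deterioration
```

Features like these would then enter the candidate-variable reduction and model-fitting steps that follow in the authors' pipeline.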

  11. Predictive modeling of mosquito abundance and dengue transmission in Kenya

    Science.gov (United States)

    Caldwell, J.; Krystosik, A.; Mutuku, F.; Ndenga, B.; LaBeaud, D.; Mordecai, E.

    2017-12-01

    Approximately 390 million people are exposed to dengue virus every year, and with no widely available treatments or vaccines, predictive models of disease risk are valuable tools for vector control and disease prevention. The aim of this study was to modify and improve climate-driven predictive models of dengue vector abundance (Aedes spp. mosquitoes) and viral transmission to people in Kenya. We simulated disease transmission using a temperature-driven mechanistic model and compared model predictions with vector trap data for larvae, pupae, and adult mosquitoes collected between 2014 and 2017 at four sites across urban and rural villages in Kenya. We tested predictive capacity of our models using four temperature measurements (minimum, maximum, range, and anomalies) across daily, weekly, and monthly time scales. Our results indicate seasonal temperature variation is a key driving factor of Aedes mosquito abundance and disease transmission. These models can help vector control programs target specific locations and times when vectors are likely to be present, and can be modified for other Aedes-transmitted diseases and arboviral endemic regions around the world.
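Temperature-driven mechanistic models of this kind typically build on unimodal thermal response curves for mosquito traits. The Briere form below is one common choice; the coefficients and thermal limits here are illustrative placeholders, not the study's fitted values.

```python
# Hedged sketch: a Briere thermal response curve, zero outside the
# thermal limits (T_min, T_max). Coefficients are illustrative.

def briere(T, c=2.0e-4, T_min=13.0, T_max=40.0):
    """Unimodal trait rate as a function of temperature T (deg C)."""
    if T <= T_min or T >= T_max:
        return 0.0
    return c * T * (T - T_min) * (T_max - T) ** 0.5

rates = [briere(T) for T in (10.0, 25.0, 38.0)]  # nonzero only inside limits
```

Seasonal temperature variation drives transmission in such a model precisely because the trait rates vanish outside, and vary strongly within, this thermal window.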

  12. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques developed ourselves for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  13. Genomic prediction in a nuclear population of layers using single-step models.

    Science.gov (United States)

    Yan, Yiyuan; Wu, Guiqin; Liu, Aiqiao; Sun, Congjiao; Han, Wenpeng; Li, Guangqi; Yang, Ning

    2018-02-01

    The single-step genomic prediction method has been proposed to improve the accuracy of genomic prediction by incorporating information from both genotyped and ungenotyped animals. The objective of this study is to compare the prediction performance of single-step models with 2-step models and pedigree-based models in a nuclear population of layers. A total of 1,344 chickens across 4 generations were genotyped by a 600 K SNP chip. Four traits were analyzed, i.e., body weight at 28 wk (BW28), egg weight at 28 wk (EW28), laying rate at 38 wk (LR38), and Haugh unit at 36 wk (HU36). In predicting offspring, individuals from generations 1 to 3 were used as training data and females from generation 4 were used as the validation set. The accuracies of breeding values predicted by pedigree BLUP (PBLUP), genomic BLUP (GBLUP), SSGBLUP and single-step blending (SSBlending) were compared for both genotyped and ungenotyped individuals. For genotyped females, GBLUP performed no better than PBLUP because of the small size of the training data, while the 2 single-step models predicted more accurately than the PBLUP model. The average predictive abilities of SSGBLUP and SSBlending were 16.0% and 10.8% higher than that of the PBLUP model across traits, respectively. Furthermore, the predictive abilities for ungenotyped individuals were also enhanced. The average improvements in prediction ability were 5.9% and 1.5% for the SSGBLUP and SSBlending models, respectively. It was concluded that single-step models, especially the SSGBLUP model, can yield more accurate prediction of genetic merits and are preferable for practical implementation of genomic selection in layers. © 2017 Poultry Science Association Inc.
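GBLUP and its single-step variants rest on a genomic relationship matrix built from SNP genotypes. A minimal sketch of the widely used VanRaden method 1 construction is below; the 4-individual, 3-marker genotype matrix is made up for illustration and is far smaller than a real 600 K panel.

```python
# Sketch of the genomic relationship matrix (GRM), VanRaden method 1:
# Z = M - 2p, G = Z Z' / (2 * sum_j p_j (1 - p_j)).

def grm(M):
    """M: genotypes coded 0/1/2 (rows = individuals, cols = markers)."""
    n, m = len(M), len(M[0])
    p = [sum(row[j] for row in M) / (2.0 * n) for j in range(m)]  # allele freqs
    Z = [[M[i][j] - 2.0 * p[j] for j in range(m)] for i in range(n)]
    denom = 2.0 * sum(pj * (1.0 - pj) for pj in p)
    return [[sum(Z[i][k] * Z[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]
```

In single-step methods this G is blended with the pedigree relationship matrix A into a combined matrix H, which is what allows ungenotyped animals to benefit from genomic information.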

  14. Hybrid Prediction Model of the Temperature Field of a Motorized Spindle

    Directory of Open Access Journals (Sweden)

    Lixiu Zhang

    2017-10-01

    Full Text Available The thermal characteristics of a motorized spindle are the main determinants of its performance, and influence the machining accuracy of computer numerical control machine tools. It is important to accurately predict the thermal field of a motorized spindle during its operation to improve its thermal characteristics. This paper proposes a model to predict the temperature field of a high-speed and high-precision motorized spindle under different working conditions using a finite element model and test data. The finite element model considers the influence of the parameters of the cooling system and the lubrication system, and that of environmental conditions, on the coefficient of heat transfer, based on test data for the surface temperature of the motorized spindle. A genetic algorithm is used to optimize the coefficient of heat transfer of the spindle, and its temperature field is predicted using a three-dimensional model that employs this optimal coefficient. A prediction model of the temperature field of the 170MD30 motorized spindle is created and simulation data for the temperature field are compared with the test data. The results show that when the speed of the spindle is 10,000 rpm, the relative mean prediction error is 1.5%, and when its speed is 15,000 rpm, the prediction error is 3.6%. Therefore, the proposed prediction model can predict the temperature field of the motorized spindle with high accuracy.
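The calibration step (using a genetic algorithm to pick the heat transfer coefficient that best matches measured temperatures) can be sketched with a toy stand-in model. The steady-state cooling law, population sizes, and mutation scale below are all illustrative assumptions; the paper's objective function is the finite element model, not this one-liner.

```python
import random

# Hedged sketch: a minimal elitist GA that calibrates a scalar heat
# transfer coefficient h against a measured surface temperature.

def simulate(h, t_amb=20.0, q=500.0):
    """Stand-in thermal model: steady surface temperature for coefficient h."""
    return t_amb + q / h

def fit_h(measured_temp, pop=30, gens=80, lo=10.0, hi=200.0, seed=1):
    rng = random.Random(seed)
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda h: (simulate(h) - measured_temp) ** 2)
        parents = population[: pop // 2]          # keep the fittest half
        children = [min(hi, max(lo, rng.choice(parents) + rng.gauss(0.0, 2.0)))
                    for _ in range(pop - len(parents))]
        population = parents + children
    return population[0]                           # best-so-far survives sorting
```

In the paper, each fitness evaluation would be a finite element run compared against test data, but the search loop has the same shape.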

  15. Sr-isotope stratigraphy and dating of Neo-proterozoic carbonates and glacials from the northern and western parts of the Congo Craton

    International Nuclear Information System (INIS)

    Poidevin, J.L.

    2007-01-01

    Numerous occurrences of Neo-proterozoic carbonate platforms and glacigenic litho-facies are present around the Congo craton. They are especially well developed on its western and northern borders, i.e. in the fore-lands of the West Congo and Oubanguides belts. Sr isotopic stratigraphy enables us to characterize the deposition age of some carbonate units from these two domains. The 87Sr/86Sr isotopic ratios of limestones from the late 'Haut Shiloango' (0.7068) and the 'Schisto-calcaire' (0.7075) of the West-Congo domain are of post-Sturtian and post-Marinoan ages, respectively. The Lenda carbonates (0.7066) from the Northeast of the Democratic Republic of Congo and the limestones (0.7076) from the Bangui Basin, both in the Oubanguides fore-land, are of pre-Sturtian and post-Marinoan ages, respectively. These data associated with lithostratigraphic correlations allow us to ascribe the 'Bas Congo' lower mixtite (tillite) and the Akwokwo tillite (Lindian) to the Sturtian ice age. In the same way, the 'Bas Congo' upper mixtite (tillite) and the Bondo tillite (Bakouma Basin) are likely Marinoan in age. A new synthetic stratigraphy for these Neo-proterozoic domains is developed. (author)

  16. A three-dimensional stratigraphic model for aggrading submarine channels based on laboratory experiments, numerical modeling, and sediment cores

    Science.gov (United States)

    Limaye, A. B.; Komatsu, Y.; Suzuki, K.; Paola, C.

    2017-12-01

    Turbidity currents deliver clastic sediment from continental margins to the deep ocean, and are the main driver of landscape and stratigraphic evolution in many low-relief, submarine environments. The sedimentary architecture of turbidites—including the spatial organization of coarse and fine sediments—is closely related to the aggradation, scour, and lateral shifting of channels. Seismic stratigraphy indicates that submarine, meandering channels often aggrade rapidly relative to lateral shifting, and develop channel sand bodies with high vertical connectivity. In comparison, the stratigraphic architecture developed by submarine braided channels is relatively uncertain. We present a new stratigraphic model for submarine braided channels that integrates predictions from laboratory experiments and flow modeling with constraints from sediment cores. In the laboratory experiments, a saline density current developed subaqueous channels in plastic sediment. The channels aggraded to form a deposit with a vertical scale of approximately five channel depths. We collected topography data during aggradation to (1) establish relative stratigraphic age, and (2) estimate the sorting patterns of a hypothetical grain size distribution. We applied a numerical flow model to each topographic surface and used modeled flow depth as a proxy for relative grain size. We then conditioned the resulting stratigraphic model to observed grain size distributions using sediment core data from the Nankai Trough, offshore Japan. Using this stratigraphic model, we establish new, quantitative predictions for the two- and three-dimensional connectivity of coarse sediment as a function of fine-sediment fraction. Using this case study as an example, we will highlight outstanding challenges in relating the evolution of low-relief landscapes to the stratigraphic record.

  17. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

    In the paper, a model predictive controller based on a reduced order model is proposed to control a belt conveyor system, which is an electro-mechanical complex system with a long visco-elastic body. Firstly, in order to design a low-degree controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced order model for the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, the simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
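One ingredient of the scheme, the Kalman state estimator, can be sketched in scalar form. The conveyor's actual reduced-order model matrices are not given in the abstract, so the dynamics and noise covariances below are illustrative stand-ins.

```python
# Hedged sketch: one predict/update step of a scalar Kalman filter for
# x' = a*x + b*u with noisy measurement z = x + v. Parameters illustrative.

def kalman_step(x, P, z, a=1.0, b=0.1, u=0.0, q=0.01, r=0.25):
    x_pred = a * x + b * u            # state prediction
    P_pred = a * P * a + q            # covariance prediction
    K = P_pred / (P_pred + r)         # Kalman gain
    x_new = x_pred + K * (z - x_pred) # measurement update
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Iterating this step on noisy measurements drives the estimate toward the true state; in the paper's scheme two such estimators compensate for the model-reduction error inside the MPC loop.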

  18. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo

    2017-01-05

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects u that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component u as the first model plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have superior prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.
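The Gaussian kernel (GK) alternative to the linear GBLUP kernel can be sketched directly from marker data. A common construction, assumed here, scales squared marker distances by their median before exponentiating; the bandwidth h and the tiny marker matrix are illustrative choices, not the study's settings.

```python
import math

# Hedged sketch: Gaussian kernel matrix over lines' marker vectors,
# K[i][j] = exp(-h * d_ij^2 / median(d^2)).

def gaussian_kernel(X, h=1.0):
    n = len(X)
    d2 = [[sum((xi - xj) ** 2 for xi, xj in zip(X[i], X[j])) for j in range(n)]
          for i in range(n)]
    flat = sorted(d2[i][j] for i in range(n) for j in range(i + 1, n))
    med = flat[len(flat) // 2]          # median off-diagonal squared distance
    return [[math.exp(-h * d2[i][j] / med) for j in range(n)] for i in range(n)]
```

Unlike the linear GBLUP kernel, this K captures non-additive similarity between lines, which is one reason GK and GBLUP can rank environments and candidates differently.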

  19. Geomechanical Modeling of CO2 Injection Site to Predict Wellbore Stresses and Strains for the Design of Wellbore Seal Repair Materials

    Science.gov (United States)

    Sobolik, S. R.; Gomez, S. P.; Matteo, E. N.; Stormont, J.

    2015-12-01

    This paper will present the results of large-scale three-dimensional calculations simulating the hydrological-mechanical behavior of a CO2 injection reservoir and the resulting effects on wellbore casings and sealant repair materials. A critical aspect of designing effective wellbore seal repair materials is predicting thermo-mechanical perturbations in local stress that can compromise seal integrity. The DOE-NETL project "Wellbore Seal Repair Using Nanocomposite Materials" is interested in the stress-strain history of abandoned wells, as well as changes in local pressure, stress, and temperature conditions that accompany carbon dioxide injection or brine extraction. Two distinct computational models comprise the current modeling effort. The first is a field-scale model that uses the stratigraphy, material properties, and injection history from a pilot CO2 injection operation in Cranfield, MS to develop a stress-strain history for wellbore locations from 100 to 400 meters from an injection well. The results from the field-scale model are used as input to a more detailed model of a wellbore casing. The 3D wellbore model examines the impacts of various loading scenarios on a casing structure. This model has been developed in conjunction with bench-top experiments of an integrated seal system in an idealized scaled wellbore mock-up being used to test candidate seal repair materials. The results from these models will be used to estimate the mechanical properties needed for a successful repair material. This material is based upon work supported by the US Department of Energy (DOE) National Energy Technology Laboratory (NETL) under Grant Number DE-FE0009562. This project is managed and administered by the Storage Division of the NETL and funded by DOE/NETL and cost-sharing partners. This work was funded in part by the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science.

  20. Graptolite assemblages and stratigraphy of the lower Silurian Mrákotín Formation, Hlinsko Zone, NE interior of the Bohemian Massif (Czech Republic)

    Czech Academy of Sciences Publication Activity Database

    Štorch, Petr; Kraft, P.

    2009-01-01

    Vol. 84, No. 1 (2009), pp. 51-74 ISSN 1214-1119 R&D Projects: GA AV ČR IAA3013405 Institutional research plan: CEZ:AV0Z30130516 Keywords: graptolites * stratigraphy * Llandovery * Hlinsko Zone * Bohemian Massif Subject RIV: DB - Geology; Mineralogy Impact factor: 0.983, year: 2009

  1. Data Quality Enhanced Prediction Model for Massive Plant Data

    International Nuclear Information System (INIS)

    Park, Moon-Ghu; Kang, Seong-Ki; Shin, Hajin

    2016-01-01

    This paper introduces integrated signal preconditioning and model prediction based mainly on kernel functions. The performance and benefits of the methods are demonstrated by a case study with measurement data from a power plant and transient data from its components. The developed methods will be applied as part of a platform for monitoring massive (big) data, where human experts cannot detect fault behaviors because the measurements are too large. Recent extensive efforts at on-line monitoring implementation have shown that a big surprise in modeling for predicting process variables was the extent of data quality problems in measurement data, especially for data-driven modeling. Bad data used for training will be learned as normal and can significantly degrade prediction performance. For this reason, the quantity and quality of measurement data in the modeling phase need special care: bad-quality data must be removed from training sets so that they are not treated as normal system behavior. This paper presents an integrated structure of a supervisory system for monitoring plant and sensor performance. The quality of the data-driven model is improved with a bilateral kernel filter for preprocessing of the noisy data. The prediction module is also based on kernel regression, having the same basis as the noise filter. The model structure is optimized by a grouping process with a nonlinear Hoeffding correlation function.
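The kernel-regression prediction module described in the abstract is in the family of Nadaraya-Watson estimators, sketched below with a Gaussian kernel. The bandwidth and data are illustrative; the paper's exact kernel and filtering setup are not given in the abstract.

```python
import math

# Hedged sketch: Nadaraya-Watson kernel regression with a Gaussian kernel.

def kernel_predict(x, xs, ys, bandwidth=1.0):
    """Predict y at x as a kernel-weighted average of training targets."""
    w = [math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

Because the prediction shares its kernel basis with the bilateral noise filter, smoothing and prediction behave consistently, which is the design point the abstract emphasizes.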

  2. Data Quality Enhanced Prediction Model for Massive Plant Data

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon-Ghu [Nuclear Engr. Sejong Univ., Seoul (Korea, Republic of); Kang, Seong-Ki [Monitoring and Diagnosis, Suwon (Korea, Republic of); Shin, Hajin [Saint Paul Preparatory Seoul, Seoul (Korea, Republic of)

    2016-10-15

    This paper introduces integrated signal preconditioning and model prediction based mainly on kernel functions. The performance and benefits of the methods are demonstrated by a case study with measurement data from a power plant and transient data from its components. The developed methods will be applied as part of a platform for monitoring massive (big) data, where human experts cannot detect fault behaviors because the measurements are too large. Recent extensive efforts at on-line monitoring implementation have shown that a big surprise in modeling for predicting process variables was the extent of data quality problems in measurement data, especially for data-driven modeling. Bad data used for training will be learned as normal and can significantly degrade prediction performance. For this reason, the quantity and quality of measurement data in the modeling phase need special care: bad-quality data must be removed from training sets so that they are not treated as normal system behavior. This paper presents an integrated structure of a supervisory system for monitoring plant and sensor performance. The quality of the data-driven model is improved with a bilateral kernel filter for preprocessing of the noisy data. The prediction module is also based on kernel regression, having the same basis as the noise filter. The model structure is optimized by a grouping process with a nonlinear Hoeffding correlation function.

  3. Frequency weighted model predictive control of wind turbine

    DEFF Research Database (Denmark)

    Klauco, Martin; Poulsen, Niels Kjølstad; Mirzaei, Mahmood

    2013-01-01

    This work is focused on applying frequency weighted model predictive control (FMPC) on a three blade horizontal axis wind turbine (HAWT). A wind turbine is a very complex, non-linear system influenced by a stochastic wind speed variation. The reduced dynamics considered in this work are the rotational degree of freedom of the rotor and the tower fore-aft movement. The MPC design is based on a receding horizon policy and a linearised model of the wind turbine. Due to the change of dynamics according to wind speed, several linearisation points must be considered and the control design adjusted. Results with the frequency weighted predictive controller are presented, and a statistical comparison between frequency weighted MPC, standard MPC and a baseline PI controller is shown as well.

  4. Enhancing pavement performance prediction models for the Illinois Tollway System

    Directory of Open Access Journals (Sweden)

    Laxmikanth Premkumar

    2016-01-01

    Full Text Available Accurate pavement performance prediction plays an important role in prioritizing future maintenance and rehabilitation needs and in predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway), with over 2000 lane miles of pavement, utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by the Tollway to predict the future condition of its network. The model projects future CRS ratings based on pavement type, thickness, traffic, pavement age and current CRS rating. However, with time and the inclusion of newer pavement types, there was a need to calibrate the existing pavement performance models as well as develop models for the newer pavement types. This study presents the results of calibrating the existing models and developing new models for the various pavement types in the Illinois Tollway network. The predicted future condition of the pavements is used in estimating their remaining service life to failure, which is of immediate use in recommending future maintenance and rehabilitation requirements for the network. Keywords: Pavement performance models, Remaining life, Pavement management
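How a calibrated performance curve yields remaining service life can be sketched with a deliberately simple linear deterioration assumption: project the CRS rating forward and find when it crosses a failure threshold. The deterioration rate and threshold below are hypothetical; the actual Tollway/IDOT model forms are not given in the abstract.

```python
# Hedged sketch: remaining service life under a linear CRS deterioration
# assumption. Rate and failure threshold are illustrative placeholders.

def remaining_life(crs_now, rate_per_year, crs_failure=4.5):
    """Years until the projected CRS rating reaches the failure threshold."""
    if crs_now <= crs_failure:
        return 0.0
    return (crs_now - crs_failure) / rate_per_year

years = remaining_life(crs_now=7.5, rate_per_year=0.25)  # (7.5 - 4.5) / 0.25
```

The real models condition the deterioration rate on pavement type, thickness, traffic, and age, but the remaining-life computation reduces to the same threshold-crossing idea.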

  5. Estimating the magnitude of prediction uncertainties for field-scale P loss models

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, an uncertainty analysis for the Annual P Loss Estima...

  6. Development of a prognostic model for predicting spontaneous singleton preterm birth.

    Science.gov (United States)

    Schaaf, Jelle M; Ravelli, Anita C J; Mol, Ben Willem J; Abu-Hanna, Ameen

    2012-10-01

    To develop and validate a prognostic model for prediction of spontaneous preterm birth. Prospective cohort study using data from the nationwide perinatal registry in The Netherlands. We studied 1,524,058 singleton pregnancies between 1999 and 2007. We developed a multiple logistic regression model to estimate the risk of spontaneous preterm birth based on maternal and pregnancy characteristics. We used bootstrapping techniques to internally validate our model. Discrimination (AUC), accuracy (Brier score) and calibration (calibration graphs and Hosmer-Lemeshow C-statistic) were used to assess the model's predictive performance. Our primary outcome measure was spontaneous preterm birth at <37 weeks of gestation. The model included 13 variables for predicting preterm birth. The predicted probabilities ranged from 0.01 to 0.71 (IQR 0.02-0.04). The model had an area under the receiver operator characteristic curve (AUC) of 0.63 (95% CI 0.63-0.63), the Brier score was 0.04 (95% CI 0.04-0.04) and the Hosmer-Lemeshow C-statistic was significant across values of predicted probability. The positive predictive value was 26% (95% CI 20-33%) for the 0.4 probability cut-off point. The model's discrimination was fair and it had modest calibration. Previous preterm birth, drug abuse and vaginal bleeding in the first half of pregnancy were the most important predictors of spontaneous preterm birth. Although not yet applicable in clinical practice, this model is a next step towards early prediction of spontaneous preterm birth that would enable caregivers to start preventive therapy in women at higher risk. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
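The modelling recipe above (multiple logistic regression, bootstrap internal validation, AUC) can be sketched with no external dependencies. The synthetic data and coefficient values below stand in for the registry; only the procedure mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 covariates
beta_true = np.array([-2.5, 0.8, 0.5, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def fit_logistic(X, y, iters=25):
    """Newton-Raphson fit of a multiple logistic regression."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ b))
        W = mu * (1 - mu)
        H = X.T @ (X * W[:, None])
        b += np.linalg.solve(H, X.T @ (y - mu))
    return b

def auc(y, score):
    """Area under the ROC curve via the rank-sum formulation."""
    order = np.argsort(score)
    ranks = np.empty(len(score)); ranks[order] = np.arange(1, len(score) + 1)
    n1 = y.sum()
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(y) - n1))

b = fit_logistic(X, y)
apparent = auc(y, X @ b)

# Bootstrap internal validation: refit on resamples, score the original data.
boot = [auc(y, X @ fit_logistic(X[i], y[i]))
        for i in (rng.integers(0, n, n) for _ in range(100))]
print(round(float(apparent), 2), round(float(np.mean(boot)), 2))
```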

  7. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  8. Risk prediction model for knee pain in the Nottingham community: a Bayesian modelling approach.

    Science.gov (United States)

    Fernandes, G S; Bhattacharya, A; McWilliams, D F; Ingham, S L; Doherty, M; Zhang, W

    2017-03-20

    Twenty-five percent of the British population over the age of 50 years experiences knee pain. Knee pain can limit physical ability, cause distress and bears significant socioeconomic costs. The objectives of this study were to develop and validate the first risk prediction model for incident knee pain in the Nottingham community and validate this internally within the Nottingham cohort and externally within the Osteoarthritis Initiative (OAI) cohort. A total of 1822 participants from the Nottingham community who were at risk for knee pain were followed for 12 years. Of this cohort, two-thirds (n = 1203) were used to develop the risk prediction model, and one-third (n = 619) were used to validate the model. Incident knee pain was defined as pain on most days for at least 1 month in the past 12 months. Predictors were age, sex, body mass index, pain elsewhere, prior knee injury and knee alignment. A Bayesian logistic regression model was used to determine the probability of an OR >1. The Hosmer-Lemeshow χ² statistic (HLS) was used for calibration, and ROC curve analysis was used for discrimination. The OAI cohort from the United States was also used to examine the performance of the model. A risk prediction model for knee pain incidence was developed using a Bayesian approach. The model had good calibration, with an HLS of 7.17 (p = 0.52) and moderate discriminative ability (ROC 0.70) in the community. Individual scenarios are given using the model. However, the model had poor calibration in the OAI cohort (HLS 5866.28, p < 0.001). In conclusion, we developed the first risk prediction model for knee pain, regardless of underlying structural changes of knee osteoarthritis, in the community using a Bayesian modelling approach. The model appears to work well in a community-based population but not in individuals with a higher risk for knee osteoarthritis, and it may provide a convenient tool for use in primary care to predict the risk of knee pain in the general population.
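The Hosmer-Lemeshow calibration check used above groups subjects into deciles of predicted risk and compares observed with expected event counts. A minimal sketch on simulated risks follows; the fixed ten-group binning and the omission of the chi-square p-value are our simplifications.

```python
import numpy as np

rng = np.random.default_rng(2)
p_hat = rng.uniform(0.05, 0.6, 1000)      # predicted risks from some model
y = rng.binomial(1, p_hat)                # outcomes consistent with predictions

def hosmer_lemeshow(y, p_hat, groups=10):
    """Hosmer-Lemeshow chi-square statistic over quantile groups of risk."""
    edges = np.quantile(p_hat, np.linspace(0, 1, groups + 1))
    bins = np.clip(np.searchsorted(edges, p_hat, side="right") - 1, 0, groups - 1)
    chi2 = 0.0
    for g in range(groups):
        m = bins == g
        obs, exp, n_g = y[m].sum(), p_hat[m].sum(), m.sum()
        # (O - E)^2 / (n * pi * (1 - pi)) with pi = E / n.
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n_g))
    return chi2

print(round(hosmer_lemeshow(y, p_hat), 2))
```

For a well-calibrated model the statistic stays near its degrees of freedom; a value like the 5866.28 reported for the OAI cohort signals severe miscalibration.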

  9. A Comparative Study of Spectral Auroral Intensity Predictions From Multiple Electron Transport Models

    Science.gov (United States)

    Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha

    2018-01-01

    It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.

  10. The Comparison Study of Short-Term Prediction Methods to Enhance the Model Predictive Controller Applied to Microgrid Energy Management

    Directory of Open Access Journals (Sweden)

    César Hernández-Hernández

    2017-06-01

    Full Text Available Electricity load forecasting, optimal power system operation and energy management play key roles that can bring significant operational advantages to microgrids. This paper studies how methods based on time series and neural networks can be used to predict energy demand and production, allowing them to be combined with model predictive control. Comparisons of different prediction methods and different optimum energy distribution scenarios are provided, permitting us to determine when short-term energy prediction models should be used. The proposed prediction models, in combination with the model predictive control strategy, appear to be a promising solution for energy management in microgrids. The controller has the task of managing the purchase and sale of electricity to the power grid, maximizing the use of renewable energy sources and managing the use of the energy storage system. Simulations were performed under different weather conditions of solar irradiation. The results obtained are encouraging for future practical implementation.
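A short-term predictor of the kind compared above can be as simple as an autoregressive (AR) model fitted by least squares to a periodic load signal. The synthetic daily profile and the AR order below are illustrative, not the paper's benchmark data or methods.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(500)
# Synthetic hourly load with a daily (24-step) cycle plus noise.
load = 10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

order = 26                                  # AR order (> one daily cycle)
# Lagged design matrix: load[k] ~ c + sum_i a_i * load[k - order + i].
rows = [load[i:i + order] for i in range(t.size - order)]
X = np.column_stack([np.ones(len(rows)), np.array(rows)])
y = load[order:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead predictions on the training window.
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(round(float(rmse), 3))                # close to the noise level
```

In the MPC setting of the abstract, such a forecaster would supply the demand/production trajectories over the controller's prediction horizon.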

  11. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    Science.gov (United States)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect of controlling and managing software, and such evaluation enables improvements in the software process. Software quality depends significantly on software usability. Many researchers have proposed a number of usability models; each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. It is also very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, bringing together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system, named the fuzzy hierarchical usability model, can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed, and these models are ranked according to their predicted usability values. This research also presents a detailed comparison of the proposed model with existing usability models.
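The core mechanics of a fuzzy hierarchical evaluation — weighted aggregation of leaf attribute scores, fuzzification into linguistic levels, and defuzzification into a single usability value — can be sketched briefly. The factor names, weights and membership breakpoints here are invented for illustration; the paper's taxonomy is far richer.

```python
def tri(x, a, b, c):
    """Triangular membership of x in the fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical leaf attribute scores on [0, 10] and their weights.
scores = {"learnability": 7.0, "efficiency": 8.0, "satisfaction": 6.0}
weights = {"learnability": 0.4, "efficiency": 0.35, "satisfaction": 0.25}

crisp = sum(weights[k] * scores[k] for k in scores)   # weighted aggregate

# Fuzzify the aggregate into linguistic levels, then defuzzify by the
# membership-weighted centroid of level midpoints.
levels = {"low": (0, 0, 5), "medium": (2.5, 5, 7.5), "high": (5, 10, 10)}
midpoints = {"low": 2.5, "medium": 5.0, "high": 7.5}
memberships = {name: tri(crisp, *abc) for name, abc in levels.items()}
den = sum(memberships.values())
usability = (sum(memberships[n] * midpoints[n] for n in levels) / den
             if den else crisp)
print(round(crisp, 2), round(usability, 2))
```

Ranking several SDLC models would then amount to computing `usability` for each and sorting.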

  12. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together, our current work ...
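The mechanism described above — residual evidence for the suppressed percept generating a prediction error that eventually forces a transition — can be caricatured as a leaky noisy accumulator. The parameters and accumulator form below are our own simplification for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(4)
percept = 0                     # current interpretation (0 or 1)
error = 0.0                     # accumulated prediction error
threshold, leak, gain = 1.0, 0.02, 0.05
durations, t_last = [], 0

for t in range(20000):
    # The suppressed percept keeps supplying noisy residual evidence.
    error += gain * (0.5 + rng.normal(0, 0.5)) - leak * error
    if error >= threshold:      # prediction error forces a perceptual switch
        percept = 1 - percept
        durations.append(t - t_last)
        t_last, error = t, 0.0

mean_dur = float(np.mean(durations))
cv = float(np.std(durations) / mean_dur)
print(len(durations), round(mean_dur, 1), round(cv, 2))
```

Even this toy version reproduces one key signature of bistability: irregular, roughly gamma-shaped dominance durations rather than clock-like alternation.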

  13. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together ...

  14. Study on prediction model of irradiation embrittlement for reactor pressure vessel steel

    International Nuclear Information System (INIS)

    Wang Rongshan; Xu Chaoliang; Huang Ping; Liu Xiangbing; Ren Ai; Chen Jun; Li Chengliang

    2014-01-01

    The study of prediction models of irradiation embrittlement for reactor pressure vessel (RPV) steel is an important method for supporting long term operation. Based on a detailed analysis of previous prediction models developed worldwide, the drawbacks of these models were identified and a new irradiation embrittlement prediction model, PMIE-2012, was developed. A corresponding reliability assessment was carried out using irradiation surveillance data. The assessment results show that PMIE-2012 has high reliability and accuracy in irradiation embrittlement prediction. (authors)

  15. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of ...

  16. TH-A-9A-01: Active Optical Flow Model: Predicting Voxel-Level Dose Prediction in Spine SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J; Wu, Q.J.; Yin, F; Kirkpatrick, J; Cabrera, A [Duke University Medical Center, Durham, NC (United States); Ge, Y [University of North Carolina at Charlotte, Charlotte, NC (United States)

    2014-06-15

    Purpose: To predict voxel-level dose distribution and enable effective evaluation of cord dose sparing in spine SBRT. Methods: We present an active optical flow model (AOFM) to statistically describe cord dose variations and train a predictive model to represent correlations between AOFM and PTV contours. Thirty clinically accepted spine SBRT plans are evenly divided into training and testing datasets. The development of the predictive model consists of 1) collecting a sequence of dose maps including PTV and OAR (spinal cord) as well as a set of associated PTV contours adjacent to OAR from the training dataset, 2) classifying data into five groups based on the PTV's location relative to OAR: two “Top”s, “Left”, “Right”, and “Bottom”, 3) randomly selecting a dose map as the reference in each group and applying rigid registration and optical flow deformation to match all other maps to the reference, 4) building the AOFM by importing optical flow vectors and dose values into principal component analysis (PCA), 5) applying another PCA to features of PTV and OAR contours to generate an active shape model (ASM), and 6) computing a linear regression model of correlations between AOFM and ASM. When predicting the dose distribution of a new case in the testing dataset, the PTV is first assigned to a group based on its contour characteristics. Contour features are then transformed into the ASM's principal coordinates of the selected group. Finally, voxel-level dose distribution is determined by mapping from the ASM space to the AOFM space using the predictive model. Results: The DVHs predicted by the AOFM-based model and those in clinical plans are comparable in training and testing datasets. At 2% volume the dose difference between predicted and clinical plans is 4.2±4.4% and 3.3±3.5% in the training and testing datasets, respectively. Conclusion: The AOFM is effective in predicting voxel-level dose distribution for spine SBRT. Partially supported by NIH
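The skeleton of steps 4-6 above — PCA on dose-related features, PCA on contour features, and a linear regression linking the two latent spaces — can be sketched generically. The random feature matrices are placeholders; only the pipeline structure mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
n_plans = 15
shape_feats = rng.normal(size=(n_plans, 8))        # PTV/OAR contour features
M = rng.normal(size=(8, 40))                       # hidden linear shape->dose map
dose_feats = shape_feats @ M + 0.1 * rng.normal(size=(n_plans, 40))

def pca(X, k):
    """Return (mean, components, scores) for a k-component PCA via SVD."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k], (X - mu) @ Vt[:k].T

mu_s, P_s, Z_s = pca(shape_feats, 4)   # "active shape model" coordinates
mu_d, P_d, Z_d = pca(dose_feats, 4)    # "AOFM" coordinates

# Linear regression between the two principal-coordinate spaces.
W, *_ = np.linalg.lstsq(Z_s, Z_d, rcond=None)

def predict_dose(new_shape):
    z_s = (new_shape - mu_s) @ P_s.T   # project contours into the ASM space
    z_d = z_s @ W                      # map to the dose (AOFM) space
    return mu_d + z_d @ P_d            # reconstruct voxel-level dose features

recon = predict_dose(shape_feats[0])
err = np.linalg.norm(recon - dose_feats[0]) / np.linalg.norm(dose_feats[0])
print(round(float(err), 3))
```

The real model adds the group assignment and optical-flow registration steps before the PCA, so that maps being compared are spatially aligned.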

  17. TH-A-9A-01: Active Optical Flow Model: Predicting Voxel-Level Dose Prediction in Spine SBRT

    International Nuclear Information System (INIS)

    Liu, J; Wu, Q.J.; Yin, F; Kirkpatrick, J; Cabrera, A; Ge, Y

    2014-01-01

    Purpose: To predict voxel-level dose distribution and enable effective evaluation of cord dose sparing in spine SBRT. Methods: We present an active optical flow model (AOFM) to statistically describe cord dose variations and train a predictive model to represent correlations between AOFM and PTV contours. Thirty clinically accepted spine SBRT plans are evenly divided into training and testing datasets. The development of the predictive model consists of 1) collecting a sequence of dose maps including PTV and OAR (spinal cord) as well as a set of associated PTV contours adjacent to OAR from the training dataset, 2) classifying data into five groups based on the PTV's location relative to OAR: two “Top”s, “Left”, “Right”, and “Bottom”, 3) randomly selecting a dose map as the reference in each group and applying rigid registration and optical flow deformation to match all other maps to the reference, 4) building the AOFM by importing optical flow vectors and dose values into principal component analysis (PCA), 5) applying another PCA to features of PTV and OAR contours to generate an active shape model (ASM), and 6) computing a linear regression model of correlations between AOFM and ASM. When predicting the dose distribution of a new case in the testing dataset, the PTV is first assigned to a group based on its contour characteristics. Contour features are then transformed into the ASM's principal coordinates of the selected group. Finally, voxel-level dose distribution is determined by mapping from the ASM space to the AOFM space using the predictive model. Results: The DVHs predicted by the AOFM-based model and those in clinical plans are comparable in training and testing datasets. At 2% volume the dose difference between predicted and clinical plans is 4.2±4.4% and 3.3±3.5% in the training and testing datasets, respectively. Conclusion: The AOFM is effective in predicting voxel-level dose distribution for spine SBRT. Partially supported by NIH

  18. The Complexity of Developmental Predictions from Dual Process Models

    Science.gov (United States)

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  19. A polynomial based model for cell fate prediction in human diseases.

    Science.gov (United States)

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decisions sheds light on key regulators, facilitates understanding of the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, with both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees differ little. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, the accuracy of the linear polynomial that uses the correlation analysis outcomes is slightly higher (86.62%), but the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as for clinical studies of cell development related diseases.
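The comparison of polynomial degrees under 10-fold cross-validation described above can be sketched as follows. Synthetic data replace the pancreatic-cell expression matrix, gene selection is skipped, and a least-squares fit stands in for whatever estimator the study used; only the degree-1 vs degree-2 comparison is mirrored.

```python
import numpy as np

rng = np.random.default_rng(6)
n, g = 120, 3
X = rng.normal(size=(n, g))                        # "expression" of 3 genes
y = (0.6 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.3, n) > 0).astype(float)

def poly_features(X, degree):
    """Polynomial (Taylor-style) feature expansion up to the given degree."""
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(X.shape[1])]
    if degree >= 2:
        for j in range(X.shape[1]):
            for k in range(j, X.shape[1]):
                cols.append(X[:, j] * X[:, k])
    return np.column_stack(cols)

def cv_accuracy(degree, folds=10):
    """10-fold cross-validated classification accuracy."""
    idx = np.arange(n)
    np.random.default_rng(0).shuffle(idx)
    P = poly_features(X, degree)
    correct = 0
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        w, *_ = np.linalg.lstsq(P[train], y[train], rcond=None)
        correct += ((P[test] @ w > 0.5) == (y[test] > 0.5)).sum()
    return correct / n

print(round(cv_accuracy(1), 2), round(cv_accuracy(2), 2))
```

On data with an essentially linear decision rule, as here, the degree-1 model matches the degree-2 model while using fewer parameters, which is the kind of accuracy/stability trade-off the study reports.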

  20. Test of 1-D transport models, and their predictions for ITER

    International Nuclear Information System (INIS)

    Mikkelsen, D.; Bateman, G.; Boucher, D.

    2001-01-01

    A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-Mod, DIII-D, JET, JT-60U, T-10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼1 GW (Q=10) is predicted by most models if the pedestal temperature is at least 4 keV. (author)