WorldWideScience

Sample records for macro-scale baroclinic prediction

  1. Scaling up: Assessing social impacts at the macro-scale

    International Nuclear Information System (INIS)

    Schirmer, Jacki

    2011-01-01

Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but is challenging because multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south-east Australia is described.

  2. Characteristics of soil water retention curve at macro-scale

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

Scale-adaptable hydrological models have attracted more and more attention in the hydrological modeling research community, and the constitutive relationship at the macro-scale is one of the most important issues, on which there has been little research so far. Taking a constitutive relationship of soil water movement, the soil water retention curve (SWRC), as an example, this study extends the definition of the SWRC at the micro-scale to the macro-scale, and with the aid of the Monte Carlo method we demonstrate that soil properties and the spatial distribution of soil moisture greatly affect the features of the SWRC. Furthermore, we assume that the spatial distribution of soil moisture is the result of self-organization of climate, soil, groundwater and soil water movement under specific boundary conditions, and we carry out numerical experiments of soil water movement in the vertical direction in order to explore the relationship between the macro-scale SWRC and combinations of climate, soil, and groundwater. The results show that SWRCs at the macro-scale and micro-scale present totally different features, e.g., an essential hysteresis phenomenon which is exaggerated with increasing aridity index and rising groundwater table. Soil properties play an important role in the shape of the SWRC, which can even become rectangular under drier conditions, and the power-function form of the SWRC widely adopted in hydrological models might need to be revised for most situations at the macro-scale.
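
    The core idea — that spatially averaging heterogeneous micro-scale retention curves yields a macro-scale curve with a different shape — can be illustrated with a small Monte Carlo sketch. The van Genuchten curve and all parameter values here are hypothetical stand-ins; the abstract does not specify the micro-scale model used.

```python
import numpy as np

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Micro-scale retention curve theta(psi); psi is suction head (positive)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * psi) ** n) ** (-m)          # effective saturation
    return theta_r + (theta_s - theta_r) * se

rng = np.random.default_rng(0)
psi = np.logspace(-2, 2, 50)                          # suction heads

# Heterogeneous "field": 1000 points with randomly varying soil parameters,
# standing in for the spatial distribution of soil properties and moisture
alphas = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=1000)
ns = rng.uniform(1.2, 2.5, size=1000)

theta_micro = np.array([van_genuchten(psi, 0.05, 0.45, a, n)
                        for a, n in zip(alphas, ns)])

# Macro-scale SWRC: spatial (here, ensemble) average of the micro-scale curves;
# its shape differs from every individual micro-scale curve
theta_macro = theta_micro.mean(axis=0)
```

    Comparing `theta_macro` against any single row of `theta_micro` shows the flattening of the averaged curve, a toy analogue of the shape changes the study reports.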

  3. Predator-prey interactions as macro-scale drivers of species diversity in mammals

    DEFF Research Database (Denmark)

    Sandom, Christopher James; Sandel, Brody Steven; Dalby, Lars

    Background/Question/Methods Understanding the importance of predator-prey interactions for species diversity is a central theme in ecology, with fundamental consequences for predicting the responses of ecosystems to land use and climate change. We assessed the relative support for different...... mechanistic drivers of mammal species richness at macro-scales for two trophic levels: predators and prey. To disentangle biotic (i.e. functional predator-prey interactions) from abiotic (i.e. environmental) and bottom-up from top-down determinants we considered three hypotheses: 1) environmental factors...... that determine ecosystem productivity drive prey and predator richness (the productivity hypothesis, abiotic, bottom-up), 2) consumer richness is driven by resource diversity (the resource diversity hypothesis, biotic, bottom-up) and 3) consumers drive richness of their prey (the top-down hypothesis, biotic, top...

  4. Macro-scale turbulence modelling for flows in porous media

    International Nuclear Information System (INIS)

    Pinson, F.

    2006-03-01

This work deals with the macroscopic modeling of turbulence in porous media, with applications to heat exchangers and nuclear reactors as well as urban flows. The objective of this study is to describe, in a homogenized way by means of a spatial average operator, turbulent flows in a solid matrix. In addition to this first operator, a statistical average operator is used to handle the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations for the class of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at the macroscopic scale (Reynolds tensor and turbulent dispersion). To this end, we draw on local turbulence modeling, and more precisely on k - ε RANS models. The methodology of dispersion analysis, derived from volume-averaging theory, is extended to turbulent flows. Its application includes the simulation, at the microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even in the turbulent regime, dispersion remains one of the dominant phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominant role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic turbulence modeling and leads us to define the sub-filter production and the wake dissipation. A three-equation macroscopic model is derived, based on balance equations for the turbulent kinetic energy, the viscous dissipation and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. This model is then successfully applied to the study of

  5. Quantum manifestation of systems on the macro-scale – the concept ...

    Indian Academy of Sciences (India)

Transition amplitude; inelastic scattering; macro-scale quantum effects. … -ingly large wavelength of ∼5 cm for typical parameters (electron energy ε ∼ 1 keV) … and hence as the generator of the transition amplitude wave at its position.

  6. Line-scan macro-scale Raman chemical imaging for authentication of powdered foods and ingredients

    Science.gov (United States)

    Adulteration and fraud for powdered foods and ingredients are rising food safety risks that threaten consumers’ health. In this study, a newly developed line-scan macro-scale Raman imaging system using a 5 W 785 nm line laser as excitation source was used to authenticate the food powders. The system...

  7. Data-Science Analysis of the Macro-scale Features Governing the Corrosion to Crack Transition in AA7050-T7451

    Science.gov (United States)

    Co, Noelle Easter C.; Brown, Donald E.; Burns, James T.

    2018-05-01

    This study applies data science approaches (random forest and logistic regression) to determine the extent to which macro-scale corrosion damage features govern the crack formation behavior in AA7050-T7451. Each corrosion morphology has a set of corresponding predictor variables (pit depth, volume, area, diameter, pit density, total fissure length, surface roughness metrics, etc.) describing the shape of the corrosion damage. The values of the predictor variables are obtained from white light interferometry, x-ray tomography, and scanning electron microscope imaging of the corrosion damage. A permutation test is employed to assess the significance of the logistic and random forest model predictions. Results indicate minimal relationship between the macro-scale corrosion feature predictor variables and fatigue crack initiation. These findings suggest that the macro-scale corrosion features and their interactions do not solely govern the crack formation behavior. While these results do not imply that the macro-features have no impact, they do suggest that additional parameters must be considered to rigorously inform the crack formation location.
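
    The permutation-test logic used to vet the model predictions can be sketched in a few lines. The data below are synthetic: `pit_depth` and the crack labels are hypothetical stand-ins generated independently of each other, mirroring the paper's null finding rather than reproducing its measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins for one corrosion predictor and the crack labels
# (generated independently, i.e. with no real relationship by construction)
n = 200
pit_depth = rng.normal(50.0, 10.0, n)      # e.g. pit depth in micrometers
cracked = rng.integers(0, 2, n)            # 1 = crack initiated at this site

def permutation_pvalue(x, y, n_perm=2000, rng=rng):
    """Two-sided permutation test on the class difference of means of x."""
    observed = abs(x[y == 1].mean() - x[y == 0].mean())
    exceed = 0
    for _ in range(n_perm):
        yp = rng.permutation(y)
        if abs(x[yp == 1].mean() - x[yp == 0].mean()) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)     # add-one correction

p = permutation_pvalue(pit_depth, cracked)
```

    A large p-value here means the predictor's apparent association with crack formation is indistinguishable from chance, which is the kind of evidence behind the study's "minimal relationship" conclusion.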

  8. Handbook of damage mechanics nano to macro scale for materials and structures

    CERN Document Server

    2015-01-01

    This authoritative reference provides comprehensive coverage of the topics of damage and healing mechanics. Computational modeling of constitutive equations is provided as well as solved examples in engineering applications. A wide range of materials that engineers may encounter are covered, including metals, composites, ceramics, polymers, biomaterials, and nanomaterials. The internationally recognized team of contributors employ a consistent and systematic approach, offering readers a user-friendly reference that is ideal for frequent consultation. Handbook of Damage Mechanics: Nano to Macro Scale for Materials and Structures is ideal for graduate students and faculty, researchers, and professionals in the fields of Mechanical Engineering, Civil Engineering, Aerospace Engineering, Materials Science, and Engineering Mechanics.

  9. Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance

    Energy Technology Data Exchange (ETDEWEB)

    Gu, Wenbin [General Motors LLC, Pontiac, MI (United States)

    2014-08-29

This report documents the work performed by General Motors (GM) under Cooperative Agreement No. DE-EE0000470, “Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance,” in collaboration with Penn State University (PSU), the University of Tennessee Knoxville (UTK), Rochester Institute of Technology (RIT), and the University of Rochester (UR) via subcontracts. The overall objectives of the project are to investigate and synthesize fundamental understanding of transport phenomena at both the macro- and micro-scales for the development of a down-the-channel model that accounts for all transport domains in a broad operating space. GM, as the prime contractor, focused on cell-level experiments and modeling, and the universities, as subcontractors, worked toward fundamental understanding of each component and associated interface.

  10. Digital holographic setups for phase object measurements in micro and macro scale

    Directory of Open Access Journals (Sweden)

    Lédl Vít

    2015-01-01

The measurement of the properties of so-called phase objects has been pursued for more than a century, starting probably with the schlieren technique [1]. Classical interferometry served as a great measurement tool for several decades and was then superseded by holographic interferometry, which offers many benefits in comparison with classical interferometry. Holographic interferometry underwent an enormous development in the last decade as digital holography became established as a standard technique and most of its drawbacks were resolved. This paper surveys the wide applicability of digital holographic interferometry to heat and mass transfer measurement, from the micro to the macro scale and from simple 2D measurement up to complex tomographic techniques. Very complex experimental setups combining many techniques are currently under development in our labs, leading to digital holographic micro-tomography methods.

  11. Scaling of saturation amplitudes in baroclinic instability

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-01-01

    By using finite-amplitude conservation laws for pseudomomentum and pseudoenergy, rigorous upper bounds have been derived on the saturation amplitudes in baroclinic instability for layered and continuously-stratified quasi-geostrophic models. Bounds have been obtained for both the eddy energy and the eddy potential enstrophy. The bounds apply to conservative (inviscid, unforced) flow, as well as to forced-dissipative flow when the dissipation is proportional to the potential vorticity. This approach provides an efficient way of extracting an analytical estimate of the dynamical scalings of the saturation amplitudes in terms of crucial non-dimensional parameters. A possible use is in constructing eddy parameterization schemes for zonally-averaged climate models. The scaling dependences are summarized, and compared with those derived from weakly-nonlinear theory and from baroclinic-adjustment estimates

  12. From micro-scale 3D simulations to macro-scale model of periodic porous media

    Science.gov (United States)

    Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca

    2015-04-01

In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite-volume formulation of the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are processed to compute mean concentrations (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, distinguishing between terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, which we seek to describe with known micro-scale quantities. Traditionally, the study of colloidal transport introduces many simplifications, such as ultra-simplified geometries that typically account for a single collector. Gradual removal of such hypotheses leads to a
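
    The averaging step at the heart of this up-scaling can be sketched as follows. A synthetic 1D field stands in for the CFD output (all values hypothetical): each periodic module acts as a control volume over which the micro-scale concentration is volume-averaged, and the residual deviation field is what closure terms such as dispersion and tortuosity must represent.

```python
import numpy as np

# Synthetic 1D micro-scale concentration field over N periodic modules,
# each resolved with m grid cells (a hypothetical stand-in for CFD output)
N, m = 8, 64
x = np.linspace(0.0, N, N * m, endpoint=False)
c_micro = np.exp(-0.5 * x) * (1.0 + 0.2 * np.sin(2 * np.pi * x))  # mean decay + pore-scale fluctuation

# Macro-scale description: volume average over each module (the control volume)
c_macro = c_micro.reshape(N, m).mean(axis=1)

# Deviation field c' = c - <c>; its module-scale statistics feed the closure
# terms (dispersion, tortuosity) in the averaged transport equation
c_dev = c_micro - np.repeat(c_macro, m)
```

    By construction the deviation field averages to zero on every module, so all the sub-module information the macro-scale model "loses" is concentrated in `c_dev` — exactly the quantity the closure problem addresses.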

  13. Nondestructive chemical imaging of wood at the micro-scale: advanced technology to complement macro-scale evaluations

    Science.gov (United States)

    Barbara L. Illman; Julia Sedlmair; Miriam Unger; Carol Hirschmugl

    2013-01-01

Chemical images aid understanding of wood properties, durability, and cell wall deconstruction for conversion of lignocellulose to biofuels, nanocellulose and other value-added chemicals in forest biorefineries. We describe here a new method for nondestructive chemical imaging of wood and wood-based materials at the micro-scale to complement macro-scale methods based...

  14. Modelling PM10 aerosol data from the Qalabotjha low-smoke fuels macro-scale experiment in South Africa

    CSIR Research Space (South Africa)

    Engelbrecht, JP

    2000-03-30

Low-smoke fuels for combustion in cooking and heating appliances are being considered to mitigate human exposure to D-grade coal combustion emissions. In 1997, South Africa's Department of Minerals and Energy conducted a macro-scale experiment to test three brands of low...

  15. Stages in the energetics of baroclinic systems

    Science.gov (United States)

    Orlanski, Isidoro; Sheldon, John P.

    1995-10-01

The results from several idealized and case studies are drawn together to form a comprehensive picture of "downstream baroclinic evolution" using local energetics. This new viewpoint offers a complementary alternative to the more conventional descriptions of cyclone development. These additional insights are made possible largely because the local energetics approach permits one to define an energy flux vector which accurately describes the direction of energy dispersion and quantifies the role of neighboring systems in local development. In this view, the development of a system's energetics is divided into three stages. In Stage 1, a pre-existing disturbance well upstream of an incipient trough loses energy via ageostrophic geopotential fluxes directed downstream through the intervening ridge, generating a new energy center there. In Stage 2, this new energy center grows vigorously, at first due to the convergence of these fluxes, and later by baroclinic conversion as well. As the center matures, it begins to export energy via geopotential fluxes to the eastern side of the trough, initiating yet another energy center. In Stage 3, this new energy center continues to grow while that on the western side of the trough decays, due to a dwindling supply of energy via fluxes from the older upstream system and also as a consequence of its own export of energy downstream. As the eastern energy center matures, it exports energy further downstream, and the sequence begins anew. The USA "Blizzard of '93" is used as a new case study to test the limits to which this conceptual sequence might apply, as well as to augment the current limited set of case studies. It is shown that, despite the extraordinary magnitude of the event, the evolution of the trough associated with the Blizzard fits the conceptual picture of downstream baroclinic evolution quite well, with geopotential fluxes playing a critical role in three respects. First, fluxes from an old, decaying system in the

  16. Electrical current at micro-/macro-scale of undoped and nitrogen-doped MWPECVD diamond films

    Science.gov (United States)

    Cicala, G.; Velardi, L.; Senesi, G. S.; Picca, R. A.; Cioffi, N.

    2017-12-01

Chemical, structural, morphological and micro-/macro-electrical properties of undoped and nitrogen-(N-)doped diamond films are determined by X-ray photoelectron spectroscopy, Raman and photoluminescence spectroscopies, field emission scanning electron microscopy, atomic force microscopy, scanning capacitance microscopy (SCM) and the two-point technique for I-V characteristics, respectively. These characterization results are very useful for examining and understanding the relationships among these properties. The effect of nitrogen incorporation in diamond films is investigated through the evolution of the chemical, structural, morphological and topographical features and of the electrical behavior. The distribution of the electrical current is first assessed at the millimeter scale on the surface of diamond films and then at the micrometer scale on small regions in order to establish the sites where the carriers preferentially move. Specifically, the SCM images indicate a non-uniform distribution of carriers on the morphological structures, mainly located along the grain boundaries. Good agreement is found by comparing the electrical currents at the micro- and macro-scale. This work aims to highlight phenomena such as photo- and thermionic emission from N-doped diamond useful for microelectronic engineering.

  17. A new MHD/kinetic model for exploring energetic particle production in macro-scale systems

    Science.gov (United States)

    Drake, J. F.; Swisdak, M.; Dahlin, J. T.

    2017-12-01

A novel MHD/kinetic model is being developed to explore magnetic reconnection and particle energization in macro-scale systems such as the solar corona and the outer heliosphere. The model blends the MHD description with a macro-particle description. The rationale for this model is based on the recent discovery that energetic particle production during magnetic reconnection is controlled by Fermi reflection and Betatron acceleration and not parallel electric fields. Since the former mechanisms are not dependent on kinetic scales such as the Debye length and the electron and ion inertial scales, a model that sheds these scales is sufficient for describing particle acceleration in macro-systems. Our MHD/kinetic model includes macroparticles laid out on an MHD grid that are evolved with the MHD fields. Crucially, the feedback of the energetic component on the MHD fluid is included in the dynamics. Thus, energy of the total system, the MHD fluid plus the energetic component, is conserved. The system has no kinetic scales and therefore can be implemented to model energetic particle production in macro-systems with none of the constraints associated with a PIC model. Tests of the new model in simple geometries will be presented and potential applications will be discussed.

  18. Construction of Modular Hydrogel Sheets for Micropatterned Macro-scaled 3D Cellular Architecture.

    Science.gov (United States)

    Son, Jaejung; Bae, Chae Yun; Park, Je-Kyun

    2016-01-11

Hydrogels can be patterned at the micro-scale using microfluidic or micropatterning technologies to provide an in vivo-like three-dimensional (3D) tissue geometry. The resulting 3D hydrogel-based cellular constructs have been introduced as an alternative to animal experiments for advanced biological studies, pharmacological assays and organ transplant applications. Although hydrogel-based particles and fibers can be easily fabricated, it is difficult to manipulate them for tissue reconstruction. In this video, we describe a fabrication method for micropatterned alginate hydrogel sheets, together with their assembly to form a macro-scale 3D cell culture system with a controlled cellular microenvironment. Using a mist form of the calcium gelling agent, thin hydrogel sheets are easily generated with a thickness in the range of 100 - 200 µm, and with precise micropatterns. Cells can then be cultured with the geometric guidance of the hydrogel sheets in freestanding conditions. Furthermore, the hydrogel sheets can be readily manipulated using a micropipette with an end-cut tip, and can be assembled into multi-layered structures by stacking them using a patterned polydimethylsiloxane (PDMS) frame. These modular hydrogel sheets, which can be fabricated using a facile process, have potential applications in in vitro drug assays and biological studies, including functional studies of micro- and macrostructure and tissue reconstruction.

  19. Molecular and macro-scale analysis of enzyme-crosslinked silk hydrogels for rational biomaterial design.

    Science.gov (United States)

    McGill, Meghan; Coburn, Jeannine M; Partlow, Benjamin P; Mu, Xuan; Kaplan, David L

    2017-11-01

Silk fibroin-based hydrogels have exciting applications in tissue engineering and therapeutic molecule delivery; however, their utility is dependent on their diffusive properties. The present study describes a molecular and macro-scale investigation of enzymatically-crosslinked silk fibroin hydrogels, and demonstrates that these systems have tunable crosslink density and diffusivity. We developed a liquid chromatography tandem mass spectrometry (LC-MS/MS) method to assess the quantity and order of covalent tyrosine crosslinks in the hydrogels. This analysis revealed between 28 and 56% conversion of tyrosine to dityrosine, depending on the silk concentration and reactant concentration. The crosslink density was then correlated with storage modulus, revealing that both crosslinking and protein concentration influence the mechanical properties of the hydrogels. The diffusive properties of the bulk material were studied by fluorescence recovery after photobleaching (FRAP), which revealed a non-linear relationship between silk concentration and diffusivity. As a result of this work, a model for synthesizing hydrogels with known crosslink densities and diffusive properties has been established, enabling the rational design of silk hydrogels for biomedical applications. Hydrogels from naturally-derived silk polymers offer versatile opportunities in the biomedical field; however, their design has largely been an empirical process. We present a fundamental study of the crosslink density, storage modulus, and diffusion behavior of enzymatically-crosslinked silk hydrogels to better inform scaffold design. These studies revealed unexpected non-linear trends in the crosslink density and diffusivity of silk hydrogels with respect to protein concentration and crosslink reagent concentration. This work demonstrates the tunable diffusivity and crosslinking in silk fibroin hydrogels, and enables the rational design of biomaterials. Further, the characterization methods

  20. Hydrocarbon Migration from the Micro to Macro Scale in the Gulf of Mexico

    Science.gov (United States)

    Johansen, C.; Marty, E.; Silva, M.; Natter, M.; Shedd, W. W.; Hill, J. C.; Viso, R. F.; Lobodin, V.; Krajewski, L.; Abrams, M.; MacDonald, I. R.

    2016-02-01

In the Northern Gulf of Mexico (GoM) at GC600, ECOGIG has been investigating the processes involved in hydrocarbon migration from deep reservoirs to the sea surface. We studied two individual vents, Birthday Candles (BC) and Mega-Plume (MP), which are separated by 1 km on a salt-supported ridge trending NW-SE. Seismic data depict two faults, also separated by 1 km, feeding into the surface gas hydrate region. BC and MP span the range between oily, mixed, and gaseous-type vents. In both cases bubbles are observed escaping from gas hydrate outcrops at the sea floor and supporting chemosynthetic communities. Fluid flow is indicated by sea-floor features such as hydrate mounds, authigenic carbonates, brine pools, mud volcanoes, and biology. We propose a model describing the upward flow of hydrocarbons at three vertical scales, each dominated by different factors: 1) macro (capillary failure in overlying cap rocks causing reservoir leakage), 2) meso (buoyancy-driven fault migration), and 3) micro (hydrate formation and chemosynthetic activity). At the macro scale we use high reflectivity in seismic data and sediment pore-throat radii to determine the formation of fractures in leaky reservoirs. Once oil and gas leave the reservoir through fractures in the cap rock, they migrate in separate phases. At the meso scale we use seismic data to locate faults and salt diapirs that form conduits for buoyant hydrocarbons to follow. This connects the path to the micro scale, where we used video data to observe bubble release from individual vents for extended periods of time (3 h-26 d) and developed an image-processing program to quantify bubble release rates. At mixed vents, gaseous bubbles are observed escaping hydrate outcrops with an oil coating of varying thickness. Bubble oil and gas ratios are estimated using average bubble size and release rates. The relative vent age can be described by carbonate hardground cover, biological activity, and hydrate mound formation
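
    A minimal sketch of the bubble-quantification step: threshold a video frame, label connected components as bubbles, and convert counts to a volumetric release rate. The frame, threshold, bubble volume, and frame rate below are all hypothetical; the actual image-processing program is not described in the abstract.

```python
import numpy as np
from scipy import ndimage

# One synthetic, already-binarizable video frame with two "bubbles"
# (a hypothetical stand-in for the real vent imagery)
frame = np.zeros((64, 64))
frame[10:14, 10:14] = 1.0
frame[40:45, 30:34] = 1.0

# Threshold, then label connected components as individual bubbles
labels, n_bubbles = ndimage.label(frame > 0.5)

# Convert the per-frame count to a volumetric release rate, assuming each
# detection is a newly released bubble (hypothetical volume and frame rate)
mean_bubble_volume_ml = 0.05
fps = 30.0
release_rate_ml_s = n_bubbles * mean_bubble_volume_ml * fps
```

    Combined with an estimate of the oil-coating thickness on each bubble, counts and sizes of this kind are what allow per-vent oil and gas ratios to be estimated.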

  1. Implementation and adaptation of a macro-scale methodology to calculate direct economic losses

    Science.gov (United States)

    Natho, Stephanie; Thieken, Annegret

    2017-04-01

forestry sector. Furthermore, overheads are proposed to include the costs of housing contents as well as the overall costs of public infrastructure, one of the most important damage sectors. All constants representing sector-specific mean sizes or construction costs were adapted. Loss ratios were adapted for each event. Whereas the original UNISDR method over- and underestimates the losses of the tested events, the adapted method is able to calculate losses in good agreement with documentation for river floods, hail storms and storms. For example, for the 2013 flood, economic losses of EUR 6.3 billion were calculated (UNISDR: EUR 0.85 billion; documentation: EUR 11 billion). For the hail storms in 2013, the calculated EUR 3.6 billion overestimates the documented losses of EUR 2.7 billion by less than the original UNISDR approach's EUR 5.2 billion. Only for flash floods, where public infrastructure can account for more than 90% of total losses, is the method not applicable. The adapted methodology serves as a good starting point for macro-scale loss estimations by accounting for the most important damage sectors. By implementing this approach in damage and event documentation and reporting standards, consistent monitoring according to the SFDRR could be achieved.
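
    The sector-wise structure of such a macro-scale loss estimate reduces to simple arithmetic: exposed units × sector-specific mean size × construction cost × loss ratio, plus overheads for contents and public infrastructure. All figures below are hypothetical illustrations, not the constants of the UNISDR method or of this adaptation.

```python
# Hypothetical UNISDR-style direct-loss sketch for one damage sector (housing)
damaged_dwellings = 10_000          # event data (hypothetical)
mean_dwelling_area_m2 = 120.0       # sector-specific mean size (adapted constant)
construction_cost_eur_m2 = 1_600.0  # construction cost (adapted constant)
loss_ratio = 0.12                   # event-specific loss ratio

structural_loss = damaged_dwellings * mean_dwelling_area_m2 \
    * construction_cost_eur_m2 * loss_ratio

# Proposed overheads: housing contents and public infrastructure,
# expressed as shares of the structural loss (hypothetical shares)
content_overhead = 0.5
infrastructure_overhead = 0.3

total_loss = structural_loss * (1 + content_overhead + infrastructure_overhead)
```

    The flash-flood failure mode is visible in this structure: when public infrastructure dominates (over 90% of losses), no overhead proportional to building damage can recover the total.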

  2. Vertical propagation of baroclinic Kelvin waves along the west coast ...

    Indian Academy of Sciences (India)

    Second, baroclinic Kelvin waves generated in the Bay of Bengal at periods shorter than about 120 ... significant energy remains trapped to the Indian west coast. .... ary condition, enables us to isolate the response of the West India Coastal ...

  3. Local Dynamics of Baroclinic Waves in the Martian Atmosphere

    KAUST Repository

    Kavulich, Michael J.; Szunyogh, Istvan; Gyarmati, Gyorgyi; Wilson, R. John

    2013-01-01

    The paper investigates the processes that drive the spatiotemporal evolution of baroclinic transient waves in the Martian atmosphere by a simulation experiment with the Geophysical Fluid Dynamics Laboratory (GFDL) Mars general circulation model (GCM). The main diagnostic tool of the study is the (local) eddy kinetic energy equation. Results are shown for a prewinter season of the Northern Hemisphere, in which a deep baroclinic wave of zonal wavenumber 2 circles the planet at an eastward phase speed of about 70° Sol-1 (Sol is a Martian day). The regular structure of the wave gives the impression that the classical models of baroclinic instability, which describe the underlying process by a temporally unstable global wave (e.g., Eady model and Charney model), may have a direct relevance for the description of the Martian baroclinic waves. The results of the diagnostic calculations show, however, that while the Martian waves remain zonally global features at all times, there are large spatiotemporal changes in their amplitude. The most intense episodes of baroclinic energy conversion, which take place in the two great plain regions (Acidalia Planitia and Utopia Planitia), are strongly localized in both space and time. In addition, similar to the situation for terrestrial baroclinic waves, geopotential flux convergence plays an important role in the dynamics of the downstream-propagating unstable waves. © 2013 American Meteorological Society.

  5. Thermo-mechanical efficiency of the bimetallic strip heat engine at the macro-scale and micro-scale

    International Nuclear Information System (INIS)

    Arnaud, A; Boughaleb, J; Monfray, S; Boeuf, F; Skotnicki, T; Cugat, O

    2015-01-01

    Bimetallic strip heat engines are energy harvesters that exploit the thermo-mechanical properties of bistable bimetallic membranes to convert heat into mechanical energy. They thus represent a solution to transform low-grade heat into electrical energy if the bimetallic membrane is coupled with an electro-mechanical transducer. The simplicity of these devices allows us to consider their miniaturization using MEMS fabrication techniques. In order to design and optimize these devices at the macro-scale and micro-scale, this article proposes an explanation of the origin of the thermal snap-through by giving the expressions of the constitutive equations of composite beams. This allows us to evaluate the capability of bimetallic strips to convert heat into mechanical energy whatever their size is, and to give the theoretical thermo-mechanical efficiencies which can be obtained with these harvesters. (paper)
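The constitutive behaviour behind the thermal snap-through starts from the classical composite-beam result for a heated bimetallic strip. A sketch of Timoshenko's curvature formula with illustrative material values (the layer pairing and dimensions below are assumptions, not taken from the article):

```python
def bimetal_curvature(alpha1, alpha2, dT, t1, t2, E1, E2):
    """Curvature (1/m) of a heated two-layer strip, per Timoshenko's
    classical composite-beam result. alpha: thermal expansion (1/K),
    t: layer thickness (m), E: Young's modulus (Pa), dT: temperature rise (K)."""
    m = t1 / t2            # thickness ratio
    n = E1 / E2            # stiffness ratio
    h = t1 + t2            # total thickness
    num = 6.0 * (alpha2 - alpha1) * dT * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

# Hypothetical pairing: invar (low expansion) on brass, 100 um layers, +50 K
kappa = bimetal_curvature(alpha1=1.2e-6, alpha2=19e-6, dT=50.0,
                          t1=100e-6, t2=100e-6, E1=140e9, E2=100e9)
print(1.0 / kappa)  # radius of curvature in metres
```

For equal thicknesses and moduli the formula collapses to the familiar 3(α₂−α₁)ΔT/(2h), a useful sanity check.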

  6. Baroclinic multipole formation from heton interaction

    International Nuclear Information System (INIS)

    Sokolovskiy, Mikhail A; Carton, Xavier J

    2010-01-01

    In a two-layer quasi-geostrophic model, the interaction between two opposite-signed hetons (baroclinic vortex pairs) is studied analytically and numerically, for singular and finite-area vortices. For point vortices, using trilinear coordinates, it is shown that the possible evolutions depend on the deformation radius Rd: for large Rd, the layers decouple, vortices pair in each layer and their trajectories are open; for medium Rd, the exchange of opposite-sign partners between layers becomes possible; for small Rd, two other regimes appear: one where hetons remain unaltered during their evolution but follow open trajectories, and one where hetons occupy only a bounded subdomain of space at all times. Conditions for invariant co-rotation of the heton pair are derived and analyzed. Then, the nonlinear evolutions of finite-area heton pairs, with piecewise-constant vorticity, are computed with contour dynamics. When the central cyclonic vortex is initially aligned vertically, a transition occurs between three nonlinear regimes as layer coupling increases: for weak coupling, the vortices pair horizontally and drift away in opposite directions; for moderate layer coupling, the core vortex splits into two parts, one of which remains as a tilted columnar vortex at the center; for stronger layer coupling, each anticyclone pairs with part of the cyclone in each layer, thus forming an L-shaped dipole, a new coherent structure of two-layer flows. When the initial distance between the central and satellite vortices is increased, the velocity shear at the center decreases and the central vortex remains vertically aligned, thus forming a Z-shaped tripole, also a newly observed vortex compound. Such tripoles also compete with oscillating states, in which the core vortex periodically aligns and tilts, a regime observed when layer coupling is moderate and as vortices become closer in each layer. This Z-shaped tripole forms for various values of stratification and of initial

  7. Relationship between water quality and macro-scale parameters (land use, erosion, geology, and population density) in the Siminehrood River Basin.

    Science.gov (United States)

    Bostanmaneshrad, Farshid; Partani, Sadegh; Noori, Roohollah; Nachtnebel, Hans-Peter; Berndtsson, Ronny; Adamowski, Jan Franklin

    2018-10-15

    To date, few studies have investigated the simultaneous effects of macro-scale parameters (MSPs) such as land use, population density, geology, and erosion layers on micro-scale water quality variables (MSWQVs). This research focused on an evaluation of the relationship between MSPs and MSWQVs in the Siminehrood River Basin, Iran. In addition, we investigated the importance of water particle travel time (hydrological distance) on this relationship. The MSWQVs included 13 physicochemical and biochemical parameters observed at 15 stations during three seasons. Primary screening was performed by utilizing three multivariate statistical analyses (Pearson's correlation, cluster and discriminant analyses) in seven series of observed data. These series included three separate seasonal data, three two-season data, and aggregated three-season data for investigation of relationships between MSPs and MSWQVs. Coupled data (pairs of MSWQVs and MSPs) repeated in at least two out of three statistical analyses were selected for final screening. The primary screening results demonstrated significant relationships between land use and phosphorus, total solids and turbidity, erosion levels and electrical conductivity, and erosion and total solids. Furthermore, water particle travel time effects were considered through three geographical pattern definitions of distance for each MSP by using two weighting methods. To find effective MSP factors on MSWQVs, a multivariate linear regression analysis was employed. Then, preliminary equations that estimated MSWQVs were developed. The preliminary equations were modified to adaptive equations to obtain the final models. The final models indicated that a new metric, referred to as hydrological distance, provided better MSWQV estimation and water quality prediction compared to the National Sanitation Foundation Water Quality Index. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
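The primary screening step, pairing each MSP with each MSWQV, amounts in its simplest form to a significance-tested correlation. A toy sketch with synthetic data for 15 stations (the variable names, slope, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical station data: a macro-scale parameter (e.g. % agricultural land
# use per sub-basin) vs a micro-scale water quality variable (e.g. phosphorus)
land_use = rng.uniform(10, 80, size=15)                       # 15 stations
phosphorus = 0.02 * land_use + rng.normal(0.0, 0.1, size=15)  # signal + noise

r = np.corrcoef(land_use, phosphorus)[0, 1]
# two-tailed critical r for n = 15 at alpha = 0.05 (from t-distribution tables)
R_CRIT = 0.514
print(f"r = {r:.2f}; passes primary screening: {abs(r) > R_CRIT}")
```

Coupled pairs passing such a threshold in at least two of the three multivariate analyses would then proceed, per the abstract, to the final regression screening.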

  8. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    Science.gov (United States)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen on large scales considering single field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate the regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia a regression tree was calibrated and validated using the model data and results of excess nitrogen from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen by the regression tree model. Hence they had to be calculated and regionalized as well for the state of Thuringia. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees the excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculating the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time, without losing the detailed knowledge from the nitrogen transport modeling. This was validated with modeling results from Fink (2004) in a catchment lying in the regionalization area. The regionalized and the modeled excess nitrogen agree at a level of 94%. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
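The core of regression-tree regionalization is recursive variance-reduction splitting on predictors such as precipitation. GUIDE itself is a separate program; a minimal one-split sketch in plain NumPy, with invented data, shows the mechanism:

```python
import numpy as np

def best_split(x, y):
    """One step of regression-tree fitting: the threshold on predictor x that
    minimizes the summed squared error of piecewise-constant predictions."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best_threshold = np.inf, None
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_threshold = sse, 0.5 * (xs[i - 1] + xs[i])
    return best_threshold

# Hypothetical pattern: excess nitrogen (kg/ha) jumps once annual
# precipitation exceeds ~700 mm (e.g. more leaching)
precip = np.array([450, 520, 600, 660, 690, 710, 750, 820, 900, 980.0])
excess_n = np.array([12, 11, 13, 12, 13, 24, 26, 25, 27, 28.0])
print(best_split(precip, excess_n))  # → 700.0
```

A full tree applies this split recursively to each resulting subset; the calibrated tree can then be evaluated cheaply over every grid cell of the regionalization area, which is exactly what makes the approach fast.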

  9. Macro-Scale Patterns in Upwelling/Downwelling Activity at North American West Coast.

    Directory of Open Access Journals (Sweden)

    Romeo Saldívar-Lucio

    The seasonal and interannual variability of vertical transport (upwelling/downwelling) has been relatively well studied, mainly for the California Current System, including low-frequency changes and latitudinal heterogeneity. The aim of this work was to identify potentially predictable patterns in upwelling/downwelling activity along the North American west coast and discuss their plausible mechanisms. To this purpose we applied the min/max Autocorrelation Factor technique and time series analysis. We found that spatial co-variation of seawater vertical movements presents three dominant low-frequency signals in the range of 33, 19 and 11 years, resembling periodicities of: atmospheric circulation, nodal moon tides and solar activity. Those periodicities might be related to the variability of vertical transport through their influence on dominant wind patterns, the position/intensity of pressure centers and the strength of atmospheric circulation cells (wind stress). The low-frequency signals identified in upwelling/downwelling are coherent with temporal patterns previously reported at the study region: sea surface temperature along the Pacific coast of North America, catch fluctuations of anchovy Engraulis mordax and sardine Sardinops sagax, the Pacific Decadal Oscillation, changes in abundance and distribution of salmon populations, and variations in the position and intensity of the Aleutian low. Since the vertical transport is an oceanographic process with strong biological relevance, the recognition of its spatio-temporal patterns might allow for some reasonable forecasting capacity, potentially useful for marine resources management of the region.
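In the simplest case, recovering a decadal periodicity like the 19-year nodal signal reduces to locating a spectral peak. A sketch on synthetic data (the study itself used the min/max Autocorrelation Factor technique, not a plain FFT, and the series below is invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly "upwelling index": a 19-year cycle (cf. the nodal tide
# signal reported in the abstract) buried in noise
n = 1140                      # 95 years of monthly values
years = np.arange(n) / 12.0
index = np.sin(2 * np.pi * years / 19.0) + 0.5 * rng.standard_normal(n)

spec = np.abs(np.fft.rfft(index - index.mean()))
freqs = np.fft.rfftfreq(n, d=1 / 12.0)           # cycles per year
dominant_period = 1.0 / freqs[np.argmax(spec[1:]) + 1]
print(dominant_period)                            # close to 19 years
```

The 95-year record length is chosen so the 19-year signal falls exactly on a Fourier bin; with real, shorter records the peak spreads across neighbouring bins, one reason more robust pattern-extraction techniques such as MAF are preferred.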

  10. Modeling PM10 gravimetric data from the Qalabotjha low-smoke fuels macro-scale experiment in South Africa

    International Nuclear Information System (INIS)

    Engelbrecht, J.P.; Swanepoel, L.; Zunckel, M.; Chow, J.C.

    1998-01-01

    D-grade domestic coal is being widely used for household cooking and heating purposes by the poorer urban communities in South Africa. The smoke from the combustion of coal has had a severe impact on the health of communities living in the rural townships and cities. To alleviate this escalating problem, the Department of Minerals and Energy of South Africa evaluated low-smoke fuels as an alternative source of energy. The technical and social implications of such fuels were investigated in the course of the Qalabotjha Low-Smoke Fuels Macro-Scale Experiment. Three low-smoke fuels (Chartech, African Fine Carbon (AFC) and Flame Africa) were tested in Qalabotjha over a 10 to 20 day period. This paper presents results from a PM10 TEOM continuous monitor at the Clinic site in Qalabotjha over the mentioned monitoring period. Both the fuel-type and the wind were found to have an effect on the air particulate concentrations. An exponential model which incorporates both these variables is proposed. This model allows for all measured particulate concentrations to be re-calculated to zero wind values. From the analysis of variance (ANOVA) calculations on the zero wind concentrations, it is concluded that the combustion of low-smoke fuels did make a significant improvement to the air quality in Qalabotjha over the period when these were used
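The proposed exponential wind model and the zero-wind adjustment can be sketched as a log-linear fit. Synthetic data stand in for the TEOM record (the decay constant, baseline concentration, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PM10 record: concentration decays roughly exponentially with
# wind speed (dilution), C(w) = C0 * exp(-k * w)
wind = rng.uniform(0.0, 8.0, size=200)                            # m/s
pm10 = 180.0 * np.exp(-0.35 * wind) * rng.lognormal(0.0, 0.05, size=200)

# log-linear least squares recovers the decay rate and C0, after which every
# sample can be re-calculated to its zero-wind value
k, log_c0 = np.polyfit(wind, np.log(pm10), 1)    # k comes out negative
pm10_zero_wind = pm10 * np.exp(-k * wind)        # wind dependence removed
print(np.exp(log_c0))                            # ≈ 180, the zero-wind level
```

Re-expressing all measurements at zero wind is what makes the subsequent ANOVA comparison between fuel types fair, since windy and calm sampling days are no longer confounded with the fuel in use.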

  11. Meso-Scale Experimental & Numerical Studies for Predicting Macro-scale Performance of Advanced Reactive Materials (ARMs)

    Science.gov (United States)

    2015-04-01

    to full density at ~0.5 GPa, while the crush-up stress for flake-Ni+Al mixtures of similar size is ~2 GPa, and that of nano-sized Ni+Al was ~6 GPa... However, flake-Ni+Al mixtures reacted at the lowest threshold stress, due to the additional bending and buckling modes that cause the flake-Ni...volume, Sv, via the relationship, Sv=2PL. The aluminum powder compacts were affixed to the lapped end of a copper projectile (38.1mm high x 7.62mm

  12. Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.

    Science.gov (United States)

    Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric

    2016-04-01

    SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications on human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.

  13. Turbulence in Accretion Discs. The Global Baroclinic Instability

    Science.gov (United States)

    Klahr, Hubert; Bodenheimer, Peter

    The transport of angular momentum away from the central object is a sufficient condition for a protoplanetary disk to accrete matter onto the star and spin it down. Magnetic fields cannot be of importance for this process in a large part of the cold and dusty disk where the planets supposedly form. Our new hypothesis on the angular momentum transport based on radiation hydro simulations is as follows: We present the global baroclinic instability as a source for vigorous turbulence leading to angular momentum transport in Keplerian accretion disks. We show by analytical considerations and three-dimensional radiation hydro simulations that, in particular, protoplanetary disks have a negative radial entropy gradient, which makes them baroclinic. Two-dimensional numerical simulations show that this baroclinic flow is unstable and produces turbulence. These findings are currently tested for numerical effects by performing barotropic simulations which show that imposed turbulence rapidly decays. The turbulence in baroclinic disks draws energy from the background shear, transports angular momentum outward and creates a radially inward bound accretion of matter, thus forming a self-consistent process. Gravitational energy is transformed into turbulent kinetic energy, which is then dissipated, as in the classical accretion paradigm. We measure accretion rates in 2D and 3D simulations of Ṁ = −10⁻⁹ to −10⁻⁷ Msolar yr⁻¹ and viscosity parameters of α = 10⁻⁴–10⁻², which fit perfectly together and agree reasonably with observations. The turbulence creates pressure waves, Rossby waves, and vortices in the (r-φ) plane of the disk. We demonstrate in a global simulation that these vortices tend to form out of little background noise and to be long-lasting features, which have already been suggested to lead to the formation of planets.
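The claim that the measured accretion rates and α values "fit perfectly together" can be checked against the standard steady-disc scaling Ṁ ≈ 3πνΣ with turbulent viscosity ν = α·c_s·H (Shakura-Sunyaev). A back-of-envelope sketch; all disc numbers are illustrative values for a protoplanetary disk at a few AU, not taken from the simulations:

```python
import math

M_SUN = 1.989e30   # kg
YEAR = 3.156e7     # s

alpha = 1e-2       # viscosity parameter (upper end of the quoted range)
c_s = 700.0        # sound speed, m/s (cold outer disk)
H = 3.5e10         # disk scale height, m (~0.05 of 5 AU)
Sigma = 150.0      # gas surface density, kg/m^2

nu = alpha * c_s * H                  # turbulent viscosity, m^2/s
mdot = 3.0 * math.pi * nu * Sigma     # steady-state accretion rate, kg/s
mdot_msun_yr = mdot * YEAR / M_SUN
print(mdot_msun_yr)                   # a few 1e-9 Msolar/yr
```

The result lands inside the 10⁻⁹ to 10⁻⁷ Msolar yr⁻¹ window quoted in the abstract, which is the consistency the authors refer to.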

  14. Three-Dimensional Dynamics of Baroclinic Tides Over a Seamount

    Science.gov (United States)

    Vlasenko, Vasiliy; Stashchuk, Nataliya; Nimmo-Smith, W. Alex M.

    2018-02-01

    The Massachusetts Institute of Technology general circulation model is used for the analysis of baroclinic tides over Anton Dohrn Seamount (ADS), in the North Atlantic. The model output is validated against in situ data collected during the 136th cruise of the RRS "James Cook" in May-June 2016. The observational data set includes velocity time series recorded at two moorings as well as temperature, salinity, and velocity profiles collected at 22 hydrological stations. Synthesis of observational and model data enabled the reconstruction of the details of baroclinic tidal dynamics over ADS. It was found that the baroclinic tidal waves are generated in the form of tidal beams radiating from the ADS periphery to its center, focusing tidal energy in a surface layer over the seamount's summit. This energy focusing enhances subsurface water mixing and the local generation of internal waves. The tidal beams interacting with the seasonal pycnocline generate short-scale internal waves radiating from the ADS center. An important ecological outcome from this study concerns the pattern of residual currents generated by tides. The rectified flows over ADS have the form of a pair of dipoles, cyclonic and anticyclonic eddies located at the seamount's periphery. These eddies are potentially an important factor in local larvae dispersion and their escape from ADS.

  15. Classroom Demonstrations Of Atmosphere-ocean Dynamics: Baroclinic Instability

    Science.gov (United States)

    Aurnou, Jonathan; Nadiga, B. T.

    2008-09-01

    Here we will present simple hands-on experimental demonstrations that show how baroclinic instabilities develop in rotating fluid dynamical systems. Such instabilities are found in the Earth's oceans and atmosphere as well as in the atmospheres and oceans of planetary bodies throughout the solar system and beyond. Our inexpensive experimental apparatus consists of a vinyl-record player, a wide shallow pan, and a weighted, dyed block of ice. Most directly, these demonstrations can be used to explain winter-time atmospheric weather patterns observed in Earth's mid-latitudes.

  16. Conservation laws in baroclinic inertial-symmetric instabilities

    Science.gov (United States)

    Grisouard, Nicolas; Fox, Morgan B.; Nijjer, Japinder

    2017-04-01

    Submesoscale oceanic density fronts are structures in geostrophic and hydrostatic balance, but are more prone to instabilities than mesoscale flows. As a consequence, they are believed to play a large role in air-sea exchanges, near-surface turbulence and dissipation of kinetic energy of geostrophically and hydrostatically balanced flows. We will present two-dimensional (x, z) Boussinesq numerical experiments of submesoscale baroclinic fronts on the f-plane. Instabilities of the mixed inertial and symmetric types (the actual name varies across the literature) develop, with the absence of along-front variations prohibiting geostrophic baroclinic instabilities. Two new salient facts emerge. First, contrary to pure inertial and/or pure symmetric instability, the potential energy budget is affected, the mixed instability extracting significant available potential energy from the front and dissipating it locally. Second, in the submesoscale regime, the growth rate of this mixed instability is sufficiently large that significant radiation of near-inertial internal waves occurs. Although energetically small compared to e.g. local dissipation within the front, this process might be a significant source of near-inertial energy in the ocean.

  17. Evidence of Multimodal Structure of the Baroclinic Tide in the Strait of Gibraltar

    National Research Council Canada - National Science Library

    Vazquez, A; Stashchuk, N; Vlasenko, V; Bruno, M; Izquierdo, A; Gallacher, P. C

    2006-01-01

    … Analysis of the empirical orthogonal functions of the ADCP measurements performed over CS and model time series has shown that the second baroclinic mode predominates in the second type of internal wave. Its amplitude can reach one-third that of the first baroclinic mode of the leading waves of depression.

  18. The Anticipation of the ENSO: What Resonantly Forced Baroclinic Waves Can Teach Us (Part II)

    Directory of Open Access Journals (Sweden)

    Jean-Louis Pinault

    2018-06-01

    The purpose of the paper is to take advantage of recent work on the study of resonantly forced baroclinic waves in the tropical Pacific to significantly reduce systematic and random forecasting errors resulting from the current statistical models intended to predict El Niño. Their major drawback is that sea surface temperature (SST), which is widely used, is very difficult to decipher because of the extreme complexity of exchanges at the ocean-atmosphere interface. In contrast, El Niño-Southern Oscillation (ENSO) forecasting can be performed between 7 and 8 months in advance precisely and very simply from (1) the subsurface water temperature at particular locations and (2) the time lag of the events (their expected date of occurrence compared to a regular 4-year cycle). Discrimination of precursor signals from objective criteria prevents the anticipation of wrong events, as occurred in 2012 and 2014. The amplitude of the events, their date of appearance, as well as their potential impact on the involved regions are estimated. Three types of ENSO events characterize their climate impact according to whether they are (1) unlagged or weakly lagged, (2) strongly lagged, or (3) out of phase with the annual quasi-stationary wave (QSW) (Central Pacific El Niño events). This substantial progress is based on the analysis of baroclinic QSWs in the tropical basin and the resulting genesis of ENSO events. As for cold events, the amplification of La Niña can be seen a few months before the maturation phase of an El Niño event, as occurred in 1998 and 2016.

  19. The Effect of Barotropic and Baroclinic Tides on Coastal Stratification and Mixing

    Science.gov (United States)

    Suanda, S. H.; Feddersen, F.; Kumar, N.

    2017-12-01

    The effects of barotropic and baroclinic tides on subtidal stratification and vertical mixing are examined with high-resolution, three-dimensional numerical simulations of the Central Californian coastal upwelling region. A base simulation with realistic atmospheric and regional-scale boundary forcing but no tides (NT) is compared to two simulations with the addition of predominantly barotropic local tides (LT) and with combined barotropic and remotely generated, baroclinic tides (WT) with ≈100 W m⁻¹ onshore baroclinic energy flux. During a 10 day period of coastal upwelling when the domain volume-averaged temperature is similar in all three simulations, LT has little difference in subtidal temperature and stratification compared to NT. In contrast, the addition of remote baroclinic tides (WT) reduces the subtidal continental shelf stratification up to 50% relative to NT. Idealized simulations to isolate barotropic and baroclinic effects demonstrate that within a parameter space of typical U.S. West Coast continental shelf slopes, barotropic tidal currents, incident energy flux, and subtidal stratification, the dissipating baroclinic tide destroys stratification an order of magnitude faster than barotropic tides. In WT, the modeled vertical temperature diffusivity at the top (base) of the bottom (surface) boundary layer is increased up to 20 times relative to NT. Therefore, the width of the inner-shelf (region of surface and bottom boundary layer overlap) is increased approximately 4 times relative to NT. The change in stratification due to dissipating baroclinic tides is comparable to the magnitude of the observed seasonal cycle of stratification.

  20. A climatology based on reanalysis of baroclinic developmental regions in the extratropical northern hemisphere.

    Science.gov (United States)

    de la Torre, Laura; Nieto, Raquel; Noguerol, Marta; Añel, Juan Antonio; Gimeno, Luis

    2008-12-01

    Regions of the occurrence of different phenomena related to the development of baroclinic disturbances are reviewed for the Northern Hemisphere extratropics, using National Centers for Environmental Prediction/National Center for Atmospheric Research reanalysis data. The occurrence of height lows appears to be related to the orography near the earth's surface and with surface- and upper-air cyclogenesis in the upper troposphere. Over the cyclone tracks, the surface maxima appear to be trapped by land masses, whereas over the Mediterranean Sea they are located on the lee side of mountain ranges. The forcing terms of the geopotential tendency and omega equations mark the genesis (and, by the vorticity advection terms, the path) of the extratropical cyclones on the storm track. They occur mostly over the western coast of the oceans, beginning and having maxima on the lee side of the Rocky Mountains and the Tibetan Plateau. Their associated fronts form from the cold air coming from the continents and converging with the warm air over the Gulf and Kuroshio currents. Evident trends are found only for the Atlantic cyclone track (positive) and the Pacific cyclone track (negative) until the last decade when the tendency reverses. Over the southern Pacific, the number of fronts is lower during 1978-1997, coinciding with a period of strong El Niño Southern Oscillation episodes. This information is important for validating numerical models in order to predict changes associated with climate change and to study the behavior of extratropical cyclones and fronts.

  1. Process-oriented tests for validation of baroclinic shallow water models: The lock-exchange problem

    Science.gov (United States)

    Kolar, R. L.; Kibbey, T. C. G.; Szpilka, C. M.; Dresback, K. M.; Tromble, E. M.; Toohey, I. P.; Hoggan, J. L.; Atkinson, J. H.

    A first step often taken to validate prognostic baroclinic codes is a series of process-oriented tests, such as those suggested by Haidvogel and Beckmann [Haidvogel, D., Beckmann, A., 1999. Numerical Ocean Circulation Modeling. Imperial College Press, London], among others. One of these tests is the so-called "lock-exchange" test or "dam break" problem, wherein water of different densities is separated by a vertical barrier, which is removed at time zero. Validation against these tests has primarily consisted of comparing the propagation speed of the wave front, as predicted by various theoretical and experimental results, to model output. In addition, inter-model comparisons of the lock-exchange test have been used to validate codes. Herein, we present a high resolution data set, taken from a laboratory-scale model, for direct and quantitative comparison of experimental and numerical results throughout the domain, not just the wave front. Data is captured every 0.2 s using high resolution digital photography, with salt concentration extracted by comparing pixel intensity of the dyed fluid against calibration standards. Two scenarios are discussed in this paper, symmetric and asymmetric mixing, depending on the proportion of dense/light water (17.5 ppt/0.0 ppt) in the experiment; the Boussinesq approximation applies to both. Front speeds, cast in terms of the dimensionless Froude number, show excellent agreement with literature-reported values. Data are also used to quantify the degree of mixing, as measured by the front thickness, which also provides an error band on the front speed. Finally, experimental results are used to validate baroclinic enhancements to the barotropic shallow water ADvanced CIRCulation (ADCIRC) model, including the effect of the vertical mixing scheme on simulation results. Based on salinity data, the model provides an average root-mean-square (rms) error of 3.43 ppt for the symmetric case and 3.74 ppt for the asymmetric case, most of which can
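The front-speed comparison at the heart of these validations uses the gravity-current Froude number. A sketch with the paper's 17.5 ppt/0.0 ppt contrast; the densities, tank depth, and Fr value are rough assumptions, not the study's measurements:

```python
import math

def lock_exchange_front_speed(rho_dense, rho_light, depth, froude=0.5):
    """Gravity-current front speed U = Fr * sqrt(g' * H), with reduced gravity
    g' = g * (rho_dense - rho_light) / rho_light. Fr ≈ 0.5 is Benjamin's
    classical energy-conserving value for a half-depth current."""
    g_prime = 9.81 * (rho_dense - rho_light) / rho_light
    return froude * math.sqrt(g_prime * depth)

# Roughly 17.5 ppt salt water against fresh water in an assumed 0.2 m deep tank
rho_salt = 1013.0    # kg/m^3, approximate density at 17.5 ppt
rho_fresh = 999.0
print(lock_exchange_front_speed(rho_salt, rho_fresh, 0.2))  # m/s, order 0.1
```

Casting the measured speed as Fr = U/√(g′H) is what lets laboratory fronts of any size be compared against theory and against model output on one dimensionless axis.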

  2. Observation of baroclinic eddies southeast of Okinawa Island

    Institute of Scientific and Technical Information of China (English)

    PARK; Jae-Hun

    2008-01-01

    In the region southeast of Okinawa, during May to July 2001, a cyclonic and an anticyclonic eddy were observed from combined measurements of hydrocasts, an upward-looking moored acoustic Doppler current profiler (MADCP), pressure-recording inverted echo sounders (PIESs), satellite altimetry, and a coastal tide gauge. The hydrographic data showed that the lowest/highest temperature (T) and salinity (S) anomalies from a 13-year mean for the same season were respectively -3.0/+2.5℃ and -0.20/+0.15 psu at 380/500 dbar for the cyclonic/anticyclonic eddies. From the PIES data, using a gravest empirical mode method, we estimated time-varying surface dynamic height (D) anomaly referred to 2000 dbar changing from -20 to 30 cm, and time-varying T and S anomalies at 500 dbar ranging through about ±2℃ and ±0.2 psu, respectively. The passage of the eddies caused variations of both satellite-measured sea surface height anomaly (SSHA) and tide-gauge-measured sea level anomaly to change from about -20 to 30 cm, consistent with the D anomaly from the PIESs. Bottom pressure sensors measured no variation related to these eddy activities, which indicated that the two eddies were dominated by baroclinicity. Time series of SSHA map confirmed that the two eddies, originating from the North Pacific Subtropical Countercurrent region near 20°–30°N and 150°–160°E, traveled about 3000 km for about 18 months with mean westward propagation speed of about 6 cm/s, before arriving at the region southeast of Okinawa Island.

  3. Monitoring and assessment of soil erosion at micro-scale and macro-scale in forests affected by fire damage in northern Iran.

    Science.gov (United States)

    Akbarzadeh, Ali; Ghorbani-Dashtaki, Shoja; Naderi-Khorasgani, Mehdi; Kerry, Ruth; Taghizadeh-Mehrjardi, Ruhollah

    2016-12-01

    Understanding the occurrence of erosion processes at large scales is very difficult without studying them at small scales. In this study, soil erosion parameters were investigated at micro-scale and macro-scale in forests in northern Iran. Surface erosion and some vegetation attributes were measured at the watershed scale in 30 parcels of land which were separated into 15 fire-affected (burned) forests and 15 original (unburned) forests adjacent to the burned sites. The soil erodibility factor and splash erosion were also determined at the micro-plot scale within each burned and unburned site. Furthermore, soil sampling and infiltration studies were carried out at 80 other sites, as well as the 30 burned and unburned sites, (a total of 110 points) to create a map of the soil erodibility factor at the regional scale. Maps of topography, rainfall, and cover-management were also determined for the study area. The maps of erosion risk and erosion risk potential were finally prepared for the study area using the Revised Universal Soil Loss Equation (RUSLE) procedure. Results indicated that destruction of the protective cover of forested areas by fire had significant effects on splash erosion and the soil erodibility factor at the micro-plot scale and also on surface erosion, erosion risk, and erosion risk potential at the watershed scale. Moreover, the results showed that correlation coefficients between different variables at the micro-plot and watershed scales were positive and significant. Finally, assessment and monitoring of the erosion maps at the regional scale showed that the central and western parts of the study area were more susceptible to erosion compared with the western regions due to more intense crop-management, greater soil erodibility, and more rainfall. The relationships between erosion parameters and the most important vegetation attributes were also used to provide models with equations that were specific to the study region. The results of this
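The erosion-risk maps rest on the RUSLE product of six factors, A = R·K·LS·C·P. A sketch showing how loss of protective cover (the C factor) propagates through the estimate; every factor value below is an illustrative placeholder, not a measurement from this study:

```python
def rusle(R, K, LS, C, P):
    """Average annual soil loss A = R * K * LS * C * P.
    R: rainfall erosivity, K: soil erodibility, LS: slope length-steepness,
    C: cover-management, P: support practice (dimensionless)."""
    return R * K * LS * C * P

# Same hypothetical hillslope before and after fire: only C changes
burned = rusle(R=900, K=0.035, LS=2.4, C=0.25, P=1.0)    # protective cover lost
unburned = rusle(R=900, K=0.035, LS=2.4, C=0.01, P=1.0)  # intact forest canopy
print(burned, unburned)  # predicted loss rises by the C-factor ratio (25x here)
```

Because the factors multiply, a fire that leaves R, K, LS, and P unchanged still scales the predicted loss by the full ratio of C factors, which is why burned parcels dominate the risk maps.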

  4. Evaluation of Micro- and Macro-Scale Petrophysical Characteristics of Lower Cretaceous Sandstone with Flow Modeling in µ-CT Imaged Geometry

    Science.gov (United States)

    Katsman, R.; Haruzi, P.; Waldmann, N.; Halisch, M.

    2017-12-01

    In this study, petrophysical characteristics of rock samples from 3 successive outcrop layers of the Hatira Formation (Lower Cretaceous sandstone) in northern Israel were evaluated at micro- and macro-scales. The study was carried out by two complementary methods: conventional experimental measurements of porosity, pore size distribution and permeability; and 3D µ-CT imaging and modeling of single-phase flow in the real micro-scale sample geometry. The workflow included µ-CT scanning, image processing, image segmentation, and image analysis of the pore network, followed by fluid flow simulations at the pore scale. Upscaling the results of the micro-scale flow simulations yielded a macroscopic permeability tensor. Comparison of the upscaled and the experimentally measured rock properties demonstrated a reasonable agreement. In addition, geometrical (pore size distribution, surface area and tortuosity) and topological (Euler characteristic) characteristics of the grains and of the pore network were evaluated at the micro-scale. Statistical analyses of the samples for estimation of anisotropy and inhomogeneity of the porous media were conducted, and the results agree with the anisotropy and inhomogeneity of the upscaled permeability tensor. Isotropic pore orientation of the primary inter-granular porosity was identified in all three samples, whereas the characteristics of the secondary porosity were affected by precipitated cement and clay matrix within the primary pore network. Results of this study provide micro- and macro-scale characteristics of the Lower Cretaceous sandstone that is used in different places over the world as a reservoir for petroleum production.
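Two quantities central to this workflow are easy to sketch: porosity as the pore-voxel fraction of a segmented µ-CT volume, and permeability recovered from a flow simulation via Darcy's law. The volume below is synthetic and the function is a scalar simplification of the paper's tensor upscaling:

```python
import numpy as np

# Sketch (with synthetic data): porosity from a segmented micro-CT volume
# (True = pore voxel), and a scalar Darcy-law permeability estimate from
# simulated flow. The real study computes a full permeability tensor.

rng = np.random.default_rng(0)
pores = rng.random((20, 20, 20)) < 0.18   # ~18% pore voxels, synthetic

porosity = pores.mean()                   # pore fraction of the volume

def darcy_permeability(flux, viscosity, dP_dx):
    """k = q * mu / (dP/dx): permeability (m^2) from the mean Darcy flux q
    (m/s), dynamic viscosity mu (Pa s), and pressure gradient (Pa/m)."""
    return flux * viscosity / dP_dx
```

For example, a simulated flux of 1e-5 m/s of water (mu ≈ 1e-3 Pa s) under a 1e4 Pa/m gradient implies k = 1e-12 m^2, i.e. about 1 darcy.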

  5. On the sensitivities of idealized moist baroclinic waves to environmental temperature and moist convection

    Science.gov (United States)

    Kirshbaum, Daniel; Merlis, Timothy; Gyakum, John; McTaggart-Cowan, Ron

    2017-04-01

    The impact of cloud diabatic heating on baroclinic life cycles has been studied for decades, with the nearly universal finding that this heating enhances the system growth rate. However, few if any studies have systematically addressed the sensitivity of baroclinic waves to environmental temperature. For a given relative humidity, warmer atmospheres contain more moisture than colder atmospheres. They also are more prone to the development of deep moist convection, which is itself a major source of diabatic heating. Thus, it is reasonable to expect faster baroclinic wave growth in warmer systems. To address this question, this study performs idealized simulations of moist baroclinic waves in a periodic channel, using initial environments with identical relative humidities, dry stabilities, and dry available potential energies but varying environmental temperatures and moist instabilities. While the dry versions of these simulations exhibit virtually identical wave growth, the moist versions exhibit major differences in life cycle. Counter-intuitively, despite slightly faster initial wave growth, the warmer and moister waves ultimately develop into weaker baroclinic systems with an earlier onset of the decay phase. An energetics analysis reveals that the reduced wave amplitude in the warmer cases stems from a reduced transfer of available potential energy into eddy potential energy. This reduced energy transfer is associated with an unfavorable phasing of mid-to-upper-level thermal and vorticity anomalies, which limits the meridional heat flux.

  6. Linearized potential vorticity mode and its role in transition to baroclinic instability

    International Nuclear Information System (INIS)

    Pieri, Alexandre; Salhi, Aziz; Cambon, Claude; Godeferd, Fabien

    2011-01-01

    Stratified shear flows have been studied using Rapid Distortion Theory (RDT) and DNS. If this flow is in addition subjected to vertical rotation, a slaved horizontal stratification is forced and baroclinic instability can occur. In this context, the RDT analysis shows an extension of the unstable domain up to a Richardson number Ri of 1. This work is completed here with new results on the transition to baroclinic instability. In particular, the role of k_x ≈ 0 modes (small streamwise wavenumbers) and the importance of coupling with the potential vorticity mode u(Ω_pot) are shown to be decisive for dramatic transient growth at intermediate times.

  7. Snow cover setting-up dates in the north of Eurasia: relations and feedback to the macro-scale atmospheric circulation

    Directory of Open Access Journals (Sweden)

    V. V. Popova

    2014-01-01

    Variations of snow cover onset dates in 1950–2008, based on daily snow depth data collected at first-order meteorological stations of the former USSR and compiled at the Russian Institute of Hydrometeorological Information, are analyzed in order to reveal climatic norms, relations with macro-scale atmospheric circulation, and the influence of snow cover anomalies on the strengthening/weakening of the westerlies, using both observational data and simulations with the Planet Simulator model. Patterns of mean snow cover onset dates and their correlation with the temperature of the Northern Hemisphere extra-tropical land (Fig. 1) show that the most noticeable changes, observed in the last decade, are caused by the temperature trend since the 1990s. For most of the studied territory, variations of snow cover onset dates can be explained by circulation indices in terms of the Northern Hemisphere teleconnection patterns Scand, EA–WR, WP and NAO (Fig. 2); the roles of Scand and EA–WR (see Fig. 2, а, в, г) are the most significant. Changes of snow cover extent calculated from snow cover onset dates over the territory of Russia, and over its western and eastern parts, for the second decade of October (Fig. 3) demonstrate a significant difference in variability between the eastern and western regions. The eastern part shows essentially lower year-to-year and long-term variations, in contrast to the western part, which is characterized by high variance including long-term tendencies: an increase in the 1950s–70s and decreases in the 1970s–80s and during the last six years. Nevertheless, relations between snow cover anomalies and the Arctic Oscillation (AO) index appear to be significant only for the eastern part of the territory. At the same time, the negative linear correlation between snow extent and the AO index changes during 1950–2008 from statistically insignificant values (in 1950–70 and 1996–2008) to coefficient
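The correlations between snow-extent series and circulation indices that this abstract reports are straightforward Pearson coefficients. A sketch with synthetic stand-ins for the AO index and a snow-extent series (not the study's data):

```python
import numpy as np

# Sketch: Pearson correlation between a snow-extent series and a circulation
# index. Both series here are synthetic stand-ins for the AO/Scand/EA-WR
# indices and snow-extent records analyzed in the study.

rng = np.random.default_rng(2)
ao_index = rng.standard_normal(59)                      # 59 years, 1950-2008
snow_extent = -0.6 * ao_index + 0.8 * rng.standard_normal(59)  # built-in
                                                        # negative coupling

r = np.corrcoef(ao_index, snow_extent)[0, 1]            # Pearson coefficient
```

Because the synthetic snow series was constructed with a negative loading on the index, `r` comes out negative, the same sign relationship the abstract describes for the eastern part of the territory.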

  8. An examination of extratropical cyclone response to changes in baroclinicity and temperature in an idealized environment

    Science.gov (United States)

    Tierney, Gregory; Posselt, Derek J.; Booth, James F.

    2018-02-01

    The dynamics and precipitation in extratropical cyclones (ETCs) are known to be sensitive to changes in the cyclone environment, with increases in bulk water vapor and baroclinicity both leading to increases in storm strength and precipitation. Studies that demonstrate this sensitivity have commonly varied either the cyclone moisture or baroclinicity, but seldom both. In a changing climate, in which the near-surface equator-to-pole temperature gradient may weaken while the bulk water vapor content of the atmosphere increases, it is important to understand the relative response of ETC strength and precipitation to changes in both factors simultaneously. In this study, idealized simulations of ETC development are conducted in a moist environment using a model with a full suite of moist physics parameterizations. The bulk temperature (and water vapor content) and baroclinicity are systematically varied one at a time, then simultaneously, and the effect of these variations on the storm strength and precipitation is assessed. ETC intensity exhibits the well-documented response to changes in baroclinicity, with stronger ETCs forming in higher baroclinicity environments. However, increasing water vapor content produces non-monotonic changes in storm strength, in which storm intensity first increases with increasing environmental water vapor, then decreases above a threshold value. Examination of the storm geographic extent indicates cyclone size also decreases above a threshold value of bulk environmental temperature (and water vapor). Decrease in storm size is concomitant with an increase in the convective fraction of precipitation and a shift in the vertical distribution of latent heating. The results indicate the existence of at least two regimes for ETC development, each of which exhibits significantly different distributions of PV due to differences in timing and location of convective heating.
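The premise that warmer environments at fixed relative humidity carry more water vapor follows from the Clausius–Clapeyron relation, roughly a 7% increase in saturation vapor pressure per kelvin. A sketch using the empirical Magnus formula (the specific temperatures are illustrative, not values from the simulations):

```python
import math

# Saturation vapor pressure over water via the empirical Magnus formula,
# e_s in hPa for temperature in deg C. Illustrates the ~7%/K growth in
# moisture-holding capacity underlying the warmer, moister environments
# in the abstract. Temperatures below are illustrative only.

def saturation_vapor_pressure(T_celsius):
    return 6.112 * math.exp(17.62 * T_celsius / (243.12 + T_celsius))

e_cool = saturation_vapor_pressure(7.0)    # cooler environment
e_warm = saturation_vapor_pressure(12.0)   # 5 K warmer, same relative humidity
```

At the same relative humidity, the 5 K warmer environment holds roughly 40% more water vapor, consistent with ~7%/K compounding.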

  9. Some effects of horizontal discretization on linear baroclinic and symmetric instabilities

    Science.gov (United States)

    Barham, William; Bachman, Scott; Grooms, Ian

    2018-05-01

    The effects of horizontal discretization on linear baroclinic and symmetric instabilities are investigated by analyzing the behavior of the hydrostatic Eady problem in ocean models on the B and C grids. On the C grid a spurious baroclinic instability appears at small wavelengths. This instability does not disappear as the grid scale decreases; instead, it simply moves to smaller horizontal scales. The peak growth rate of the spurious instability is independent of the grid scale as the latter decreases. It is equal to cf/√(Ri), where Ri is the balanced Richardson number, f is the Coriolis parameter, and c is a nondimensional constant that depends on the Richardson number. As the Richardson number increases, c increases towards an upper bound of approximately 1/2; for large Richardson numbers the spurious instability is faster than the Eady instability. To suppress the spurious instability it is recommended to use fourth-order centered tracer advection along with biharmonic viscosity and diffusion with coefficients (Δx)^4 f/(32√(Ri)) or larger, where Δx is the grid scale. On the B grid, the growth rates of baroclinic and symmetric instabilities are too small, and converge upwards towards the correct values as the grid scale decreases; no spurious instabilities are observed. In B grid models at eddy-permitting resolution, the reduced growth rate of baroclinic instability may contribute to partially-resolved eddies being too weak. On the C grid the growth rate of symmetric instability is better (larger) than on the B grid, and converges upwards towards the correct value as the grid scale decreases.
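The two formulas quoted in the abstract are simple enough to evaluate directly. A sketch with illustrative parameter values (the specific dx, f and Ri below are not from the paper):

```python
import math

# The abstract's two formulas, evaluated for illustrative parameters:
#   recommended biharmonic coefficient:  nu4   = (dx)^4 * f / (32 * sqrt(Ri))
#   spurious C-grid peak growth rate:    sigma = c * f / sqrt(Ri), c <= ~1/2
# The values of dx, f and Ri below are typical-magnitude placeholders.

def biharmonic_coefficient(dx, f, Ri):
    return dx**4 * f / (32.0 * math.sqrt(Ri))

def spurious_growth_rate(f, Ri, c=0.5):
    return c * f / math.sqrt(Ri)

nu4 = biharmonic_coefficient(dx=1.0e3, f=1.0e-4, Ri=4.0)   # m^4 s^-1
sigma = spurious_growth_rate(f=1.0e-4, Ri=4.0)             # s^-1
```

For a 1 km grid with f = 1e-4 s^-1 and Ri = 4 this gives nu4 ≈ 1.6e6 m^4/s and a spurious growth rate of 2.5e-5 s^-1, i.e. an e-folding time of under half a day.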

  10. Assessing the vertical structure of baroclinic tidal currents in a global model

    Science.gov (United States)

    Timko, Patrick; Arbic, Brian; Scott, Robert

    2010-05-01

    Tidal forcing plays an important role in many aspects of oceanography. Mixing, transport of particulates and internal wave generation are just three examples of local phenomena that may depend on the strength of local tidal currents. Advances in satellite altimetry have made an assessment of the global barotropic tide possible. However, the vertical structure of the tide may only be observed by deployment of instruments throughout the water column. Typically these observations are conducted at pre-determined depths based upon the interest of the observer. The high cost of such observations often limits both the number and the length of the observations resulting in a limit to our knowledge of the vertical structure of tidal currents. One way to expand our insight into the baroclinic structure of the ocean is through the use of numerical models. We compare the vertical structure of the global baroclinic tidal velocities in 1/12 degree HYCOM (HYbrid Coordinate Ocean Model) to a global database of current meter records. The model output is a subset of a 5 year global simulation that resolves the eddying general circulation, barotropic tides and baroclinic tides using 32 vertical layers. The density structure within the simulation is both vertically and horizontally non-uniform. In addition to buoyancy forcing the model is forced by astronomical tides and winds. We estimate the dominant semi-diurnal (M2), and diurnal (K1) tidal constituents of the model data using classical harmonic analysis. In regions where current meter record coverage is adequate, the model skill in replicating the vertical structure of the dominant diurnal and semi-diurnal tidal currents is assessed based upon the strength, orientation and phase of the tidal ellipses. We also present a global estimate of the baroclinic tidal energy at fixed depths estimated from the model output.
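The "classical harmonic analysis" used to extract the M2 and K1 constituents amounts to a least-squares fit of sine/cosine pairs at the known tidal frequencies. A sketch on a synthetic current record (a real analysis would include more constituents and nodal corrections):

```python
import numpy as np

# Sketch of classical harmonic analysis: least-squares fit of M2 and K1
# cosine/sine pairs to a current-velocity time series. The series here is
# synthetic; real analyses use many constituents plus nodal corrections.

M2 = 2 * np.pi / 12.4206012    # M2 angular frequency, rad/hour
K1 = 2 * np.pi / 23.9344697    # K1 angular frequency, rad/hour

t = np.arange(0.0, 30 * 24, 1.0)                       # 30 days, hourly
u = 0.3 * np.cos(M2 * t - 0.7) + 0.1 * np.cos(K1 * t + 1.2)  # m/s

# Design matrix: [cos(M2 t), sin(M2 t), cos(K1 t), sin(K1 t)]
G = np.column_stack([np.cos(M2*t), np.sin(M2*t), np.cos(K1*t), np.sin(K1*t)])
coef, *_ = np.linalg.lstsq(G, u, rcond=None)

amp_M2 = np.hypot(coef[0], coef[1])    # recovered M2 amplitude
amp_K1 = np.hypot(coef[2], coef[3])    # recovered K1 amplitude
```

The fit recovers the imposed amplitudes (0.3 and 0.1 m/s) to within a fraction of a percent; applied to the u and v components together, the same coefficients yield the tidal ellipse parameters used for the model-data comparison.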

  11. The effect of baroclinicity on the wind in the planetary boundary layer

    DEFF Research Database (Denmark)

    Floors, Rogier Ralph; Peña, Alfredo; Gryning, Sven-Erik

    2015-01-01

    …close to zero and a standard deviation of approximately 3 m s⁻¹ km⁻¹. The geostrophic wind shear had a strong seasonal dependence because of temperature differences between land and sea. The mean wind profile in Hamburg, observed during an intensive campaign using radio sounding and during the whole year using the wind lidar, was influenced by baroclinicity. For easterly winds at Høvsøre, the estimated gradient wind decreased rapidly with height, resulting in a mean low-level jet. The turning of the wind in the boundary layer, the boundary-layer height and the empirical constants in the geostrophic drag…

  12. Dynamics of baroclinic wave pattern in transition zones between different flow regimes

    International Nuclear Information System (INIS)

    Larcher, Thomas von; Egbers, Christoph

    2005-01-01

    Baroclinic waves, both steady and time-dependent, are studied experimentally in a differentially heated rotating cylindrical gap with a free surface, cooled from within. Water is used as working fluid. We focus especially on transition zones between different flow regimes, where complex flow patterns like mixed-mode states are found. The transition from the steady wave regime to irregular flow is also of particular interest. The surface flow is observed with visualisation techniques. Velocity time series are measured with the optical laser-Doppler-velocimetry technique. Thermographic measurements are applied for temperature field visualisations

  13. Diagnosis of balanced and unbalanced motions in a synoptic-scale baroclinic wave life cycle

    International Nuclear Information System (INIS)

    Bush, A.B.G.; Peltier, W.R.; McWilliams, J.C.

    1994-01-01

    For numerical simulations of large scale dynamics, balanced models are attractive because their governing equations preclude gravity waves and one is thereby free to use a larger time step than is possible with a model governed by the primitive equations. Recent comparative studies have proven the so-called balance equations to be the most accurate of the intermediate models. In this particular study, a new set of balance equations is derived for a three-dimensional anelastic primitive equation simulation of a synoptic-scale baroclinic wave. Results indicate that both forms of imbalance, slow higher-order corrections and fast gravity wave motions, arise in the simulation. Investigations into the origin of these gravity waves reveal that the frontal slope near the time of occlusion decreases in the lower 2 kilometers to a value beyond compatibility with the vertical and horizontal resolution employed, and we conclude that the waves are numerically generated

  14. Low-order models of wave interactions in the transition to baroclinic chaos

    Directory of Open Access Journals (Sweden)

    W.-G. Früh

    1996-01-01

    A hierarchy of low-order models, based on the quasi-geostrophic two-layer model, is used to investigate complex multi-mode flows. The different models were used to study distinct types of nonlinear interactions, namely wave–wave interactions through resonant triads, and zonal flow–wave interactions. The coupling strength of individual triads is estimated using a phase-locking probability density function. The flow of primary interest is a strongly modulated amplitude vacillation, whose modulation is coupled to intermittent bursts of weaker wave modes. This flow was found to emerge in a discontinuous bifurcation directly from a steady wave solution. Two mechanisms were found to result in this flow, one involving resonant triads, and the other involving zonal flow–wave interactions together with a strong β-effect. The results will be compared with recent laboratory experiments of multi-mode baroclinic waves in a rotating annulus of fluid subjected to a horizontal temperature gradient.

  15. Atmospheric-like rotating annulus experiment: gravity wave emission from baroclinic jets

    Science.gov (United States)

    Rodda, Costanza; Borcia, Ion; Harlander, Uwe

    2017-04-01

    …agreement for the large-scale baroclinic wave regime. Moreover, in both cases a clear signal of horizontal divergence, embedded in the baroclinic wave front, appears, suggesting IGW emission.

  16. Scaling strength distributions in quasi-brittle materials from micro- to macro-scales: A computational approach to modeling Nature-inspired structural ceramics

    International Nuclear Information System (INIS)

    Genet, Martin; Couegnat, Guillaume; Tomsia, Antoni P.; Ritchie, Robert O.

    2014-01-01

    This paper presents an approach to predict the strength distribution of quasi-brittle materials across multiple length-scales, with emphasis on Nature-inspired ceramic structures. It permits the computation of the failure probability of any structure under any mechanical load, solely based on considerations of the microstructure and its failure properties by naturally incorporating the statistical and size-dependent aspects of failure. We overcome the intrinsic limitations of single periodic unit-based approaches by computing the successive failures of the material components and associated stress redistributions on arbitrary numbers of periodic units. For large size samples, the microscopic cells are replaced by a homogenized continuum with equivalent stochastic and damaged constitutive behavior. After establishing the predictive capabilities of the method, and illustrating its potential relevance to several engineering problems, we employ it in the study of the shape and scaling of strength distributions across differing length-scales for a particular quasi-brittle system. We find that the strength distributions display a Weibull form for samples of size approaching the periodic unit; however, these distributions become closer to normal with further increase in sample size before finally reverting to a Weibull form for macroscopic sized samples. In terms of scaling, we find that the weakest link scaling applies only to microscopic, and not macroscopic scale, samples. These findings are discussed in relation to failure patterns computed at different size-scales. (authors)
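The Weibull form and weakest-link scaling discussed here have a compact closed form: the failure probability of a volume V under stress σ is P_f = 1 − exp[−(V/V0)(σ/σ0)^m]. A sketch with hypothetical parameter values (σ0, m and the volumes are illustrative, not fitted to the paper's materials):

```python
import math

# Weakest-link (Weibull) scaling sketch. The failure probability of a sample
# of volume V under uniform stress sigma is
#     P_f = 1 - exp(-(V/V0) * (sigma/sigma0)^m)
# sigma0: reference strength, m: Weibull modulus, V0: reference volume.
# All parameter values below are hypothetical, for illustration only.

def weibull_failure_probability(sigma, V, sigma0=100.0, m=5.0, V0=1.0):
    return 1.0 - math.exp(-(V / V0) * (sigma / sigma0) ** m)

# Larger samples fail at lower stress: the classic weakest-link size effect,
# which the abstract finds holds only at microscopic scales.
p_small = weibull_failure_probability(sigma=80.0, V=1.0)
p_large = weibull_failure_probability(sigma=80.0, V=10.0)
```

At the same stress, the tenfold larger volume is far more likely to contain a critical flaw, which is exactly the size dependence whose breakdown at intermediate scales the paper investigates.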

  17. Estimates of the Attenuation Rates of Baroclinic Tidal Energy Caused by Resonant Interactions Among Internal Waves based on the Weak Turbulence Theory

    Science.gov (United States)

    Onuki, Y.; Hibiya, T.

    2016-02-01

    The baroclinic tides are thought to be the dominant energy source for turbulent mixing in the ocean interior. In contrast to the geography of the energy conversion rates from the barotropic to baroclinic tides, which has been clarified in recent numerical studies, the global distribution of the energy sink for the resulting low-mode baroclinic tides remains obscure. A key to resolving this issue is the resonant wave-wave interactions, which transfer part of the baroclinic tidal energy to the background internal wave field, enhancing the local energy dissipation rates. Recent field observations and numerical studies have pointed out that parametric subharmonic instability (PSI), one of the resonant interactions, causes a significant sink of baroclinic tidal energy at mid-latitudes. The purpose of this study is to analyze the quantitative aspect of PSI to demonstrate the global distribution of the intensity of resonant wave interactions, namely, the attenuation rate of low-mode baroclinic tidal energy. Our approach basically follows the weak turbulence theory, the standard theory for resonant wave-wave interactions, in which techniques of singular perturbation and statistical physics are employed. This study is, however, different from the classical theory in some points; we have reformulated the weak turbulence theory to be applicable to low-mode internal waves and also developed its numerical calculation method so that the effects of the stratification profile and oceanic total depth can be taken into account. We have calculated the attenuation rate of low-mode baroclinic tidal waves interacting with the background Garrett-Munk internal wave field. The calculated results clearly show the rapid attenuation of baroclinic tidal energy at mid-latitudes, in agreement with the results from field observations, and also show the zonal inhomogeneity of the attenuation rate caused by the density structures associated with the subtropical gyre.

  18. Baroclinic wave configurations evolution at European scale in the period 1948-2013

    Science.gov (United States)

    Carbunaru, Daniel; Burcea, Sorin; Carbunaru, Felicia

    2016-04-01

    The main aim of the study was to investigate the dynamic characteristics of synoptic configurations at the European scale, and especially in the south-eastern part of Europe, for the period 1948-2013. Using empirical orthogonal function analysis, simultaneously applied to the daily average geopotential field at different pressure levels (200 hPa, 300 hPa, 500 hPa and 850 hPa) during the warm (April-September) and cold (October-March) seasons, on a synoptic spatial domain centered on Europe (27.5° W to 45° E and 32.5° N to 72.5° N), the main mode of oscillation characteristic of the vertical shift of mean baroclinic waves was obtained. The analysis, applied independently to each of the 66 years, showed that the first eigenvectors describe about 60% of the data in warm periods and 40% of the data in the cold season for each year. In comparison, the secondary eigenvectors describe up to 20% and 10% of the data, respectively. Thus, the analysis was focused on the complex evolution of the first eigenvector over the 66 years, during the summer period. On average, this eigenvector describes a small vertical phase shift in the western part of the domain and a large one in the eastern part. Because the spatial extent of the considered synoptic domain incorporates in the west the AMO (Atlantic Multidecadal Oscillation) and NAO (North Atlantic Oscillation) oscillations, and in the north is sensitive to the AO (Arctic Oscillation), these three oscillations were considered as modulating dynamic factors at the hemispheric scale. The preliminary results show that in the summer seasons the AMO and NAO oscillations modulated the vertical phase shift of baroclinic waves in the west of the area (Northwestern Europe), and the relationship between the AO and NAO oscillations modulated the vertical phase shift in the southeast area (Southeast Europe). Second, we showed how this vertical phase shift modulates the overall behavior of cyclonic activity, particularly in Southeastern Europe.
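The EOF decomposition this study applies to geopotential fields is, computationally, an SVD of the time-by-space anomaly matrix; the squared singular values give the variance fraction each eigenvector explains. A sketch on a synthetic field (the grid, season length and imposed pattern are all invented for illustration):

```python
import numpy as np

# EOF analysis sketch via SVD: leading modes of a daily geopotential-anomaly
# field. The field here is synthetic (one imposed spatial pattern plus noise);
# the study applies the same decomposition to multi-level reanalysis fields.

rng = np.random.default_rng(1)
ntime, nspace = 180, 50                       # one warm season, flattened grid
pattern = np.sin(np.linspace(0.0, np.pi, nspace))   # imposed spatial mode
field = (np.outer(rng.standard_normal(ntime), pattern)
         + 0.1 * rng.standard_normal((ntime, nspace)))

anom = field - field.mean(axis=0)             # remove the time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)               # variance fraction per EOF

# Vt[0] is EOF1 (the dominant spatial pattern); U[:, 0] * s[0] is its
# principal-component time series.
```

Because the synthetic field contains one dominant pattern, EOF1 captures most of the variance here; in the study, the analogous leading eigenvector explains about 60% (warm season) and 40% (cold season) of the data.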

  19. Macro scale models for freight railroad terminals.

    Science.gov (United States)

    2016-03-02

    The project has developed a yard capacity model for macro-level analysis. The study considers the detailed sequencing and scheduling in classification yards and their impacts on yard capacity, simulates typical freight railroad terminals, and statistic...

  20. Variational energy principle for compressible, baroclinic flow. 2: Free-energy form of Hamilton's principle

    Science.gov (United States)

    Schmid, L. A.

    1977-01-01

    The first and second variations are calculated for the irreducible form of Hamilton's Principle that involves the minimum number of dependent variables necessary to describe the kinematics and thermodynamics of inviscid, compressible, baroclinic flow in a specified gravitational field. The form of the second variation shows that, in the neighborhood of a stationary point that corresponds to physically stable flow, the action integral is a complex saddle surface in parameter space. There exists a form of Hamilton's Principle for which a direct solution of a flow problem is possible. This second form is related to the first by a Friedrichs transformation of the thermodynamic variables. This introduces an extra dependent variable, but the first and second variations are shown to have direct physical significance, namely, they are equal to the free energy of fluctuations about the equilibrium flow that satisfies the equations of motion. If this equilibrium flow is physically stable, and if a very weak second order integral constraint on the correlation between the fluctuations of otherwise independent variables is satisfied, then the second variation of the action integral for this free energy form of Hamilton's Principle is positive-definite, so the action integral is a minimum, and can serve as the basis for a direct trial-and-error solution. The second order integral constraint states that the unavailable energy must be maximum at equilibrium, i.e. the fluctuations must be so correlated as to produce a second order decrease in the total unavailable energy.

  1. Impacts of Wind Stress Changes on the Global Heat Transport, Baroclinic Instability, and the Thermohaline Circulation

    Directory of Open Access Journals (Sweden)

    Jeferson Prietsch Machado

    2016-01-01

    The wind stress is a measure of momentum transfer due to the relative motion between the atmosphere and the ocean. This study aims to investigate the anomalous patterns of atmospheric and oceanic circulations due to a 50% increase in the wind stress over the equatorial region and the Southern Ocean. In this paper we use a coupled climate model of intermediate complexity (SPEEDO). The results show that the intensification of equatorial wind stress causes a decrease in sea surface temperature in the tropical region due to increased upwelling and evaporative cooling. On the other hand, the intensification of wind stress over the Southern Ocean induces a regional increase in the air and sea surface temperatures, which in turn leads to a reduction in Antarctic sea ice thickness. This occurs in association with changes in the global thermohaline circulation, strengthening the rate of Antarctic Bottom Water formation and weakening the North Atlantic Deep Water. Moreover, changes in the Southern Hemisphere thermal gradient lead to modified atmospheric and oceanic heat transports, reducing the storm tracks and baroclinic activity.

  2. Future changes in extratropical storm tracks and baroclinicity under climate change

    International Nuclear Information System (INIS)

    Lehmann, Jascha; Coumou, Dim; Frieler, Katja; Eliseev, Alexey V; Levermann, Anders

    2014-01-01

    The weather in Eurasia, Australia, and North and South America is largely controlled by the strength and position of extratropical storm tracks. Future climate change will likely affect these storm tracks and the associated transport of energy, momentum, and water vapour. Many recent studies have analyzed how storm tracks will change under climate change, and how these changes are related to atmospheric dynamics. However, there are still discrepancies between different studies on how storm tracks will change under future climate scenarios. Here, we show that under global warming the CMIP5 ensemble of coupled climate models projects only small relative changes in vertically averaged mid-latitude mean storm track activity during the northern winter, but agrees in projecting a substantial decrease during summer. Seasonal changes in the Southern Hemisphere show the opposite behaviour, with an intensification in winter and no change during summer. These distinct seasonal changes in northern summer and southern winter storm tracks lead to an amplified seasonal cycle in a future climate. Similar changes are seen in the mid-latitude mean Eady growth rate maximum, a measure that combines changes in vertical shear and static stability based on baroclinic instability theory. Regression analysis between changes in the storm tracks and changes in the maximum Eady growth rate reveals that most models agree in a positive association between the two quantities over mid-latitude regions. (letter)
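The maximum Eady growth rate used here as a baroclinicity measure has the standard form σ ≈ 0.31 (f/N) dU/dz, combining vertical shear and static stability. A sketch with typical mid-latitude magnitudes (the numbers are illustrative, not from the CMIP5 analysis):

```python
# Maximum Eady growth rate, sigma ~ 0.31 * (f/N) * dU/dz: the baroclinic-
# instability measure the abstract regresses storm-track changes against.
# f: Coriolis parameter (s^-1), N: buoyancy frequency (s^-1),
# dU/dz: vertical wind shear (s^-1). Values below are typical mid-latitude
# magnitudes, chosen for illustration only.

def eady_growth_rate(f, N, dU_dz):
    return 0.31 * (f / N) * dU_dz

sigma = eady_growth_rate(f=1.0e-4, N=1.0e-2, dU_dz=3.0e-3)   # s^-1
e_folding_days = 1.0 / sigma / 86400.0                       # growth timescale
```

These values give an e-folding time of roughly 1.2 days, the familiar synoptic growth timescale; weaker shear (a weaker equator-to-pole gradient) or stronger stratification both reduce σ.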

  3. The effect of topography on the evolution of unstable disturbances in a baroclinic atmosphere

    Science.gov (United States)

    Clark, J. H. E.

    1985-01-01

    A two-layer spectral quasi-geostrophic model is used to simulate the effects of topography on the equilibria, their stability, and the long term evolution of incipient unstable waves. The flow is forced by latitudinally dependent radiative heating. Dissipation is in the form of Rayleigh friction. An analytical solution is found for the propagating finite amplitude waves which result from baroclinic instability of the zonal winds when topography is absent. The appearance of this solution for wavelengths just longer than the Rossby radius of deformation and its disappearance at ultra-long wavelengths is interpreted in terms of Hopf bifurcation theory. Simple dynamic and thermodynamic criteria for the existence of periodic Rossby solutions are presented. A Floquet stability analysis shows that the waves are neutral. The nature of the form drag instability of high index equilibria is investigated. The proximity of the equilibrium shear to a resonant value is essential for the instability, provided the equilibrium occurs at a slightly stronger shear than resonance.

  4. Cross-Scale Baroclinic Simulation of the Effect of Channel Dredging in an Estuarine Setting

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2018-02-01

    Holistic simulation approaches are often required to assess human impacts on a river-estuary-coastal system, due to the intrinsically linked processes of contrasting spatial scales. In this paper, a Semi-implicit Cross-scale Hydroscience Integrated System Model (SCHISM) is applied in quantifying the impact of a proposed hydraulic engineering project on the estuarine hydrodynamics. The project involves channel dredging and land expansion that traverse several spatial scales on an ocean-estuary-river-tributary axis. SCHISM is suitable for this undertaking due to its flexible horizontal and vertical grid design and, more importantly, its efficient high-order implicit schemes applied in both the momentum and transport calculations. These techniques and their advantages are briefly described along with the model setup. The model features a mixed horizontal grid with quadrangles following the shipping channels and triangles resolving complex geometries elsewhere. The grid resolution ranges from ~6.3 km in the coastal ocean to 15 m in the project area. Even with this kind of extreme scale contrast, the baroclinic model still runs stably and accurately at a time step of 2 min, courtesy of the implicit schemes. We highlight that the implicit transport solver alone reduces the total computational cost by 82%, as compared to its explicit counterpart. The base model is shown to be well calibrated, and is then applied in simulating the proposed project scenario. The project-induced modifications of salinity intrusion, gravitational circulation, and transient events are quantified and analyzed.
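The payoff of implicit schemes on a grid with 400-fold resolution contrast is that the time step is no longer bound by the finest cell. A minimal sketch of the idea using backward-Euler 1D diffusion solved as a linear system (this is an illustration of implicit stability in general, not SCHISM's actual TVD/upwind transport scheme):

```python
import numpy as np

# Sketch of why an implicit transport step tolerates large time steps:
# backward-Euler 1D diffusion, solved as a tridiagonal-structured linear
# system, stays stable even when the explicit limit dt <= dx^2/(2*kappa)
# is exceeded by a factor of 20. Illustrative only; not SCHISM's scheme.

def implicit_diffusion_step(c, kappa, dt, dx):
    n = len(c)
    r = kappa * dt / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    A[0, 0] = A[-1, -1] = 1 + r        # no-flux boundary rows
    return np.linalg.solve(A, c)       # dense solve; a real model uses a
                                       # tridiagonal (Thomas) solver

c0 = np.zeros(50)
c0[25] = 1.0                           # initial tracer spike
c1 = implicit_diffusion_step(c0, kappa=1.0, dt=10.0, dx=1.0)   # dt >> 0.5
```

An explicit update at this dt would blow up; the implicit step instead damps the spike smoothly while conserving the tracer total, which is why the implicit solver lets the model hold a 2-minute step across 15 m cells.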

  5. Three-Dimensional Coupled NLS Equations for Envelope Gravity Solitary Waves in Baroclinic Atmosphere and Modulational Instability

    Directory of Open Access Journals (Sweden)

    Baojun Zhao

    2018-01-01

    Full Text Available Envelope gravity solitary waves are an important research topic in the field of solitary waves, and weakly nonlinear model equations are part of that research. Because of limitations in technique and theory, previous studies tried to reduce the number of variables and constructed two-dimensional models in a barotropic atmosphere, which could only describe propagation in a single direction. But for the propagation of envelope gravity solitary waves over real ocean ridges and atmospheric mountains, a three-dimensional model is more appropriate. Meanwhile, the baroclinicity of the atmosphere is also an unavoidable topic. In this paper, three-dimensional coupled nonlinear Schrödinger (CNLS) equations are presented to describe the evolution of envelope gravity solitary waves in a baroclinic atmosphere, derived from the basic dynamic equations by employing perturbation and multiscale methods. The model overcomes two disadvantages of earlier work: (1) the baroclinic problem and (2) the propagation-path problem. Then, based on the trial function method, we deduce the solution of the CNLS equations. Finally, modulational instability of wave trains is also discussed.

  6. Double-diffusive convection and baroclinic instability in a differentially heated and initially stratified rotating system: the barostrat instability

    Energy Technology Data Exchange (ETDEWEB)

    Vincze, Miklos; Borcia, Ion; Harlander, Uwe [Department of Aerodynamics and Fluid Mechanics, Brandenburg University of Technology (BTU) Cottbus-Senftenberg, Siemens-Halske-Ring 14, D-03046 Cottbus (Germany); Gal, Patrice Le, E-mail: vincze.m@lecso.elte.hu [Institut de Recherche sur les Phénomènes Hors Equilibre, CNRS—Aix-Marseille University—Ecole Centrale Marseille, 49 rue F. Joliot-Curie, F-13384 Marseille (France)

    2016-12-15

    A water-filled differentially heated rotating annulus with initially prepared stable vertical salinity profiles is studied in the laboratory. Based on two-dimensional horizontal particle image velocimetry data and infrared camera visualizations, we describe the appearance and the characteristics of the baroclinic instability in this original configuration. First, we show that when the salinity profile is linear and confined between two non-stratified layers at top and bottom, only two separate shallow fluid layers can be destabilized. These unstable layers appear near the top and the bottom of the tank, with a stratified motionless zone between them. This laboratory arrangement is thus particularly interesting for modeling geophysical or astrophysical situations where stratified regions are often juxtaposed with convective ones. Then, for more general but stable initial density profiles, statistical measures are introduced to quantify the extent of the baroclinic instability at given depths and to analyze the connections between this depth dependence and the vertical salinity profiles. We find that, although the presence of stable stratification generally hinders full-depth overturning, double-diffusive convection can lead to the development of multicellular sideways convection in shallow layers and subsequently to a multilayered baroclinic instability. Therefore, we conclude that by decreasing the characteristic vertical scale of the flow, stratification may even enhance the formation of cyclonic and anticyclonic eddies (and thus, mixing) in a local sense. (paper)

  7. Double-diffusive convection and baroclinic instability in a differentially heated and initially stratified rotating system: the barostrat instability

    International Nuclear Information System (INIS)

    Vincze, Miklos; Borcia, Ion; Harlander, Uwe; Gal, Patrice Le

    2016-01-01

    A water-filled differentially heated rotating annulus with initially prepared stable vertical salinity profiles is studied in the laboratory. Based on two-dimensional horizontal particle image velocimetry data and infrared camera visualizations, we describe the appearance and the characteristics of the baroclinic instability in this original configuration. First, we show that when the salinity profile is linear and confined between two non-stratified layers at top and bottom, only two separate shallow fluid layers can be destabilized. These unstable layers appear near the top and the bottom of the tank, with a stratified motionless zone between them. This laboratory arrangement is thus particularly interesting for modeling geophysical or astrophysical situations where stratified regions are often juxtaposed with convective ones. Then, for more general but stable initial density profiles, statistical measures are introduced to quantify the extent of the baroclinic instability at given depths and to analyze the connections between this depth dependence and the vertical salinity profiles. We find that, although the presence of stable stratification generally hinders full-depth overturning, double-diffusive convection can lead to the development of multicellular sideways convection in shallow layers and subsequently to a multilayered baroclinic instability. Therefore, we conclude that by decreasing the characteristic vertical scale of the flow, stratification may even enhance the formation of cyclonic and anticyclonic eddies (and thus, mixing) in a local sense. (paper)

  8. Macro-scale turbulence modelling for flows in porous media; Modelisation a l'echelle macroscopique d'un ecoulement turbulent au sein d'un milieu poreux

    Energy Technology Data Exchange (ETDEWEB)

    Pinson, F

    2006-03-15

    This work deals with the macroscopic modeling of turbulence in porous media. It concerns heat exchangers and nuclear reactors as well as urban flows, etc. The objective of this study is to describe in a homogenized way, by means of a spatial averaging operator, turbulent flows in a solid matrix. In addition to this first operator, a statistical averaging operator makes it possible to handle the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations for the kind of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at the macroscopic scale (Reynolds tensor and turbulent dispersion). To this end, we rely on the local modeling of turbulence, and more precisely on k-{epsilon} RANS models. The methodology of the dispersion study, derived from the volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at a microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even within the turbulent regime, dispersion remains one of the dominating phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominating role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A <k>f - <{epsilon}>f - <{epsilon}{sub w}>f model is derived, based on three balance equations: for the turbulent kinetic energy, the viscous dissipation, and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. This model is then

  9. Dynamic Transitions and Baroclinic Instability for 3D Continuously Stratified Boussinesq Flows

    Science.gov (United States)

    Şengül, Taylan; Wang, Shouhong

    2018-02-01

    The main objective of this article is to study the nonlinear stability and dynamic transitions of the basic (zonal) shear flows for the three-dimensional continuously stratified rotating Boussinesq model. The model equations are fundamental equations in geophysical fluid dynamics, and dynamics associated with their basic zonal shear flows play a crucial role in understanding many important geophysical fluid dynamical processes, such as the meridional overturning oceanic circulation and the geophysical baroclinic instability. In this paper, first we derive a threshold for the energy stability of the basic shear flow, and obtain a criterion for local nonlinear stability in terms of the critical horizontal wavenumbers and the system parameters such as the Froude number, the Rossby number, the Prandtl number and the strength of the shear flow. Next, we demonstrate that the system always undergoes a dynamic transition from the basic shear flow to either a spatiotemporal oscillatory pattern or a circle of steady states, as the shear strength of the basic flow crosses a critical threshold. Also, we show that the dynamic transition can be either continuous or catastrophic, and is dictated by the sign of a transition number, fully characterizing the nonlinear interactions of different modes. Both the critical shear strength and the transition number are functions of the system parameters. A systematic numerical study is carried out to explore transitions in different flow parameter regimes. In particular, our numerical investigations show the existence of a hypersurface which separates the parameter space into regions where the basic shear flow is stable and unstable. Numerical investigations also show that the selection of horizontal wave indices is determined only by the aspect ratio of the box. We find that the system admits only critical eigenmodes with roll patterns aligned with the x-axis. Furthermore, numerically we encountered continuous transitions to multiple

  10. The tropopause inversion layer in baroclinic life-cycle experiments: the role of diabatic processes

    Directory of Open Access Journals (Sweden)

    D. Kunkel

    2016-01-01

    Full Text Available Recent studies on the formation of a quasi-permanent layer of enhanced static stability above the thermal tropopause revealed the contributions of dynamical and radiative processes. Dry dynamics leads to the evolution of a tropopause inversion layer (TIL), which is, however, too weak compared to observations, and thus diabatic contributions are required. In this study we aim to assess the importance of diabatic processes in the understanding of TIL formation at midlatitudes. The non-hydrostatic model COSMO (COnsortium for Small-scale MOdelling) is applied in an idealized midlatitude channel configuration to simulate baroclinic life cycles. The effect of individual diabatic processes related to humidity, radiation, and turbulence is studied first, to estimate the contribution of each of these processes to TIL formation in addition to dry dynamics. In a second step these processes are included stepwise in the model to increase the complexity and finally estimate the relative importance of each process. The results suggest that including turbulence leads to a weaker TIL than in a dry reference simulation. In contrast, the TIL develops more strongly when radiation is included, but the temporal evolution is still comparable to the reference. Using various cloud schemes in the model shows that latent heat release and the consequent increased vertical motions foster an earlier and stronger appearance of the TIL than in all other life cycles. Furthermore, updrafts moisten the upper troposphere and as such increase the radiative effect from water vapor. In particular, this process becomes more relevant for maintaining the TIL during later stages of the life cycles. Increased convergence of the vertical wind induced by updrafts and by propagating inertia-gravity waves, which potentially dissipate, further contributes to the enhanced stability of the lower stratosphere. Finally, radiative feedback of ice clouds reaching up to the tropopause is identified to

  11. Hydrological and dynamical characterization of Meddies in the Azores region: A paradigm for baroclinic vortex dynamics

    Science.gov (United States)

    Tychensky, A.; Carton, X.

    1998-10-01

    The Structure des Echanges Mer-Atmosphere, Proprietes des Heterogeneites Oceaniques: Recherche Expérimentale (SEMAPHORE) oceanographic experiment surveyed a 500 × 500 km2 domain south of the Azores from June to November 1993 and collected hydrological data, float trajectories, and current meter recordings. These data revealed three intrathermocline eddies of Mediterranean water (Meddies), two of them being repeatedly sampled. Their hydrological and dynamical properties are quantified here by an isopycnic analysis. For the three Meddies, intense temperature and salinity anomalies (up to 4°C and 1.1 practical salinity units (psu)) are observed, extending vertically over up to 1000 m and centered around 1000 m depth. Horizontally, these anomalies spread out to radii of 50-60 km, while the maximum azimuthal velocities (30 cm s-1, as computed by geostrophy) lie only 35-40 km from the central axis. These Meddies followed curved trajectories, with drift velocities up to 7.5 cm s-1, under the influence of the neighboring mesoscale features (cyclonic vortices or Azores Current meanders). The three-dimensional structure of potential vorticity in and around these features reveals their complex interactions. Northwest of the domain, a Meddy was coupled to a subsurface anticyclone, forming an "aligned" vortex. It later interacted with the Azores Current, creating a large-amplitude northward meander by vertical alignment of vorticity. In the southeastern part of the domain, another Meddy was vertically aligned with an anticyclonic meander of the Azores Current and horizontally coupled with a cyclone of large vertical extent. These two features, as well as a small warm and salty fragment in their vicinity, seem to result from the southward crossing of the Meddy under the Azores Current. These observations illustrate previous theoretical studies of baroclinic vortex dynamics.
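A quick consistency check (not from the paper) on the quoted Meddy velocities: using the 30 cm s-1 maximum at a 35-40 km radius and an assumed latitude of ~34°N for the SEMAPHORE domain, the Rossby number comes out small, which supports the geostrophic velocity computation mentioned in the record:

```python
import math

Omega = 7.292e-5                               # Earth's rotation rate, s^-1
lat = 34.0                                     # assumed domain latitude, deg
f = 2.0 * Omega * math.sin(math.radians(lat))  # Coriolis parameter

V = 0.30       # max azimuthal velocity from the record, m/s
R = 37.5e3     # radius of maximum velocity (35-40 km in the record), m

Ro = V / (f * R)   # Rossby number; << 1 justifies geostrophic balance
print(f"Rossby number ~{Ro:.2f}")
```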

  12. Vertical Transport of Momentum by the Inertial-Gravity Internal Waves in a Baroclinic Current

    Directory of Open Access Journals (Sweden)

    A. A. Slepyshev

    2017-08-01

    Full Text Available When internal waves break, they are one of the sources of small-scale turbulence, which drives vertical exchange in the ocean. However, internal waves, with the Earth's rotation taken into account and in the presence of a vertically inhomogeneous two-dimensional current, are themselves able to contribute to the vertical transport. Free inertial-gravity internal waves in a baroclinic current in a boundless basin of constant depth are considered in the Boussinesq approximation. The linear boundary value problem for the vertical velocity amplitude of internal waves has complex coefficients when the current velocity component transverse to the wave propagation direction depends on the vertical coordinate (taking into account the rotation of the Earth). The eigenfunction and wave frequency are complex, and it is shown that weak wave damping takes place. The dispersion relation and the wave damping decrement are calculated in the linear approximation. At a fixed wavenumber the damping decrement of the second mode is larger (in absolute value) than that of the first mode. The equation for the vertical velocity amplitude for real profiles of the Brunt–Väisälä frequency and current velocity is solved numerically according to an implicit Adams scheme of third-order accuracy. The dispersion curves of the first two modes do not reach the inertial frequency in the low-frequency range due to the effect of critical layers, in which the Doppler-shifted wave frequency is equal to the inertial one. The second-mode dispersion curves terminate at a higher frequency than those of the first mode. At second order in the wave amplitude, the Stokes drift speed is determined. It is shown that the Stokes drift speed transverse to the wave propagation direction differs from zero if the transverse component of the current velocity depends on the vertical coordinate. In this case, the Stokes drift speed in the second mode is lower than
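For reference, the Brunt–Väisälä frequency entering such boundary value problems is computed from the vertical density gradient. A minimal sketch with an assumed two-level density profile (illustrative values, not the measured profiles used in the study):

```python
import math

g = 9.81           # m/s^2
rho0 = 1025.0      # reference density, kg/m^3
rho_upper = 1024.0 # assumed density at the upper level, kg/m^3
rho_lower = 1026.0 # assumed density at the lower level, kg/m^3
dz = 100.0         # vertical separation of the two levels, m

# N^2 = -(g/rho0) * d(rho)/dz, with z positive upward
N2 = (g / rho0) * (rho_lower - rho_upper) / dz
N = math.sqrt(N2)
print(f"Brunt-Vaisala frequency N ~{N:.4f} s^-1")
```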

  13. Baroclinic Instability in the Solar Tachocline for Continuous Vertical Profiles of Rotation, Effective Gravity, and Toroidal Field

    Energy Technology Data Exchange (ETDEWEB)

    Gilman, Peter A., E-mail: gilman@ucar.edu [High Altitude Observatory, National Center for Atmospheric Research, 3080 Center Green, Boulder, CO 80307-3000 (United States)

    2017-06-20

    We present results from an MHD model for baroclinic instability in the solar tachocline that includes rotation, effective gravity, and toroidal field that vary continuously with height. We solve the perturbation equations using a shooting method. Without toroidal fields, but with an effective gravity declining linearly from a maximum at the bottom to much smaller values at the top, we find instability at all latitudes except at the poles, at the equator, and where the vertical rotation gradient vanishes (32.3°), for longitude wavenumbers m from 1 to >10. High latitudes are much more unstable than low latitudes, but both have e-folding times that are much shorter than a sunspot cycle. The higher the m and the steeper the decline in effective gravity, the closer the unstable mode peaks to the top boundary, where the energy available to drive instability is greatest. The effect of the toroidal field is always stabilizing, shrinking the latitude ranges of instability as the toroidal field is increased. The larger the toroidal field, the smaller the longitudinal wavenumber of the most unstable disturbance. All latitudes become stable for a toroidal field exceeding about 4 kG. The results imply that baroclinic instability should occur in the tachocline at latitudes where the toroidal field is weak or is changing sign, but not where the field is strong.

  14. Baroclinic flows, transports, and kinematic properties in a cyclonic-anticyclonic-cyclonic ring triad in the Gulf of Mexico

    Science.gov (United States)

    Vidal, Víctor M. V.; Vidal, Francisco V.; Hernández, Abel F.; Meza, Eustorgio; Pérez-Molero, José M.

    1994-04-01

    During October-November 1986 the baroclinic circulation of the central and western Gulf of Mexico was dominated by an anticyclonic ring that was being bisected by two north and south flanking cyclonic rings. The baroclinic circulation revealed a well-defined cyclonic-anticyclonic-cyclonic triad system. The anticyclone's collision against the western gulf continental slope at 22.5°N, 97°W originated the north and south flanking cyclonic rings. The weakening of the anticyclone's relative vorticity during the collision was compensated by along-shelf north (26 cm s-1) and south (58 cm s-1) jet currents and by the anticyclone's flanking water mass's gain of cyclonic vorticity from lateral shear contributed by east (56 cm s-1) and west (42 cm s-1) current jets with individual mass transports of ˜18 Sv. Within the 0-1000 and 0-500 dbar layers and across 96°W the magnitudes of the colliding westward transports were 17.80 and 8.59 Sv, respectively. These corresponding transports were 85 and 94% balanced by along-shelf jet currents north and south of the anticyclone's collision zone. This indicates that only minor amounts [...] energy from the upper to the deeper water layers. Our vertical transport estimates through the 1000-m-depth surface revealed a net vertical descending transport of 0.4 Sv for the ring triad system. This mass flux occurred primarily within the south central gulf region and most likely constituted a principal mechanism that propelled the gulf's deep horizontal circulation. The volume renewal time is ˜5 years for the ring triad system within 0-1000 dbar. The volume renewal time for the gulf's deep water layer (2000-3000 dbar), estimated as a function of its horizontal outflowing mass flux (1.96 Sv), is of the same order of magnitude and reveals that the deeper layer of the Gulf of Mexico is as well ventilated as its upper layer (0-1000 dbar). The ring triad's surface kinematic properties were derived from the sea surface baroclinic circulation field
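The renewal-time bookkeeping behind the last figures is simply T = V/Q; inverting it with the quoted 1.96 Sv outflow and the ~5-year renewal time gives the implied deep-layer volume (a back-of-envelope sketch, not a value stated in the record):

```python
SV = 1.0e6                        # 1 Sverdrup = 1e6 m^3/s
Q = 1.96 * SV                     # deep-layer outflowing mass flux (record)
T_years = 5.0                     # renewal time of the same order (record)
seconds_per_year = 365.25 * 86400.0

# Renewal time T = V / Q  =>  implied volume V = Q * T
V_implied = Q * T_years * seconds_per_year
print(f"implied deep-layer volume ~{V_implied:.2e} m^3")
```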

  15. A Cross-Scale Model for 3D Baroclinic Circulation in Estuary-Plume-Shelf Systems. 2. Application to the Columbia River

    National Research Council Canada - National Science Library

    Baptista, Antonio M; Zhang, Yinglong; Chawla, Arun; Zulauf, Mike; Seaton, Charles; Myers, III, Edward P; Kindle, John; Wilkin, Michael; Burla, Michaela; Turner, Paul J

    2005-01-01

    This article is the second of a two-part paper on ELCIRC, an Eulerian-Lagrangian finite difference/finite volume model designed to simulate 3D baroclinic circulation across river-to-ocean scales. In part one (Zhang et al., 2004...

  16. Baroclinic instability of a symmetric, rotating, stratified flow: a study of the nonlinear stabilisation mechanisms in the presence of viscosity

    Directory of Open Access Journals (Sweden)

    R. Mantovani

    2002-01-01

    Full Text Available This paper presents the analysis of symmetric circulations of a rotating baroclinic flow, forced by a steady thermal wind and dissipated by Laplacian friction. The analysis is performed with numerical time-integration. Symmetric flows, vertically bounded by horizontal walls and subject to either periodic or vertical-wall lateral boundary conditions, are investigated in the region of parameter space where unstable small-amplitude modes evolve into stable stationary nonlinear solutions. The distribution of solutions in parameter space is analysed up to the threshold of chaotic behaviour, and the physical nature of the nonlinear interaction operating on the finite-amplitude unstable modes is investigated. In particular, analysis of time-dependent energy conversions allows understanding of the physical mechanisms operating from the initial phase of linear instability to the finite-amplitude stable state. Vertical shear of the basic flow is shown to play a direct role in injecting energy into the symmetric flow from the stage of linear growth onward. Dissipation proves essential not only in limiting the energy of linearly unstable modes, but also in selecting their dominant space scales in the finite-amplitude stage.

  17. Rotational Baroclinic Adjustment

    DEFF Research Database (Denmark)

    Holtegård Nielsen, Steen Morten

    to produce coastal currents flowing cyclonically through the Kattegat. Off the headland Skagen, the lightvessel observations together with earlier studies suggest that strong wind-driven currents are responsible for the location of the Kattegat/Skagerrak front in this area. Observations from the interior

  18. Coupled hygrothermal, electrochemical, and mechanical modelling for deterioration prediction in reinforced cementitious materials

    DEFF Research Database (Denmark)

    Michel, Alexander; Geiker, Mette Rica; Lepech, M.

    2017-01-01

    In this paper a coupled hygrothermal, electrochemical, and mechanical modelling approach for deterioration prediction in cementitious materials is briefly outlined. Deterioration prediction is thereby based on coupled modelling of (i) chemical processes including, among others, transport of heat and matter as well as phase assemblage on the nano and micro scale, (ii) corrosion of steel including electrochemical processes at the reinforcement surface, and (iii) material performance including corrosion- and load-induced damages on the meso and macro scale. The individual FEM models are fully coupled, i.e. information, such as corrosion current density, damage state of concrete cover, etc., is constantly exchanged between the models.

  19. Nonlinear Phenomena in Complex Systems: From Nano to Macro Scale

    CERN Document Server

    Stanley, H

    2014-01-01

    Topics of complex system physics and their interdisciplinary applications to different problems in seismology, biology, economy, sociology,  energy and nanotechnology are covered in this new work from renowned experts in their fields.  In  particular, contributed papers contain original results on network science, earthquake dynamics, econophysics, sociophysics, nanoscience and biological physics. Most of the papers use interdisciplinary approaches based on statistical physics, quantum physics and other topics of complex system physics.  Papers on econophysics and sociophysics are focussed on societal aspects of physics such as, opinion dynamics, public debates and financial and economic stability. This work will be of interest to statistical physicists, economists, biologists, seismologists and all scientists working in interdisciplinary topics of complexity.

  20. The Use of Pre-Storm Boundary-Layer Baroclinicity in Determining and Operationally Implementing the Atlantic Surface Cyclone Intensification Index

    Science.gov (United States)

    Cione, Joseph; Pietrafesa, Leonard J.

    The lateral motion of the Gulf Stream off the eastern seaboard of the United States during the winter season can act to dramatically enhance the low-level baroclinicity within the coastal zone during periods of offshore cold advection. The relatively close proximity of the Gulf Stream current off the mid-Atlantic coast can result in the rapid and intense destabilization of the marine atmospheric boundary layer directly above and shoreward of the Gulf Stream within this region. This airmass modification period often precedes either wintertime coastal cyclogenesis or the cyclonic re-development of existing mid-latitude cyclones. A climatological study investigating the relationship between the severity of the pre-storm cold advection period and subsequent cyclogenic intensification was undertaken by Cione et al. in 1993. Findings from this study illustrate that the thermal structure of the continental airmass, as well as the position of the Gulf Stream front relative to land during the pre-storm period (i.e., 24-48 h prior to the initial cyclonic intensification), are linked to the observed rate of surface cyclonic deepening for storms that either advected into or initially developed within the Carolina-southeast Virginia offshore coastal zone. It is a major objective of this research to test the potential operational utility of this pre-storm low-level baroclinic linkage to subsequent cyclogenesis in an actual National Weather Service (NWS) coastal winter storm forecast setting. The ability to produce coastal surface cyclone intensity forecasts recently became available to North Carolina State University researchers and NWS forecasters. This statistical forecast guidance utilizes regression relationships derived from a nine-season (January 1982-April 1990), 116-storm study conducted previously. During the period between February 1994 and February 1996, the Atlantic Surface Cyclone Intensification Index (ASCII) was successfully implemented in an operational setting by

  1. Phase synchronization of baroclinic waves in a differentially heated rotating annulus experiment subject to periodic forcing with a variable duty cycle.

    Science.gov (United States)

    Read, P L; Morice-Atkinson, X; Allen, E J; Castrejón-Pita, A A

    2017-12-01

    A series of laboratory experiments in a thermally driven, rotating fluid annulus are presented that investigate the onset and characteristics of phase synchronization and frequency entrainment between the intrinsic, chaotic, oscillatory amplitude modulation of travelling baroclinic waves and a periodic modulation of the (axisymmetric) thermal boundary conditions, subject to time-dependent coupling. The time-dependence is in the form of a prescribed duty cycle in which the periodic forcing of the boundary conditions is applied for only a fraction δ of each oscillation. For the rest of the oscillation, the boundary conditions are held fixed. Two profiles of forcing were investigated that capture different parts of the sinusoidal variation and δ was varied over the range 0.1≤δ≤1. Reducing δ was found to act in a similar way to a reduction in a constant coupling coefficient in reducing the width of the interval in forcing frequency or period over which complete synchronization was observed (the "Arnol'd tongue") with respect to the detuning, although for the strongest pulse-like forcing profile some degree of synchronization was discernible even at δ=0.1. Complete phase synchronization was obtained within the Arnol'd tongue itself, although the strength of the amplitude modulation of the baroclinic wave was not significantly affected. These experiments demonstrate a possible mechanism for intraseasonal and/or interannual "teleconnections" within the climate system of the Earth and other planets that does not rely on Rossby wave propagation across the planet along great circles.
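The duty-cycle effect described above can be sketched with a minimal Adler-type phase equation in which the coupling is switched on for only a fraction of each forcing cycle. This toy model (our construction, not the authors' apparatus) reproduces the qualitative result that a small duty fraction behaves like a weak constant coupling and destroys phase locking:

```python
import math

def final_phase(delta_omega, eps, duty, T_force=1.0, dt=1e-3, n_cycles=200):
    """Integrate dphi/dt = delta_omega - eps(t)*sin(phi), where the coupling
    eps(t) is active only for a fraction `duty` of each forcing cycle."""
    phi, t = 0.0, 0.0
    for _ in range(int(n_cycles * T_force / dt)):
        coupling = eps if (t % T_force) < duty * T_force else 0.0
        phi += (delta_omega - coupling * math.sin(phi)) * dt
        t += dt
    return phi

# Detuning 0.5 < coupling 1.0: permanent forcing (duty=1) phase-locks,
# while a 10% duty cycle lets the phase difference drift freely.
locked = final_phase(0.5, 1.0, duty=1.0)    # settles near arcsin(0.5)
drifting = final_phase(0.5, 1.0, duty=0.1)  # grows roughly linearly in time
print(f"locked: {locked:.2f} rad, drifting: {drifting:.2f} rad")
```

With full-time coupling the phase difference settles inside the Arnol'd tongue condition |Δω| < ε; with duty = 0.1 the effective coupling is too weak and the phase difference keeps growing.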

  2. The effect of unsteady and baroclinic forcing on predicted wind profiles in Large Eddy Simulations: Two case studies of the daytime atmospheric boundary layer

    DEFF Research Database (Denmark)

    Pedersen, Jesper Grønnegaard; Kelly, Mark C.; Gryning, Sven-Erik

    2013-01-01

    The applied domain-scale pressure gradient and its height- and time-dependence are estimated from LIDAR measurements of the wind speed above the atmospheric boundary layer in the Høvsøre case, and from radio soundings and a network of ground-based pressure sensors in the Hamburg case. In the two case studies ... -scale subsidence and advection tend to reduce agreement with measurements, relative to the Høvsøre case. The Hamburg case illustrates that measurements of the surface pressure gradient and relatively infrequent radio soundings alone are not sufficient for accurate estimation of a height- and time...

  3. Diurnal tidal currents attributed to free baroclinic coastal-trapped waves on the Pacific shelf off the southeastern coast of Hokkaido, Japan

    Science.gov (United States)

    Kuroda, Hiroshi; Kusaka, Akira; Isoda, Yutaka; Honda, Satoshi; Ito, Sayaka; Onitsuka, Toshihiro

    2018-04-01

    To understand the properties of tides and tidal currents on the Pacific shelf off the southeastern coast of Hokkaido, Japan, we analyzed time series from 9 current meters moored on the shelf for 1 month to 2 years. Diurnal tidal currents such as the K1 and O1 constituents were more dominant than semi-diurnal ones by an order of magnitude. The diurnal tidal currents clearly propagated westward along the coast with a typical phase velocity of 2 m s-1 and wavelength of 200 km. Moreover, the shape and phase of the diurnal currents measured by a bottom-mounted ADCP were vertically homogeneous, except in the vicinity of the bottom boundary layer. These features were very consistent with the theoretically estimated properties of free baroclinic coastal-trapped waves of the first mode. An annual (semi-annual) variation was apparent in the phase (amplitude) of the O1 tidal current, which was correlated with density stratification (the intensity of an along-shelf current called the Coastal Oyashio). These possible causes are discussed in terms of the propagation and generation of coastal-trapped waves.
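The quoted phase speed and wavelength imply a wave period in the diurnal band, which is the consistency check underlying the coastal-trapped-wave interpretation. A one-line verification, using the rounded values from the record:

```python
c = 2.0             # along-coast phase speed from the record, m/s
wavelength = 200e3  # typical wavelength from the record, m

period_h = wavelength / c / 3600.0  # wave period in hours
# ~28 h: the same order as the diurnal K1 (23.9 h) and O1 (25.8 h) periods
print(f"implied wave period ~{period_h:.1f} h")
```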

  4. On sharp vorticity gradients in elongating baroclinic eddies and their stabilization with a solid-body rotation

    Science.gov (United States)

    Sutyrin, Georgi G.

    2016-06-01

    Wide compensated vortices are not able to remain circular in idealized two-layer models unless the ocean depth is assumed to be unrealistically large. Small perturbations on both cyclonic and anticyclonic eddies grow more slowly if a middle layer with uniform potential vorticity (PV) is added, owing to a weakening of the vertical coupling between the upper and lower layers and a reduction of the PV gradient in the deep layer. Numerical simulations show that the nonlinear development of the most unstable elliptical mode causes self-elongation of the upper vortex core and splitting of the deep PV anomaly into two corotating parts. The emerging tripolar flow pattern in the lower layer results in self-intensification of the fluid rotation in the water column around the vortex center. Further vortex evolution depends on the model parameters and initial conditions, which limits predictability owing to the multiple equilibrium attractors existing in the dynamical system. The vortex core strips thin filaments, which roll up into submesoscale vortices, resulting in substantial mixing at the vortex periphery. Stirring and damping of vorticity by bottom friction are found to be essential for subsequent vortex stabilization. The development of sharp PV gradients leads to nearly solid-body rotation inside the vortex core and the formation of transport barriers at the vortex periphery. These processes have important implications for understanding the longevity of real-ocean eddies.

  5. Sensitivity analysis with regard to variations of physical forcing including two possible future hydrographic regimes for the Oeregrundsgrepen. A follow-up baroclinic 3D-model study

    International Nuclear Information System (INIS)

    Engqvist, A.; Andrejev, O.

    2000-02-01

    A sensitivity analysis with regard to variations of physical forcing has been performed using a 3D baroclinic model of the Oeregrundsgrepen area for a whole-year period with data pertaining to 1992. The results of these variations are compared to a nominal run with unaltered physical forcing. This nominal simulation is based on the experience gained in an earlier whole-year modelling of the same area; the difference is mainly that the present nominal simulation is run with identical parameters for the whole year. For reasons of computational economy it has been necessary to vary the time step between the month-long simulation periods. For all simulations with varied forcing, the same time step as for the nominal run has been used. The analysis also comprises the water turnover of a hypsographically defined subsection, the Bio Model area, located above the SFR depository. The external forcing factors that have been varied are the following (with their relative impact on the volume average of the retention time of the Bio Model area over one year given in parentheses): atmospheric temperature increased/reduced by 2.5 deg C (-0.1% resp. +0.6%), local freshwater discharge rate doubled/halved (-1.6% resp. +0.01%), salinity range at the border increased/reduced by a factor of 2 (-0.84% resp. 0.00%), and wind speed forcing reduced by 10% (+8.6%). The results of these simulations, at least the yearly averages, permit a reasonably direct physical explanation, while the detailed dynamics are naturally more intricate. Two additional full-year simulations of possible future hydrographic regimes have also been performed. The first mimics a hypothetical situation with permanent ice cover, which increases the average retention time by 87%. The second regime entails the future hypsography with its anticipated shoreline displacement due to an 11 m land rise by the year 4000 AD, which also considerably increases the average retention times for the two remaining layers of the

  6. General predictive model of friction behavior regimes for metal contacts based on the formation stability and evolution of nanocrystalline surface films.

    Energy Technology Data Exchange (ETDEWEB)

    Argibay, Nicolas [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Cheng, Shengfeng [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Sawyer, W. G. [Univ. of Florida, Gainesville, FL (United States); Michael, Joseph R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Chandross, Michael E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-09-01

    The prediction of macro-scale friction and wear behavior based on first principles and material properties has remained an elusive but highly desirable target for tribologists and material scientists alike. Stochastic processes (e.g. wear), statistically described parameters (e.g. surface topography) and their evolution tend to defeat attempts to establish practical general correlations between fundamental nanoscale processes and macro-scale behaviors. We present a model based on microstructural stability and evolution for the prediction of metal friction regimes, founded on recently established microstructural deformation mechanisms of nanocrystalline metals, that relies exclusively on material properties and contact stress models. We show through complementary experimental and simulation results that this model overcomes longstanding practical challenges and successfully makes accurate and consistent predictions of friction transitions for a wide range of contact conditions. This framework not only challenges the assumptions of conventional causal relationships between hardness and friction, and between friction and wear, but also suggests a pathway for the design of higher performance metal alloys.

  7. Chondrocyte deformations as a function of tibiofemoral joint loading predicted by a generalized high-throughput pipeline of multi-scale simulations.

    Directory of Open Access Journals (Sweden)

    Scott C Sibole

    Cells of the musculoskeletal system are known to respond to mechanical loading, and chondrocytes within the cartilage are no exception. However, understanding how joint-level loads relate to cell-level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint-mechanics-based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based (11 cell model of the same volume) representation of the middle zone of tibiofemoral cartilage. In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11 cell case, all cells experienced less deformation than the single cell case, and also exhibited a larger variance in deformation compared to other cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method's generalized nature also allows for substitution of any macro-scale
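Cellular deformation metrics of the kind named above can be computed from a cell's deformation gradient. A minimal sketch, assuming a hypothetical uniaxial deformation (the 20 % axial compression below is an illustrative example, not a pipeline result):

```python
import numpy as np

def cell_strain_metrics(F):
    """Deformation metrics for a cell given its deformation gradient F (3x3):
    volumetric strain J - 1 and maximum shear strain from the principal
    values of the Green-Lagrange strain tensor E = (F^T F - I) / 2."""
    J = np.linalg.det(F)                        # volume ratio
    E = 0.5 * (F.T @ F - np.eye(3))             # Green-Lagrange strain
    principal = np.sort(np.linalg.eigvalsh(E))  # principal strains, ascending
    return {
        "volumetric_strain": J - 1.0,
        "max_shear_strain": 0.5 * (principal[-1] - principal[0]),
    }

# Hypothetical chondrocyte deformation: 20 % axial shortening with
# 5 % lateral bulging (a volume-losing, roughly uniaxial compression)
F = np.diag([1.05, 1.05, 0.80])
m = cell_strain_metrics(F)
```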

  8. Chondrocyte Deformations as a Function of Tibiofemoral Joint Loading Predicted by a Generalized High-Throughput Pipeline of Multi-Scale Simulations

    Science.gov (United States)

    Sibole, Scott C.; Erdemir, Ahmet

    2012-01-01

    Cells of the musculoskeletal system are known to respond to mechanical loading and chondrocytes within the cartilage are not an exception. However, understanding how joint level loads relate to cell level deformations, e.g. in the cartilage, is not a straightforward task. In this study, a multi-scale analysis pipeline was implemented to post-process the results of a macro-scale finite element (FE) tibiofemoral joint model to provide joint mechanics based displacement boundary conditions to micro-scale cellular FE models of the cartilage, for the purpose of characterizing chondrocyte deformations in relation to tibiofemoral joint loading. It was possible to identify the load distribution within the knee among its tissue structures and ultimately within the cartilage among its extracellular matrix, pericellular environment and resident chondrocytes. Various cellular deformation metrics (aspect ratio change, volumetric strain, cellular effective strain and maximum shear strain) were calculated. To illustrate further utility of this multi-scale modeling pipeline, two micro-scale cartilage constructs were considered: an idealized single cell at the centroid of a 100×100×100 μm block commonly used in past research studies, and an anatomically based (11 cell model of the same volume) representation of the middle zone of tibiofemoral cartilage. In both cases, chondrocytes experienced amplified deformations compared to those at the macro-scale, predicted by simulating one body weight compressive loading on the tibiofemoral joint. In the 11 cell case, all cells experienced less deformation than the single cell case, and also exhibited a larger variance in deformation compared to other cells residing in the same block. The coupling method proved to be highly scalable due to micro-scale model independence that allowed for exploitation of distributed memory computing architecture. The method’s generalized nature also allows for substitution of any macro-scale and/or micro

  9. Probabilistic Fatigue Life Prediction of Bridge Cables Based on Multiscaling and Mesoscopic Fracture Mechanics

    Directory of Open Access Journals (Sweden)

    Zhongxiang Liu

    2016-04-01

    Fatigue fracture of bridge stay-cables is usually a multiscale process, as cracks grow from the micro-scale to the macro-scale, and the process is highly uncertain. In order to make a rational prediction of the residual life of bridge cables, a probabilistic fatigue approach is proposed, based on a comprehensive vehicle load model, finite element analysis, and multiscaling and mesoscopic fracture mechanics. Uncertainties in both material properties and external loads are considered. The proposed method is demonstrated through the fatigue life prediction of cables of the Runyang Cable-Stayed Bridge in China. It is found that cables along the bridge spans may have significantly different fatigue lives and that, due to the variability, some of them may have shorter lives than expected from the design.
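A probabilistic crack-growth life of this kind can be sketched by Monte Carlo sampling of a simple Paris-law model. This is not the paper's mesoscopic formulation, and every parameter value below (crack sizes, Paris constants, stress-range distribution) is a hypothetical stand-in:

```python
import math
import random

def paris_life(a0, ac, C, m, dsig, Y=1.0):
    """Cycles to grow a crack from depth a0 to ac (m) under the Paris law
    da/dN = C * dK**m, with dK = Y * dsig * sqrt(pi * a) in MPa*sqrt(m),
    integrated in closed form for m != 2."""
    k = C * (Y * dsig * math.sqrt(math.pi)) ** m
    p = 1.0 - m / 2.0
    return (ac ** p - a0 ** p) / (k * p)

random.seed(0)
# Hypothetical wire parameters: C (m/cycle per (MPa*sqrt(m))**m) and the
# stress range dsig (MPa) are sampled to represent material and
# traffic-load uncertainty.
lives = [
    paris_life(a0=1e-3, ac=1e-2,
               C=random.lognormvariate(math.log(3e-12), 0.3),
               m=3.0,
               dsig=random.gauss(100.0, 10.0))
    for _ in range(1000)
]
mean_life = sum(lives) / len(lives)  # cycles; the spread reflects variability
```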

  10. 一次冬季锋面暴风雪天气过程的斜压边界层特征的观测分析%Observational Analyses of Baroclinic Boundary Layer Characteristics during One Frontal Winter Snowstorm

    Institute of Scientific and Technical Information of China (English)

    许吟隆; 钱粉兰; 陈陟; 李诗明; 周明煜

    2002-01-01

    The evolution and characteristics of the baroclinic boundary layer during one frontal winter snowstorm were analyzed using the well-documented dataset from Intensive Observation Period (IOP) 17 of STORM-FEST. It is found that when the warm moist air was lifted across the front, the large amount of latent heat released by snowfall increased the frontal temperature contrast and intensified frontogenesis. The zig-zag section diagram of potential temperature shows that as frontogenesis strengthened, a cold trough formed and both a low-level jet (LLJ) and an upper-level jet (ULJ) emerged ahead of the front. In the strongest stage of frontogenesis, the frontal contrast of potential temperature across the cold trough reached as high as 20 K. Thereafter the LLJ ahead of the front tended to weaken and the LLJ behind the front tended to strengthen. The frontal circulation system was dominated by cold air advection behind the front, which transported cold air forward into the warm area ahead of the front, weakening the cold trough until frontolysis finally occurred. Analyses of the turbulent characteristics of the frontal baroclinic boundary layer show that the vertical wind shear above the boundary layer was very large, and the pumping of the strong wind shear in the turbulent energy budget kept the characteristic variables within the PBL well mixed. Sufficient moisture carried by southerly flow from the Gulf of Mexico and the strong baroclinity of the frontal boundary layer played key roles in this frontal winter snowstorm, and the large-scale ULJ behind the cold front was also advantageous to the development of the convective boundary layer.

  11. A Cellular Automaton / Finite Element model for predicting grain texture development in galvanized coatings

    Science.gov (United States)

    Guillemot, G.; Avettand-Fènoël, M.-N.; Iosta, A.; Foct, J.

    2011-01-01

    The hot-dip galvanizing process is a widely used and efficient way to protect steel from corrosion. We propose to control the microstructure of zinc grains by investigating the relevant process parameters. In order to improve the texture of this coating, we model the grain nucleation and growth processes and simulate the development of the solid zinc phase. A coupled model has been applied with this aim, improving a previous two-dimensional model of the solidification process. It couples a cellular automaton (CA) approach and a finite element (FE) method, with the CA grid and FE mesh superimposed on the same domain. Grain development is simulated at the micro-scale on the CA grid. A nucleation law is defined using a Gaussian probability and a random set of nucleating cells, each assigned a crystallographic orientation through a choice of Euler angles (Ψ, θ, φ). A small growing shape is then associated with each cell in the mushy domain, and a dendrite-tip kinetics is defined using the model of Kurz [2]. The six directions of the basal plane and the two perpendicular directions develop in each mushy cell. During each time step, cell temperature and solid fraction are determined at the micro-scale using the enthalpy conservation relation, and the variations are reassigned at the macro-scale. This coupling scheme makes it possible to simulate the three-dimensional growth kinetics of zinc grains in a two-dimensional approach. Grain structure evolutions for various cooling times have been simulated, and the final grain structure has been compared to EBSD measurements. We show that the preferential growth of dendrite arms in the basal plane of zinc grains is correctly predicted. The described coupling scheme could be applied to simulate other products or manufacturing processes; it constitutes an approach gathering both micro- and macro-scale models.
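The nucleate-and-capture idea behind such a cellular automaton can be sketched in a few lines. This toy version uses isotropic neighbour capture and uniform random orientations; the paper's Gaussian nucleation law, dendrite-tip kinetics and FE thermal coupling are all omitted:

```python
import random

random.seed(1)
N, STEPS = 40, 40
grid = [[0] * N for _ in range(N)]  # 0 = liquid, >0 = grain id
orientations = {}                   # grain id -> Euler angles (degrees)

# Nucleation: a random set of cells, each given a random orientation
for gid in range(1, 9):
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] = gid
    orientations[gid] = tuple(random.uniform(0, 360) for _ in range(3))

# Growth: each step, a liquid cell is captured by a random solid 4-neighbour
for _ in range(STEPS):
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            if grid[i][j] == 0:
                nbrs = [grid[x][y]
                        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < N and 0 <= y < N and grid[x][y]]
                if nbrs:
                    new[i][j] = random.choice(nbrs)
    grid = new

solid_fraction = sum(c > 0 for row in grid for c in row) / N**2
```

Running the capture loop to completion yields a fully solid grain map whose grain-boundary network is the quantity compared against EBSD in the paper.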

  12. Predicting the constitutive behavior of semi-solids via a direct finite element simulation: application to AA5182

    Science.gov (United States)

    Phillion, A. B.; Cockcroft, S. L.; Lee, P. D.

    2009-07-01

    The methodology of direct finite element (FE) simulation was used to predict the semi-solid constitutive behavior of an industrially important aluminum-magnesium alloy, AA5182. Model microstructures were generated that detail key features of the as-cast semi-solid: equiaxed-globular grains of random size and shape, interconnected liquid films, and pores at the triple-junctions. Based on the results of over fifty different simulations, a model-based constitutive relationship which includes the effects of the key microstructure features—fraction solid, grain size and fraction porosity—was derived using regression analysis. This novel constitutive equation was then validated via comparison with both the FE simulations and experimental stress/strain data. Such an equation can now be used to incorporate the effects of microstructure on the bulk semi-solid flow stress within a macro-scale process model.

  13. Predicting the constitutive behavior of semi-solids via a direct finite element simulation: application to AA5182

    International Nuclear Information System (INIS)

    Phillion, A B; Cockcroft, S L; Lee, P D

    2009-01-01

    The methodology of direct finite element (FE) simulation was used to predict the semi-solid constitutive behavior of an industrially important aluminum-magnesium alloy, AA5182. Model microstructures were generated that detail key features of the as-cast semi-solid: equiaxed-globular grains of random size and shape, interconnected liquid films, and pores at the triple-junctions. Based on the results of over fifty different simulations, a model-based constitutive relationship which includes the effects of the key microstructure features—fraction solid, grain size and fraction porosity—was derived using regression analysis. This novel constitutive equation was then validated via comparison with both the FE simulations and experimental stress/strain data. Such an equation can now be used to incorporate the effects of microstructure on the bulk semi-solid flow stress within a macro-scale process model.
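The regression step can be illustrated with ordinary least squares on synthetic "simulation" results. The linear functional form, coefficient values and noise level below are hypothetical stand-ins, not the fitted AA5182 relationship:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic results: flow stress sigma (MPa) as a function of fraction
# solid fs, grain size d (um) and fraction porosity fp, over ranges loosely
# typical of a semi-solid; the coefficients are invented for illustration.
n = 50
fs = rng.uniform(0.75, 0.98, n)
d = rng.uniform(50.0, 200.0, n)
fp = rng.uniform(0.0, 0.05, n)
true = np.array([-4.0, 6.0, 0.002, -20.0])  # c0, c1, c2, c3 (hypothetical)
sigma = true[0] + true[1] * fs + true[2] * d + true[3] * fp \
        + rng.normal(0.0, 0.01, n)          # small "simulation scatter"

# Regression analysis: ordinary least squares on the design matrix
X = np.column_stack([np.ones(n), fs, d, fp])
coef, *_ = np.linalg.lstsq(X, sigma, rcond=None)
```

With fifty samples and small scatter, the recovered `coef` reproduces the generating coefficients closely, which is the sanity check one would also apply to a fitted constitutive relationship.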

  14. Aggregation of evaporative fraction by remote sensing from micro to macro scale

    NARCIS (Netherlands)

    Bastiaanssen, W.G.M.; Pelgrum, H.; Wal, van der T.; Roebeling, R.A.

    1996-01-01

    The evaporative fraction of the surface energy balance has been favoured as a tool to describe the energy partitioning during daytime. It is shown that the evaporative fraction remains temporally stable under heterogeneous terrain conditions in the Echival Field Experiment in
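The evaporative fraction itself is the share of the turbulent energy flux used for evapotranspiration, EF = LE / (LE + H). A minimal sketch with hypothetical midday flux values:

```python
def evaporative_fraction(latent_heat_flux, sensible_heat_flux):
    """Evaporative fraction EF = LE / (LE + H): the fraction of the
    turbulent heat flux consumed by evapotranspiration."""
    return latent_heat_flux / (latent_heat_flux + sensible_heat_flux)

# Hypothetical midday fluxes in W m-2 over a moist surface
ef = evaporative_fraction(latent_heat_flux=300.0, sensible_heat_flux=100.0)
# ef == 0.75
```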

  15. Deriving micro- to macro-scale seismic velocities from ice-core c axis orientations

    Science.gov (United States)

    Kerch, Johanna; Diez, Anja; Weikusat, Ilka; Eisen, Olaf

    2018-05-01

    One of the great challenges in glaciology is the ability to estimate the bulk ice anisotropy in ice sheets and glaciers, which is needed to improve our understanding of ice-sheet dynamics. We investigate the effect of crystal anisotropy on seismic velocities in glacier ice and revisit the framework which is based on fabric eigenvalues to derive approximate seismic velocities by exploiting the assumed symmetry. In contrast to previous studies, we calculate the seismic velocities using the exact c axis angles describing the orientations of the crystal ensemble in an ice-core sample. We apply this approach to fabric data sets from an alpine and a polar ice core. Our results provide a quantitative evaluation of the earlier approximative eigenvalue framework. For near-vertical incidence our results differ by up to 135 m s-1 for P-wave and 200 m s-1 for S-wave velocity compared to the earlier framework (estimated 1 % difference in average P-wave velocity at the bedrock for the short alpine ice core). We quantify the influence of shear-wave splitting at the bedrock as 45 m s-1 for the alpine ice core and 59 m s-1 for the polar ice core. At non-vertical incidence we obtain differences of up to 185 m s-1 for P-wave and 280 m s-1 for S-wave velocities. Additionally, our findings highlight the variation in seismic velocity at non-vertical incidence as a function of the horizontal azimuth of the seismic plane, which can be significant for non-symmetric orientation distributions and results in a strong azimuth-dependent shear-wave splitting of max. 281 m s-1 at some depths. For a given incidence angle and depth we estimated changes in phase velocity of almost 200 m s-1 for P wave and more than 200 m s-1 for S wave and shear-wave splitting under a rotating seismic plane. We assess for the first time the change in seismic anisotropy that can be expected on a short spatial (vertical) scale in a glacier due to strong variability in crystal-orientation fabric (±50 m s-1 per 10 cm). Our investigation of seismic anisotropy based on ice-core data contributes to advancing the interpretation of seismic data, with respect to extracting bulk information about crystal anisotropy, without having to drill an ice core and with special regard to future applications employing ultrasonic sounding.
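For a single ice crystal, which is transversely isotropic about its c axis, the quasi-P phase velocity follows from the Christoffel equation, and a bulk estimate for a sample can be formed by averaging over the measured c-axis angles. A simplified sketch, assuming a Voigt-style velocity average, representative (approximate) elastic constants for ice Ih, and a hypothetical list of c-axis angles:

```python
import math

# Approximate single-crystal elastic constants for ice Ih (Pa) and
# density (kg m-3); representative literature-style values, not data
# from the study above.
C11, C33, C44, C13 = 13.93e9, 15.01e9, 3.01e9, 5.77e9
RHO = 917.0

def vp_single_crystal(theta):
    """Quasi-P phase velocity for propagation at angle theta (rad) to the
    c axis of a transversely isotropic crystal (Christoffel solution)."""
    s2, c2 = math.sin(theta) ** 2, math.cos(theta) ** 2
    M = ((C11 - C44) * s2 - (C33 - C44) * c2) ** 2 \
        + (C13 + C44) ** 2 * (2.0 * math.sin(theta) * math.cos(theta)) ** 2
    return math.sqrt((C11 * s2 + C33 * c2 + C44 + math.sqrt(M)) / (2.0 * RHO))

# Bulk estimate for vertical propagation: average the phase velocity over
# the sample's c-axis tilt angles (hypothetical near-vertical fabric).
c_axis_angles = [0.05, 0.10, 0.20, 0.15, 0.30]  # rad, from vertical
v_bulk = sum(vp_single_crystal(t) for t in c_axis_angles) / len(c_axis_angles)
```

Along the c axis the expression collapses to sqrt(C33/ρ) ≈ 4.05 km/s and perpendicular to it to sqrt(C11/ρ) ≈ 3.90 km/s, so the anisotropy of a strongly oriented fabric is on the order of 100-150 m/s, consistent in magnitude with the differences discussed above.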

  16. Macro-scale complexity of nano- to micro-scale architecture of ...

    Indian Academy of Sciences (India)

    …mobile, due to the lack of correlation between the silicon oxide layer and the final olivine particles, leading … (olivine) systems. … A branched forsterite crystal system (scale bar = … therefore, that no template mechanism is operating between …

  17. Micro- and macro-scale self-organization in a dissipative plasma

    International Nuclear Information System (INIS)

    Skoric, M.M.; Sato, T.; Maluckov, A.; Jovanovic, M.S.

    1998-10-01

    We study a nonlinear three-wave interaction in an open dissipative model of stimulated Raman backscattering in a plasma. A hybrid kinetic-fluid scheme is proposed to include anomalous kinetic dissipation due to electron trapping and plasma wave breaking. We simulate a finite plasma with open boundaries and vary a transport parameter to examine a route to spatio-temporal complexity. An interplay between self-organization at micro (kinetic) and macro (wave/fluid) scales is revealed through quasi-periodic and intermittent evolution of dynamical variables, dissipative structures and related entropy rates. Evidence that entropy-rate extrema correspond to structural transitions is found. (author)

  18. Micro- and macro-scale petrophysical characterization of potential reservoir units from the Northern Israel

    Science.gov (United States)

    Haruzi, Peleg; Halisch, Matthias; Katsman, Regina; Waldmann, Nicolas

    2016-04-01

    Lower Cretaceous sandstone serves as a hydrocarbon reservoir in several places around the world, potentially including the Hatira formation in the Golan Heights, northern Israel. The purpose of the current research is to characterize the petrophysical properties of these sandstone units. The study is carried out by two alternative methods: conventional macroscopic lab measurements, and CT scanning, image processing and subsequent fluid-mechanics simulations at the micro-scale, followed by upscaling to the conventional macroscopic rock parameters (porosity and permeability). A comparison between the upscaled properties and those measured in the lab will be conducted, and the best way to upscale the microscopic rock characteristics will be analyzed based on models suggested in the literature. Proper characterization of the potential reservoir will provide the analytical parameters necessary for future experiments and modeling of macroscopic fluid flow behavior in the Lower Cretaceous sandstone.

  19. A global fingerprint of macro-scale changes in urban structure from 1999 to 2009

    International Nuclear Information System (INIS)

    Frolking, Steve; Milliman, Tom; Seto, Karen C; Friedl, Mark A

    2013-01-01

    Urban population now exceeds rural population globally, and 60–80% of global energy consumption by households, businesses, transportation, and industry occurs in urban areas. There is growing evidence that built-up infrastructure contributes to carbon emissions inertia, and that investments in infrastructure today have delayed climate cost in the future. Although the United Nations statistics include data on urban population by country and select urban agglomerations, there are no empirical data on built-up infrastructure for a large sample of cities. Here we present the first study to examine changes in the structure of the world’s largest cities from 1999 to 2009. Combining data from two space-borne sensors—backscatter power (PR) from NASA’s SeaWinds microwave scatterometer, and nighttime lights (NL) from NOAA’s defense meteorological satellite program/operational linescan system (DMSP/OLS)—we report large increases in built-up infrastructure stock worldwide and show that cities are expanding both outward and upward. Our results reveal previously undocumented recent and rapid changes in urban areas worldwide that reflect pronounced shifts in the form and structure of cities. Increases in built-up infrastructure are highest in East Asian cities, with Chinese cities rapidly expanding their material infrastructure stock in both height and extent. In contrast, Indian cities are primarily building out and not increasing in verticality. This new dataset will help characterize the structure and form of cities, and ultimately improve our understanding of how cities affect regional-to-global energy use and greenhouse gas emissions. (letter)

  20. The Jack mackerel Trachurus murphyi and the environmental macro-scale variables

    Directory of Open Access Journals (Sweden)

    Marco Espino

    2013-10-01

    This paper analyses information on various macro-scale environmental variables available since 1876 for the Southeast Pacific, together with more recent data on Jack mackerel Trachurus murphyi (Nichols, 1920) landings and biomass in the Peruvian sea, relating them to probable areas of water masses equivalent to Cold Coastal Waters (CCW) and Subtropical Surface Waters (SSW). It is concluded that the index of the Pacific Decadal Oscillation (PDO) presents expressions of variability consistent with those found for the Southern Oscillation Index (SOI), and that the detected changes in biomass of Jack mackerel T. murphyi in the Peruvian sea reflect changes in the availability of the fish stock associated with secular (SOI) and decadal (PDO) variability patterns. These fluctuations in stock availability impact fisheries in Ecuador, Peru and northern Chile, which show significant variations in their landings and would have given a biased picture of the state of abundance, leading to wrong diagnoses of the real situation of the exploited stocks. These patterns of variability would also affect the appearance of El Niño events, which begin in the southern-hemisphere autumn or spring depending on whether the current PDO phase is positive or negative. Periods of high (1876 – 1925 and 1976 – 2012) and low (1926 – 1975) variability are also identified in relation to the Euclidean distance of the variances of the SOI; and in relation to the PDO a distinction is made between warm (1925 – 1944 and 1975 – 1994), cold (1945 – 1974) and tempered or interface periods (1895 – 1924 and 1995 – 2012), the latter being explained by the interaction between periods of high variability.

  1. Microbial biofilm detection on food contact surfaces by macro-scale fluorescence imaging

    Science.gov (United States)

    Hyperspectral fluorescence imaging methods were utilized to evaluate the potential of multispectral fluorescence methods for detection of pathogenic biofilm formations on four types of food contact surface materials: stainless steel, high density polyethylene (HDPE) commonly used for cutting boards,...

  2. Characterizing pesticide sorption and degradation in macro scale biopurification systems using column displacement experiments

    International Nuclear Information System (INIS)

    Wilde, Tineke de; Spanoghe, Pieter; Mertens, Jan; Sniegowksi, Kristel; Ryckeboer, Jaak; Jaeken, Peter; Springael, Dirk

    2009-01-01

    The efficiency of biopurification systems to treat pesticide-contaminated water was previously studied in microcosms. To validate the obtained results, macrocosm systems were set up. Four pesticides (linuron, isoproturon, bentazone, and metalaxyl) were continuously applied to ten different organic substrate mixes. Retention of the pesticides was similar and in some cases slightly lower in the macrocosms compared to the microcosms, while differences in retention between the different mixes were minimal. Moreover, the ranking of the retention strength of the pesticides was identical to that observed in microcosms: linuron > isoproturon > metalaxyl > bentazone. Monod kinetics were used to describe the delayed degradation observed for isoproturon, metalaxyl and bentazone. No breakthrough of linuron was observed; this pesticide was thus identified as the most retained and/or degraded, followed by isoproturon, metalaxyl and bentazone. Finally, most of the matrix mixes efficient in degrading or retaining pesticides contained dried cow manure. - Transport of pesticides in macrocosms containing organic substrates
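Growth-linked Monod kinetics reproduce the delayed-degradation pattern mentioned above: the concentration barely moves until the degrader biomass has built up, then falls quickly. A minimal Euler-integration sketch; all parameter values are hypothetical, not fitted to the study's data:

```python
def monod_degradation(c0, x0, mu_max, ks, yield_coeff, dt=0.01, t_end=30.0):
    """Euler integration of Monod growth-linked degradation:
       dX/dt = mu_max * C / (Ks + C) * X,   dC/dt = -(1/Y) * dX/dt.
    A small initial biomass x0 produces the apparent lag ("delayed
    degradation") before the concentration drops."""
    c, x, t = c0, x0, 0.0
    series = [(t, c)]
    while t < t_end:
        growth = mu_max * c / (ks + c) * x   # biomass growth rate
        x += growth * dt
        c = max(0.0, c - growth / yield_coeff * dt)
        t += dt
        series.append((t, c))
    return series

# Hypothetical parameters for an isoproturon-like compound
# (concentrations in mg/L, rates in 1/day, time in days)
series = monod_degradation(c0=10.0, x0=0.01, mu_max=0.8, ks=2.0,
                           yield_coeff=0.5)
```

With these numbers the concentration is still near its initial value after two days but is essentially exhausted well before day 30, the qualitative signature of Monod-described delayed degradation.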

  3. Characterizing Micro- and Macro-Scale Seismicity from Bayou Corne, Louisiana

    Science.gov (United States)

    Baig, A. M.; Urbancic, T.; Karimi, S.

    2013-12-01

    The initiation of felt seismicity in Bayou Corne, Louisiana, coupled with other phenomena detected by residents of the nearby housing development, prompted a call to install a broadband seismic network to monitor subsurface deformation. The initial deployment was in place to characterize the deformation contemporaneous with the formation of a sinkhole located in close proximity to a salt dome. Seismic events generated during this period followed a swarm-like behaviour with moment magnitudes culminating around Mw 2.5. However, the seismic data recorded during this sequence suffer from poor signal to noise, onsets that are very difficult to pick, and the presence of a significant amount of energy arriving later in the waveforms. Efforts to understand the complexity in these waveforms are ongoing, and involve invoking the complexities inherent in recording in a highly attenuating swamp overlying a complex three-dimensional structure with the strong material-property contrast of the salt dome. In order to understand the event character, as well as to locally lower the completeness threshold of the sequence, a downhole array of 15 Hz sensors was deployed in a newly drilled well around the salt dome. Although the deployment lasted a little over a month, over 1000 events were detected down to moment magnitude Mw -3. Waveform quality tended to be excellent, with very distinct P and S wave arrivals observable across the array for most events. The highest-magnitude events were also seen on the surface network, providing the opportunity to observe the complexities introduced by site effects while overcoming the saturation effects on the higher-frequency downhole geophones. This hybrid downhole and surface array illustrates how a full picture of subsurface deformation is only made possible by combining high-frequency downhole instrumentation, to see the microseismicity, with a broadband array that accurately characterizes the source parameters of the larger-magnitude events. Our presentation focuses on investigating this deformation, characterizing the scaling behaviour and other source processes by taking advantage of the wide band afforded by the deployment.

  4. Characterizing pesticide sorption and degradation in macro scale biopurification systems using column displacement experiments

    Energy Technology Data Exchange (ETDEWEB)

    Wilde, Tineke de [Laboratory of Crop Protection Chemistry, Faculty of Bioscience Engineering, Ghent University, Coupure Links 653, B-9000 Ghent (Belgium)], E-mail: Tineke.DeWilde@UGent.be; Spanoghe, Pieter [Laboratory of Crop Protection Chemistry, Faculty of Bioscience Engineering, Ghent University, Coupure Links 653, B-9000 Ghent (Belgium); Mertens, Jan; Sniegowksi, Kristel; Ryckeboer, Jaak [Division of Soil and Water Management, Faculty of Bioscience Engineering, K.U. Leuven, Kasteelpark Arenberg 20, B-3001 Heverlee (Belgium); Jaeken, Peter [PCF-Royal Research Station of Gorsem, De Brede Akker 13, 3800 Sint-Truiden (Belgium); Springael, Dirk [Division of Soil and Water Management, Faculty of Bioscience Engineering, K.U. Leuven, Kasteelpark Arenberg 20, B-3001 Heverlee (Belgium)

    2009-04-15

    The efficiency of biopurification systems to treat pesticide-contaminated water was previously studied in microcosms. To validate the obtained results, macrocosm systems were set up. Four pesticides (linuron, isoproturon, bentazone, and metalaxyl) were continuously applied to ten different organic substrate mixes. Retention of the pesticides was similar and in some cases slightly lower in the macrocosms compared to the microcosms, while differences in retention between the different mixes were minimal. Moreover, the ranking of the retention strength of the pesticides was identical to that observed in microcosms: linuron > isoproturon > metalaxyl > bentazone. Monod kinetics were used to describe the delayed degradation observed for isoproturon, metalaxyl and bentazone. No breakthrough of linuron was observed; this pesticide was thus identified as the most retained and/or degraded, followed by isoproturon, metalaxyl and bentazone. Finally, most of the matrix mixes efficient in degrading or retaining pesticides contained dried cow manure. - Transport of pesticides in macrocosms containing organic substrates.

  5. The divining root: moisture-driven responses of roots at the micro- and macro-scale.

    Science.gov (United States)

    Robbins, Neil E; Dinneny, José R

    2015-04-01

    Water is fundamental to plant life, but the mechanisms by which plant roots sense and respond to variations in water availability in the soil are poorly understood. Many studies of responses to water deficit have focused on large-scale effects of this stress, but have overlooked responses at the sub-organ or cellular level that give rise to emergent whole-plant phenotypes. We have recently discovered hydropatterning, an adaptive environmental response in which roots position new lateral branches according to the spatial distribution of available water across the circumferential axis. This discovery illustrates that roots are capable of sensing and responding to water availability at spatial scales far lower than those normally studied for such processes. This review will explore how roots respond to water availability with an emphasis on what is currently known at different spatial scales. Beginning at the micro-scale, there is a discussion of water physiology at the cellular level and proposed sensory mechanisms cells use to detect osmotic status. The implications of these principles are then explored in the context of cell and organ growth under non-stress and water-deficit conditions. Following this, several adaptive responses employed by roots to tailor their functionality to the local moisture environment are discussed, including patterning of lateral root development and generation of hydraulic barriers to limit water loss. We speculate that these micro-scale responses are necessary for optimal functionality of the root system in a heterogeneous moisture environment, allowing for efficient water uptake with minimal water loss during periods of drought. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  6. Advances in multiscale modeling of materials behavior: from nano to macro scales

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2004-01-01

    Full text. The development of micromechanical devices, thin films, nano-layered structures and nano-composite coating materials, such as those used in the microelectronics, transportation, medical diagnostics and implant industries, requires materials that possess a high degree of reliability, structural stability, mechanical strength, high ductility, toughness and resistance to fracture and fatigue. To achieve these properties, many of these devices can be constructed from micro/nano-structured materials, which often exhibit enhanced mechanical strength and ductility when compared to conventional materials. However, although the promise of such materials has been demonstrated in laboratories, it has not made inroads into commercial manufacturing in the area of structural materials. A primary impediment to bringing these technologies to market is the inability to scale up from small-scale laboratory experiments to manufacturing methods. Our work at WSU has been to develop theories and computational tools, verified by experiments, that are required to understand and design micro- and nano-structured materials for various structural applications. The results of this work have a major impact on this emerging industry and are being used in many national and international research institutes.

  7. Evaluating decadal predictions of northern hemispheric cyclone frequencies

    Directory of Open Access Journals (Sweden)

    Tim Kruschke

    2014-04-01

    Full Text Available Mid-latitudinal cyclones are a key factor for understanding regional anomalies in primary meteorological parameters such as temperature or precipitation. Extreme cyclones can produce notable impacts on human society and economy, for example, by causing enormous economic losses through wind damage. Based on 41 annually initialised (1961–2001) hindcast ensembles, this study evaluates the ability of a single-model decadal forecast system (MPI-ESM-LR) to provide skilful probabilistic three-category forecasts (enhanced, normal or decreased) of winter (ONDJFM) extra-tropical cyclone frequency over the Northern Hemisphere with lead times from 1 yr up to a decade. It is shown that these predictions exhibit some significant skill, mainly for lead times of 2–5 yr, especially over the North Atlantic and Pacific. Skill for intense cyclones is generally higher than for all detected systems. A comparison of decadal hindcasts from two different initialisation techniques indicates that initialising from reanalysis fields yields slightly better results for the first forecast winter (months 10–15), while initialisation based on an assimilation experiment provides better skill for lead times between 2 and 5 yr. The reasons and mechanisms behind this predictive skill are subject to future work. Preliminary analyses suggest a strong relationship of the model's skill over the North Atlantic with the ability to predict upper ocean temperatures modulating lower troposphere baroclinicity for the respective area and time scales.

  8. Whole-brain functional connectivity predicted by indirect structural connections

    DEFF Research Database (Denmark)

    Røge, Rasmus; Ambrosen, Karen Marie Sandø; Albers, Kristoffer Jon

    2017-01-01

    Modern functional and diffusion magnetic resonance imaging (fMRI and dMRI) provide data from which macro-scale networks of functional and structural whole brain connectivity can be estimated. Although networks derived from these two modalities describe different properties of the human brain, the...

  9. Impact of satellite data assimilation on the predictability of monsoon intraseasonal oscillations in a regional model

    KAUST Repository

    Parekh, Anant

    2017-04-07

    This study reports the improvement in the predictability of circulation and precipitation associated with monsoon intraseasonal oscillations (MISO) when the initial state is produced by assimilating Atmospheric Infrared Sounder (AIRS) retrieved temperature and water vapour profiles in the Weather Research and Forecasting (WRF) model. Two separate simulations are carried out for nine years (2003 to 2011). In the first simulation, forcing is from the National Centers for Environmental Prediction (NCEP, CTRL), and in the second, apart from the NCEP forcing, AIRS temperature and moisture profiles are assimilated (ASSIM). Ten active and break cases are identified from each simulation. The three-dimensional temperature states of the identified active and break cases are perturbed using a twin-perturbation method, and predictability tests are carried out. The analysis reveals that the limit of predictability of the low-level zonal wind is improved by four (three) days during the active (break) phase. Similarly, the predictability of the upper-level zonal wind (precipitation) is enhanced by four (two) and two (four) days respectively during the active and break phases. This suggests that initializing with AIRS observations can enhance the predictability limit of MISOs in WRF. A more realistic baroclinic response and a better representation of the vertical state of the atmosphere associated with the monsoon enhance the predictability of circulation and rainfall.

  10. Multi-level machine learning prediction of protein–protein interactions in Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Julian Zubek

    2015-07-01

    Full Text Available Accurate identification of protein–protein interactions (PPI) is the key step in understanding proteins’ biological functions, which are typically context-dependent. Many existing PPI predictors rely on aggregated features from protein sequences; however, only a few methods exploit local information about specific residue contacts. In this work we present a two-stage machine learning approach for prediction of protein–protein interactions. We start with the carefully filtered data on protein complexes available for Saccharomyces cerevisiae in the Protein Data Bank (PDB) database. First, we build linear descriptions of interacting and non-interacting sequence segment pairs based on their inter-residue distances. Secondly, we train machine learning classifiers to predict binary segment interactions for any two short sequence fragments. The final prediction of the protein–protein interaction is done using the 2D matrix representation of all-against-all possible interacting sequence segments of both analysed proteins. The level-I predictor achieves 0.88 AUC for micro-scale (i.e., residue-level) prediction. The level-II predictor improves the results further through a more complex learning paradigm. We perform a 30-fold macro-scale (i.e., protein-level) cross-validation experiment. The level-II predictor using PSIPRED-predicted secondary structure reaches 0.70 precision, 0.68 recall, and 0.70 AUC, whereas other popular methods provide results below the 0.6 threshold (recall, precision, AUC). Our results demonstrate that the multi-scale sequence-feature aggregation procedure is able to improve the machine learning results by more than 10% compared to other sequence representations. Prepared datasets and source code for our experimental pipeline are freely available for download from: http://zubekj.github.io/mlppi/ (open source Python implementation, OS independent.
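
The two-stage scheme (level-I segment scoring, level-II aggregation of the all-against-all segment matrix) can be illustrated with a deliberately tiny toy; the hydrophobicity-matching scorer below is a hypothetical stand-in for the trained level-I classifier, not the authors' model:

```python
def segment_score(seg_a, seg_b):
    """Toy level-I scorer: fraction of positions with matching
    hydrophobic/polar residue classes (a stand-in for the trained
    residue-contact classifier)."""
    classes = lambda s: ['H' if r in 'AVLIMFWC' else 'P' for r in s]
    a, b = classes(seg_a), classes(seg_b)
    return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

def predict_interaction(prot_a, prot_b, win=4, threshold=0.9):
    """Level-II aggregation: build the all-against-all segment score
    matrix and call an interaction if any cell exceeds the threshold."""
    segs_a = [prot_a[i:i + win] for i in range(len(prot_a) - win + 1)]
    segs_b = [prot_b[i:i + win] for i in range(len(prot_b) - win + 1)]
    matrix = [[segment_score(sa, sb) for sb in segs_b] for sa in segs_a]
    return max(max(row) for row in matrix) >= threshold

print(predict_interaction("AVLIKRDE", "MFWCSTNQ"))
```

In the real pipeline the level-I scores come from a trained classifier over inter-residue distance features, and level-II is itself a learned model over the 2D score matrix rather than a simple max-and-threshold rule.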

  11. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    Science.gov (United States)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent system and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve the optimal model performance. The idea in the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections to replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor in characterizing the higher-order moments in a consistent way to improve model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities, such as the tracer spectrum and the fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with

  12. WALS Prediction

    NARCIS (Netherlands)

    Magnus, J.R.; Wang, W.; Zhang, Xinyu

    2012-01-01

    Abstract: Prediction under model uncertainty is an important and difficult issue. Traditional prediction methods (such as pretesting) are based on model selection followed by prediction in the selected model, but the reported prediction and the reported prediction variance ignore the uncertainty

  13. Prediction of North Pacific Height Anomalies During Strong Madden-Julian Oscillation Events

    Science.gov (United States)

    Kai-Chih, T.; Barnes, E. A.; Maloney, E. D.

    2017-12-01

    The Madden-Julian Oscillation (MJO) creates strong variations in extratropical atmospheric circulations that have important implications for subseasonal-to-seasonal prediction. In particular, certain MJO phases are characterized by a consistent modulation of geopotential height in the North Pacific and adjacent regions across different MJO events. Until recently, only limited research has examined the relationship between these robust MJO tropical-extratropical teleconnections and model prediction skill. In this study, reanalysis data (MERRA and ERA-Interim) and ECMWF ensemble hindcasts are used to demonstrate that robust teleconnections in specific MJO phases and time lags are also characterized by excellent agreement in the prediction of geopotential height anomalies across model ensemble members at forecast leads of up to 3 weeks. These periods of enhanced prediction capability extend the possibility for skillful extratropical weather prediction beyond the traditional 10-13 day limits. Furthermore, we examine the phase dependency of teleconnection robustness using a Linear Baroclinic Model (LBM), and the result is consistent with the ensemble hindcasts: the anomalous heating of MJO phase 2 (phase 6) can consistently generate positive (negative) geopotential height anomalies around the extratropical Pacific with a lead of 15-20 days, while other phases are more sensitive to the variation of the mean state.

  14. Using synchronization in multi-model ensembles to improve prediction

    Science.gov (United States)

    Hiemstra, P.; Selten, F.

    2012-04-01

    In recent decades, many climate models have been developed to understand and predict the behavior of the Earth's climate system. Although these models are all based on the same basic physical principles, they still show different behavior, caused, for example, by different choices in the parametrization of sub-grid-scale processes. One method to combine these imperfect models is to run a multi-model ensemble: the models are given identical initial conditions and are integrated forward in time, and a multi-model estimate can be, for example, a weighted mean of the ensemble members. We propose to go a step further and try to obtain synchronization between the imperfect models by connecting the multi-model ensemble and exchanging information. The combined multi-model ensemble is also known as a supermodel. The supermodel has learned from observations how to optimally exchange information between the ensemble members. In this study we focused on the density and formulation of the connections within the supermodel. The main question was whether we could obtain synchronization between two climate models when connecting only a subset of their state spaces. Limiting the connected subspace has two advantages: 1) it limits the transfer of data (bytes) between the ensemble members, which can be a limiting factor in large-scale climate models, and 2) learning the optimal connection strategy from observations is easier. To answer the research question, we connected two identical quasi-geostrophic (QG) atmospheric models to each other, where the models have different initial conditions. The QG model is a qualitatively realistic simulation of the winter flow on the Northern Hemisphere, has three layers and uses a spectral implementation. We connected the models in the original spherical harmonic state space, and in linear combinations of these spherical harmonics, i.e. Empirical Orthogonal Functions (EOFs).
We show that when connecting through spherical harmonics, we only need to connect 28% of
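
The core idea, synchronizing imperfect models by exchanging information through only part of the state space, can be demonstrated on a far smaller chaotic system. Below, two identical Lorenz-63 models (an assumed stand-in for the QG pair) start from different initial conditions and are mutually nudged through the x component only, i.e. one third of the state vector:

```python
def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def coupled_step(s1, s2, k=10.0, dt=0.005):
    """Euler step of two mutually nudged Lorenz models; the coupling
    acts on x only, so only a subset of the state space is 'connected'."""
    d1, d2 = lorenz_rhs(s1), lorenz_rhs(s2)
    s1n = (s1[0] + dt * (d1[0] + k * (s2[0] - s1[0])),
           s1[1] + dt * d1[1], s1[2] + dt * d1[2])
    s2n = (s2[0] + dt * (d2[0] + k * (s1[0] - s2[0])),
           s2[1] + dt * d2[1], s2[2] + dt * d2[2])
    return s1n, s2n

s1, s2 = (1.0, 1.0, 1.0), (-4.0, 6.0, 12.0)   # different initial conditions
for _ in range(20000):                         # 100 model time units
    s1, s2 = coupled_step(s1, s2)
err = max(abs(a - b) for a, b in zip(s1, s2))
```

Despite the partial coupling, the two trajectories converge onto each other, which mirrors the finding that connecting only a subset of the spectral coefficients can suffice for synchronization.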

  15. From nano- to macro-scale: nanotechnology approaches for spatially controlled delivery of bioactive factors for bone and cartilage engineering.

    Science.gov (United States)

    Santo, Vítor E; Gomes, Manuela E; Mano, João F; Reis, Rui L

    2012-07-01

    The field of biomaterials has advanced towards the molecular and nanoscale design of bioactive systems for tissue engineering, regenerative medicine and drug delivery. Spatial cues are displayed in the 3D extracellular matrix and can include signaling gradients, such as those observed during chemotaxis. Architectures range from the nanometer to the centimeter length scales as exemplified by extracellular matrix fibers, cells and macroscopic shapes. The main focus of this review is the application of a biomimetic approach by the combination of architectural cues, obtained through the application of micro- and nanofabrication techniques, with the ability to sequester and release growth factors and other bioactive agents in a spatiotemporal controlled manner for bone and cartilage engineering.

  16. Exploring the macro-scale CO2 mitigation potential of photovoltaics and wind energy in Europe's energy transition

    International Nuclear Information System (INIS)

    Usubiaga, Arkaitz; Acosta-Fernández, José; McDowall, Will; Li, Francis G.N.

    2017-01-01

    Replacing traditional technologies by renewables can lead to an increase of emissions during early diffusion stages if the emissions avoided during the use phase are exceeded by those associated with the deployment of new units. Based on historical developments and on counterfactual scenarios in which we assume that selected renewable technologies did not diffuse, we conclude that onshore and offshore wind energy have made a positive contribution to climate change mitigation since the beginning of their diffusion in the EU27. In contrast, photovoltaic panels did not pay off from an environmental standpoint until very recently, since the benefits expected at the individual plant level were offset until 2013 by the CO2 emissions related to the construction and deployment of the next generation of panels. Considering the varied energy mixes and penetration rates of renewable energies in different areas, several countries can experience similar time gaps between the installation of the first renewable power plants and the moment at which the emissions from their infrastructure are offset. The analysis demonstrates that the time profile of renewable energy emissions can be relevant for target-setting and detailed policy design, particularly when renewable energy strategies are pursued in concert with carbon pricing through cap-and-trade systems. - Highlights: • There is a time gap between the deployment of renewables and net CO2 mitigation. • Offshore wind energy has contributed to net emission reductions in the EU27 since 2004. • PV panels have contributed to net emission reductions in the EU27 since 2013. • The time profile of renewable energy emissions is not usually considered in policy design. • But it is important when renewable energy strategies are combined with carbon pricing.
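
The highlighted time gap can be made concrete with a stylized payback calculation. All numbers below (embodied emissions per kW, annual avoided emissions, fleet growth rate) are hypothetical, chosen only to show how rapid fleet growth postpones, or even prevents, the break-even year:

```python
def breakeven_year(embodied_per_kw, avoided_per_kw_yr, growth, years=30):
    """Return the first year in which cumulative avoided emissions
    exceed cumulative embodied emissions for a growing fleet,
    or None if break-even is not reached within the horizon."""
    installed = 0.0          # cumulative capacity, kW
    debt = 0.0               # cumulative embodied emissions, kg CO2
    avoided = 0.0            # cumulative avoided emissions, kg CO2
    new_capacity = 1.0       # first year's installations, kW
    for year in range(1, years + 1):
        debt += new_capacity * embodied_per_kw
        installed += new_capacity
        avoided += installed * avoided_per_kw_yr
        if avoided > debt:
            return year
        new_capacity *= 1.0 + growth
    return None

# Hypothetical: 2000 kg CO2/kW embodied, 500 kg CO2/kW/yr avoided
print(breakeven_year(2000.0, 500.0, growth=0.1))
print(breakeven_year(2000.0, 500.0, growth=0.4))
```

With modest growth the fleet breaks even within a decade; with 40% annual growth the embodied emissions of ever-larger cohorts are never offset within the horizon, which is the mechanism behind the PV result described above.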

  17. Structural Foaming at the Nano-, Micro-, and Macro-Scales of Continuous Carbon Fiber Reinforced Polymer Matrix Composites

    Science.gov (United States)

    2012-10-29

    ...structural porosity at MNM scales could be introduced into the matrix, the carbon fiber reinforcement, and during prepreg lamination processing, without ... areas, including fibers. Furthermore, investigate prepreg thickness and resin content effects on the thermomechanical performance of laminated ... Develop constitutive models for nano-foamed and micro-foamed PMC systems, from single-ply prepreg to multilayer laminates.

  18. Climate prediction and predictability

    Science.gov (United States)

    Allen, Myles

    2010-05-01

    Climate prediction is generally accepted to be one of the grand challenges of the Geophysical Sciences. What is less widely acknowledged is that fundamental issues have yet to be resolved concerning the nature of the challenge, even after decades of research in this area. How do we verify or falsify a probabilistic forecast of a singular event such as anthropogenic warming over the 21st century? How do we determine the information content of a climate forecast? What does it mean for a modelling system to be "good enough" to forecast a particular variable? How will we know when models and forecasting systems are "good enough" to provide detailed forecasts of weather at specific locations or, for example, of the risks associated with global geo-engineering schemes? This talk will provide an overview of these questions in the light of recent developments in multi-decade climate forecasting, drawing on concepts from information theory, machine learning and statistics. I will draw extensively, but not exclusively, from the experience of the climateprediction.net project, which runs multiple versions of climate models on personal computers.
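
One standard handle on "verifying a probabilistic forecast" is a proper scoring rule. The sketch below scores two forecasters on a synthetic binary event with the Brier and ignorance (log) scores; the data and the "sharp" forecaster (constructed with hindsight) are purely illustrative:

```python
import math

def brier(p, outcome):
    """Brier score of one binary forecast (lower is better)."""
    return (p - outcome) ** 2

def ignorance(p, outcome):
    """Ignorance (negative log2 likelihood) score (lower is better)."""
    q = p if outcome == 1 else 1.0 - p
    return -math.log2(max(q, 1e-12))

# Synthetic verification set: the event occurs in 7 of 10 cases
outcomes = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
forecasters = {
    "sharp": [0.9 if o == 1 else 0.2 for o in outcomes],  # idealized, hindsight
    "vague": [0.5] * len(outcomes),                       # coin-flip baseline
}

scores = {}
for name, probs in forecasters.items():
    n = len(outcomes)
    scores[name] = (sum(brier(p, o) for p, o in zip(probs, outcomes)) / n,
                    sum(ignorance(p, o) for p, o in zip(probs, outcomes)) / n)
print(scores)
```

Lower is better for both scores; the ignorance score is the information-theoretic one, since it measures the surprise (in bits) of the verifying outcome under the forecast.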

  19. Micromechanics model for predicting anisotropic electrical conductivity of carbon fiber composite materials

    Science.gov (United States)

    Haider, Mohammad Faisal; Haider, Md. Mushfique; Yasmeen, Farzana

    2016-07-01

    Heterogeneous materials, such as composites, consist of clearly distinguishable constituents (or phases) that show different electrical properties. Multifunctional composites have anisotropic electrical properties that can be tailored for a particular application. The effective anisotropic electrical conductivity of composites is strongly affected by many parameters, including the volume fractions, distributions, and orientations of the constituents. Given the electrical properties of the constituents, one important goal of the micromechanics of materials is to predict the electrical response of the heterogeneous material on the basis of the geometries and properties of the individual phases, a task known as homogenization. The benefit of homogenization is that the behavior of a heterogeneous material can be determined without resorting to testing it. Furthermore, continuum micromechanics can predict the full multi-axial properties and responses of inhomogeneous materials, which are anisotropic in nature. Effective electrical conductivity is estimated here using a classical micromechanics technique, the composite cylinder assemblage method, which captures the effect of the fiber/matrix electrical properties and their volume fractions on the micro-scale composite response. The composite cylinder assemblage method (CCM) is an analytical theory based on the assumption that the composite has a periodic structure. The CCM has been extended to handle variable fiber shapes/arrays at the same volume fraction, interphase analysis, and related cases. The CCM is a continuum-based micromechanics model that provides closed-form expressions for upper-level length scales, such as macro-scale composite responses, in terms of the properties, shapes, orientations and constituent distributions at lower length scales, such as the micro-scale.
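
For electrical conduction, the composite cylinder assemblage yields closed-form effective conductivities: a rule of mixtures in the fiber direction and the classical two-phase CCA expression transversely. A minimal sketch with illustrative (not measured) property values for a carbon fiber/epoxy system:

```python
def ccm_conductivity(k_f, k_m, v_f):
    """Closed-form CCA estimates for a unidirectional composite.

    k_f, k_m : fiber and matrix conductivities (S/m)
    v_f      : fiber volume fraction
    Returns (axial, transverse) effective conductivities.
    """
    k_axial = v_f * k_f + (1.0 - v_f) * k_m          # rule of mixtures
    # Composite cylinder assemblage result, transverse direction:
    k_trans = k_m * ((k_f + k_m) + v_f * (k_f - k_m)) / \
                    ((k_f + k_m) - v_f * (k_f - k_m))
    return k_axial, k_trans

# Illustrative values: carbon fiber ~6e4 S/m, epoxy ~1e-10 S/m, v_f = 0.6
ka, kt = ccm_conductivity(6.0e4, 1.0e-10, 0.6)
```

For a conductive fiber in an insulating matrix the axial estimate is fiber-dominated, while the transverse one stays within a small factor of the matrix value; this is the strong anisotropy the abstract refers to.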

  20. For how long can we predict the weather? - Insights into atmospheric predictability from global convection-allowing simulations

    Science.gov (United States)

    Judt, Falko

    2017-04-01

    A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day-long "nature run" and a simulation that was perturbed with small-amplitude noise but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days, once they contaminate the baroclinic zones. After 16 days, the globally averaged error saturates, suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex
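
The "identical twin" protocol scales down to any chaotic system: integrate a nature run, restart it with a minute perturbation, and track the difference until it saturates. A toy version with the fully chaotic logistic map (a hypothetical stand-in for the 4-km global model) reproduces the qualitative stages of rapid error growth followed by saturation:

```python
def logistic(x, r=4.0):
    """Fully chaotic logistic map for r = 4."""
    return r * x * (1.0 - x)

nature, twin = 0.3, 0.3 + 1e-12   # twin perturbed with tiny "noise"
errors = []
for step in range(60):
    nature, twin = logistic(nature), logistic(twin)
    errors.append(abs(nature - twin))

# The error grows roughly exponentially at first (the map's Lyapunov
# exponent is ln 2, i.e. a doubling per step), then saturates near the
# attractor size, O(1), after which no predictability remains.
```

The saturation of the globally averaged error in the MPAS experiment is the large-model analogue of this O(1) plateau.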

  1. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  2. Predictive medicine

    NARCIS (Netherlands)

    Boenink, Marianne; ten Have, Henk

    2015-01-01

    In the last part of the twentieth century, predictive medicine has gained currency as an important ideal in biomedical research and health care. Research in the genetic and molecular basis of disease suggested that the insights gained might be used to develop tests that predict the future health

  3. Validation of software components for the prediction of irradiation-induced damage of RPV steel

    International Nuclear Information System (INIS)

    Bergner, Frank; Birkenheuer, Uwe; Ulbricht, Andreas

    2010-04-01

    The modelling of irradiation-induced damage of RPV steels, from primary cascades up to the change of mechanical properties, bridging length scales from the atomic level up to the macro-scale and time scales up to years, contributes essentially to an improved understanding of the phenomenon of neutron embrittlement. In the future, modelling may become a constituent of the procedure to evaluate RPV safety. The selected two-step approach is based upon the coupling of a rate-theory module, aimed at simulating the evolution of the size distribution of defect-solute clusters, with a hardening module, aimed at predicting the yield stress increase. The scope of the investigation consists of the development and validation of the corresponding numerical tools. In order to validate these tools, the output of representative simulations is compared with results from small-angle neutron scattering experiments and tensile tests performed on neutron-irradiated RPV steels. Using the developed rate-theory module it is possible to simulate the evolution of the size, concentration and composition of mixed Cu-vacancy clusters over the relevant ranges of size (up to 10,000 atoms) and time (up to tens of years). The connection between the rate-theory model and hardening is based upon both the mean spacing and the strength of the obstacles to dislocation glide. As a result of the validation procedure for the numerical tools, we have found that the essential trends of the irradiation-induced yield stress increase of Cu-bearing and low-Cu RPV steels are reproduced correctly. First ideas on how to take into account the effect of Ni on both cluster evolution and hardening are worked out.
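
The link from cluster statistics to hardening that such a two-step approach relies on is commonly expressed with a dispersed-barrier relation, Δσ = M α μ b √(N d). The sketch below uses this standard form; the constants (Taylor factor M, shear modulus μ, Burgers vector b, obstacle strength α) are textbook-style illustrative values, not the report's calibrated numbers:

```python
import math

def dispersed_barrier_hardening(number_density, diameter,
                                alpha=0.1, taylor=3.06,
                                shear_modulus=83.0e9, burgers=2.48e-10):
    """Yield stress increase (Pa) from a dispersed obstacle population.

    number_density : obstacles per m^3
    diameter       : mean obstacle diameter (m)
    alpha          : obstacle strength (dimensionless, roughly 0.05-0.2)
    """
    spacing_term = math.sqrt(number_density * diameter)  # 1 / mean spacing
    return taylor * alpha * shear_modulus * burgers * spacing_term

# Illustrative Cu-vacancy cluster population: 1e23 m^-3, 2 nm diameter
dsigma = dispersed_barrier_hardening(1.0e23, 2.0e-9)
print(round(dsigma / 1e6, 1), "MPa")
```

The √(N d) dependence is exactly why the rate-theory output (cluster number density and size) is what the hardening module needs.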

  4. A data-driven prediction method for fast-slow systems

    Science.gov (United States)

    Groth, Andreas; Chekroun, Mickael; Kondrashov, Dmitri; Ghil, Michael

    2016-04-01

    In this work, we present a prediction method for processes that exhibit a mixture of variability on slow and fast time scales. The method relies on combining empirical model reduction (EMR) with singular spectrum analysis (SSA). EMR is a data-driven methodology for constructing stochastic low-dimensional models that account for nonlinearity and serial correlation in the estimated noise, while SSA provides a decomposition of the complex dynamics into low-order components that capture spatio-temporal behavior on different time scales. Our study focuses on the data-driven modeling of partial observations from dynamical systems that exhibit power spectra with broad peaks. The main result in this talk is that the combination of SSA pre-filtering with EMR modeling improves, under certain circumstances, the modeling and prediction skill for such a system, as compared to a standard EMR prediction based on raw data. Specifically, it is the separation into "fast" and "slow" temporal scales by the SSA pre-filtering that achieves the improvement. We show, in particular, that the resulting EMR-SSA emulators help predict intermittent behavior such as rapid transitions between specific regions of the system's phase space. This capability of the EMR-SSA prediction will be demonstrated on two low-dimensional models: the Rössler system and a Lotka-Volterra model for interspecies competition. In either case, the chaotic dynamics is produced through a Shilnikov-type mechanism, and we argue that the latter seems to be an important ingredient for the good prediction skill of EMR-SSA emulators. Shilnikov-type behavior has been shown to arise in various complex geophysical fluid models, such as baroclinic quasi-geostrophic flows in the mid-latitude atmosphere and wind-driven double-gyre ocean circulation models. This pervasiveness of the Shilnikov mechanism of fast-slow transition opens interesting perspectives for the extension of the proposed EMR-SSA approach to more realistic situations.
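
The SSA pre-filtering step can be sketched in a few lines of NumPy: embed the series into a trajectory (Hankel) matrix, take its SVD, and map each rank-one term back to a series by anti-diagonal averaging. This is generic SSA, shown here separating a slow oscillation from fast noise; it is not the authors' EMR-SSA code:

```python
import numpy as np

def ssa_components(series, window):
    """Decompose a 1-D series into SSA reconstructed components (RCs).

    Returns an array of shape (rank, len(series)); the RCs sum back
    to the original series."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series as columns
    traj = np.column_stack([series[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    rcs = []
    for j in range(len(s)):
        elem = s[j] * np.outer(u[:, j], vt[j])       # elementary matrix
        # Anti-diagonal averaging back to a 1-D series
        rc = np.array([np.mean(elem[::-1].diagonal(i - window + 1))
                       for i in range(n)])
        rcs.append(rc)
    return np.array(rcs)

t = np.arange(200)
slow = np.sin(2 * np.pi * t / 50)                    # "slow" oscillation
rng = np.random.default_rng(0)
x = slow + 0.3 * rng.standard_normal(200)            # plus fast noise
rcs = ssa_components(x, window=40)
smooth = rcs[0] + rcs[1]   # leading pair captures the slow oscillation
```

An EMR model would then be fitted to the slow part (here `rcs[0] + rcs[1]`) rather than to the raw series, which is the pre-filtering step described above.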

  5. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry, with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented, using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a modern reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability, respectively. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step, which also yields better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root-mean-square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
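
The preprocessing and evaluation protocol described above (log-transformed target, 80/20 split, relative-error measures) is model-agnostic; the sketch below applies it to synthetic data with a plain least-squares baseline standing in for the neural and fuzzy models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for porosity, grain density and a Thomeer-like input
n = 500
X = np.column_stack([rng.uniform(0.05, 0.35, n),     # air porosity
                     rng.uniform(2.6, 2.9, n),       # grain density, g/cc
                     rng.uniform(0.1, 10.0, n)])     # Thomeer-like input
log_k = 2.0 + 8.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * X[:, 2] \
        + 0.2 * rng.standard_normal(n)               # log10 permeability

# 80/20 split, fit in log space (mirrors the paper's preprocessing)
idx = rng.permutation(n)
train, test = idx[:400], idx[400:]
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A[train], log_k[train], rcond=None)
pred = A[test] @ coef

k_true, k_pred = 10.0 ** log_k[test], 10.0 ** pred
aare = np.mean(np.abs(k_pred - k_true) / k_true)     # avg abs relative error
print(f"AARE: {aare:.3f}")
```

Any of the compared predictors can be dropped into the fit step; the split, the log10 transform, and the error measures stay the same.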

  6. Prediction Markets

    DEFF Research Database (Denmark)

    Horn, Christian Franz; Ivens, Bjørn Sven; Ohneberg, Michael

    2014-01-01

    In recent years, Prediction Markets gained growing interest as a forecasting tool among researchers as well as practitioners, which resulted in an increasing number of publications. In order to track the latest development of research, comprising the extent and focus of research, this article...... provides a comprehensive review and classification of the literature related to the topic of Prediction Markets. Overall, 316 relevant articles, published in the timeframe from 2007 through 2013, were identified and assigned to a herein presented classification scheme, differentiating between descriptive...... works, articles of theoretical nature, application-oriented studies and articles dealing with the topic of law and policy. The analysis of the research results reveals that more than half of the literature pool deals with the application and actual function tests of Prediction Markets. The results...

  7. Predicting unpredictability

    Science.gov (United States)

    Davis, Steven J.

    2018-04-01

    Analysts and markets have struggled to predict a number of phenomena, such as the rise of natural gas, in US energy markets over the past decade or so. Research shows the challenge may grow because the industry — and consequently the market — is becoming increasingly volatile.

  8. A Baroclinic Eddy Mixer: Supercritical Transformation of Compensated Eddies

    Science.gov (United States)

    Sutyrin, G.

    2016-02-01

In contrast to many real-ocean rings and eddies, circular vortices with the lower layer initially at rest tend to be highly unstable in idealized two-layer models, unless their radius is made small or the lower-layer depth is made artificially large. Numerical simulations of unstable vortices with parameters typical of ocean eddies revealed strong deformations and pulsations of the vortex core in the two-layer setup, due to the development of corotating tripolar structures in the lower layer during their supercritical transformation. The addition of a middle layer with uniform potential vorticity weakens the vertical coupling between the upper and lower layers, which enhances vortex stability and makes the vortex lifespan more realistic. Such a three-layer vortex model possesses a smaller lower-interface slope than the two-layer model, which reduces the potential vorticity gradient in the lower layer and yields less unstable configurations. While cyclonic eddies become only slightly deformed and remain nearly circular when the middle layer with uniform potential vorticity is added, anticyclonic eddies tend toward corotating, pulsating elongated states through potential vorticity stripping and stirring. The enhanced vortex stability in such a three-layer setup has important implications for an adequate representation of the energy transfer across scales.

  9. Internal wave emission from baroclinic jets: experimental results

    Science.gov (United States)

    Borcia, Ion D.; Rodda, Costanza; Harlander, Uwe

    2016-04-01

Large-scale balanced flows can spontaneously radiate meso-scale inertia-gravity waves (IGWs) and are thus, in fact, unbalanced. While flow-dependent parameterizations exist for the radiation of IGWs from orographic and convective sources, the situation is less developed for spontaneously emitted IGWs. Observations identify increased IGW activity in the vicinity of jet exit regions. A direct interpretation of these observations in terms of geostrophic adjustment might be tempting. However, directly applying this concept to the parameterization of spontaneous imbalance is difficult, since the dynamics itself continuously re-establishes an unbalanced flow, which then sheds imbalances by IGW radiation. Examining spontaneous IGW emission in the atmosphere and validating parameterization schemes confronts the scientist with particular challenges. Due to its extreme complexity, IGW emission will always be embedded in the interaction of a multitude of interdependent processes, many of which are hardly detectable from analysis or campaign data. The benefits of repeated and more detailed measurements, while representing the only source of information about the real atmosphere, are limited by the non-repeatability of an atmospheric situation: the same event never occurs twice. This argues for complementary laboratory experiments, which can provide a more focused dialogue between experiment and theory. Indeed, life cycles are also examined in rotating-annulus laboratory experiments. These experiments might thus form a useful empirical benchmark for theoretical and modeling work that is also independent of any sort of subgrid model. In addition, the more direct correspondence between experimental and model data, together with the reproducibility of the data, makes laboratory experiments a powerful testbed for parameterizations. Here we show first results from a small rotating-annulus experiment, and we further present our new experimental facility for studying wave emission from jets and fronts.

  10. The Impact of ICTs Diffusion on MDGs and Baroclinic Digital ...

    African Journals Online (AJOL)

    (b) To ascertain the ICT impact on economic growth, innovations and education ... with fully functional interactive e-learning facilities at Zimbabwe Open University. (e) To recommend a development model or a framework for economic growth ...

  11. Unification predictions

    International Nuclear Information System (INIS)

    Ghilencea, D.; Ross, G.G.; Lanzagorta, M.

    1997-07-01

    The unification of gauge couplings suggests that there is an underlying (supersymmetric) unification of the strong, electromagnetic and weak interactions. The prediction of the unification scale may be the first quantitative indication that this unification may extend to unification with gravity. We make a precise determination of these predictions for a class of models which extend the multiplet structure of the Minimal Supersymmetric Standard Model to include the heavy states expected in many Grand Unified and/or superstring theories. We show that there is a strong cancellation between the 2-loop and threshold effects. As a result the net effect is smaller than previously thought, giving a small increase in both the unification scale and the value of the strong coupling at low energies. (author). 15 refs, 5 figs

  12. Multi-scale modeling and analysis of convective boiling: towards the prediction of CHF in rod bundles

    International Nuclear Information System (INIS)

    Niceno, B.; Sato, Y.; Badillo, A.; Andreani, M.

    2010-01-01

In this paper we describe current activities within the project Multi-Scale Modeling and Analysis of convective boiling (MSMA), conducted jointly by the Paul Scherrer Institute (PSI) and the Swiss Nuclear Utilities (Swissnuclear). The long-term aim of the MSMA project is to formulate improved closure laws for Computational Fluid Dynamics (CFD) simulations for the prediction of convective boiling and, eventually, of the Critical Heat Flux (CHF). As boiling is controlled by the competition of numerous phenomena at various length and time scales, a multi-scale approach is employed to tackle the problem at different scales. In the MSMA project, the scales on which we focus range from the CFD scale (macro-scale), through the bubble-size scale (meso-scale) and the liquid micro-layer and triple-interline scale (micro-scale), down to the molecular scale (nano-scale). The current focus of the project is on micro- and meso-scale modeling. The numerical framework comprises a highly efficient, parallel DNS solver, the PSI-BOIL code. The code incorporates an Immersed Boundary Method (IBM) to tackle complex geometries. For the simulation of meso-scales (bubbles), we use the Constrained Interpolation Profile method: Conservative Semi-Lagrangian 2nd order (CIP-CSL2). The phase change is described either by applying conventional jump conditions at the interface, or by using the Phase Field (PF) approach. In this work, we present selected results for flows in complex geometry using the IBM, selected bubbly flow simulations using the CIP-CSL2 method, and results for phase change using the PF approach. In the subsequent stage of the project, the importance of nano-scale processes for the global boiling heat transfer will be evaluated. To validate the models, more experimental information will be needed, so it is expected that the MSMA project will become the seed for a long-term, combined theoretical and experimental program.

  13. Properties, Mechanisms and Predictability of Eddies in the Red Sea

    KAUST Repository

    Zhan, Peng

    2018-04-01

Eddies are one of the key features of the Red Sea circulation. They are crucial not only for energy conversion among dynamics at different scales, but also for material transport across the basin. This thesis focuses on the characteristics of Red Sea eddies, including their temporal and spatial properties, their energy budget, the mechanisms of their evolution, and their predictability. Remote sensing data, in-situ observations, an oceanic general circulation model, and data assimilation techniques were employed in this thesis. The eddies in the Red Sea were first identified from altimeter data by applying an improved winding-angle method, from which the statistical properties of the eddies were derived. The results suggest that eddies occur more frequently in the central basin of the Red Sea and exhibit a significant seasonal variation. The mechanisms of eddy evolution, particularly the eddy kinetic energy budget, were then investigated based on the outputs of a long-term eddy-resolving numerical model configured for the Red Sea with realistic forcing. Examination of the energy budget revealed that the eddies acquire the vast majority of their kinetic energy through conversion of eddy available potential energy via baroclinic instability, which is intensified during winter. The factors modulating the behavior of several observed eddies in the Red Sea were then identified through a sensitivity analysis using the adjoint model. These eddies were found to exhibit different sensitivities to external forcings, suggesting different mechanisms for their evolution. This is the first known adjoint sensitivity study of specific eddy events in the Red Sea. The last chapter examines the predictability of Red Sea eddies using an ensemble-based forecasting and assimilation system. The forecast sea surface height was used to evaluate the overall performance of the short-term eddy
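The winding-angle idea behind the eddy-identification step above can be illustrated with a minimal sketch: sum the signed turning angles along a trajectory, and a path that closes a loop around an eddy centre accumulates roughly ±2π. The circular test contour below is a hypothetical stand-in; the thesis's improved method works on clustered streamlines from gridded altimeter fields:

```python
import math

def winding_angle(path):
    """Accumulated signed turning angle along a trajectory; a path that
    closes a loop around an eddy centre winds through roughly +/- 2*pi."""
    total = 0.0
    for i in range(1, len(path) - 1):
        (x0, y0), (x1, y1), (x2, y2) = path[i - 1], path[i], path[i + 1]
        turn = (math.atan2(y2 - y1, x2 - x1)
                - math.atan2(y1 - y0, x1 - x0))
        # Wrap the angle difference into (-pi, pi].
        while turn <= -math.pi:
            turn += 2 * math.pi
        while turn > math.pi:
            turn -= 2 * math.pi
        total += turn
    return total

# Points spaced 10 degrees apart along a closed circular contour.
circle = [(math.cos(k * math.pi / 18), math.sin(k * math.pi / 18))
          for k in range(38)]
print(round(winding_angle(circle) / (2 * math.pi), 2))  # -> 1.0
```

A nearly straight trajectory accumulates a winding angle near zero, which is how the criterion separates eddy-trapped paths from through-flow.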

  14. Predictable Medea

    Directory of Open Access Journals (Sweden)

    Elisabetta Bertolino

    2010-01-01

By focusing on the tragedy of the 'unpredictable' infanticide perpetrated by Medea, the paper speculates on the possibility of a non-violent ontological subjectivity for women victims of gendered violence, and on whether it is possible to respond to violent actions in non-violent ways; it argues that Medea did not act in an unpredictable way, but rather through the very predictable subject of resentment and violence. 'Medea' represents the story of all of us who require justice as retribution against any wrong. The presupposition is that the empowered female subjectivity of women's rights contains the same desire to master others as the current masculine legal and philosophical subject. The subject of women's rights is grounded in the emotions of resentment and retribution and refuses the categories of the private by appropriating those of the righteous, masculine and public subject. The essay opposes the essentialised stereotypes of the feminine and the maternal with an ontological approach to people as singular, corporeal, vulnerable and dependent. There is therefore an emphasis on the excluded categories of the private. Forgiveness is considered as a category of the private and as a possibility of responding to violence with newness. A violent act is seen in relation to the community of human beings rather than through an isolated setting, as in the case of the individual of human rights. In this context, forgiveness allows one to risk again and to 'be-with'. The result is also a rethinking of feminist actions, of feminine subjectivity and of the maternal. Overall, the paper opens up the Arendtian categories of action and forgiveness and the Cavarerian unique and corporeal ontology of selfhood beyond gendered stereotypes.

  15. The dynamical integrity concept for interpreting/ predicting experimental behaviour: from macro- to nano-mechanics.

    Science.gov (United States)

    Lenci, Stefano; Rega, Giuseppe; Ruzziconi, Laura

    2013-06-28

The dynamical integrity, a new concept proposed by J.M.T. Thompson and developed by the authors, is used to interpret experimental results. After reviewing the main issues involved in this analysis, including the proposal of a new integrity measure able to capture the safe part of the basins in a simple way, attention is devoted to two experiments, a rotating pendulum and a micro-electro-mechanical system, where the theoretical predictions are not fulfilled. These mechanical systems, the former at the macro-scale and the latter at the micro-scale, permit a comparative analysis of different mechanical and dynamical behaviours. The fact that in both cases dynamical integrity makes it possible to explain the difference between experimental and theoretical results, which is the main achievement of this paper, shows the effectiveness of this new approach and suggests its use in practical situations. The men of experiment are like the ant, they only collect and use; the reasoners resemble spiders, who make cobwebs out of their own substance. But the bee takes the middle course: it gathers its material from the flowers of the garden and field, but transforms and digests it by a power of its own. Not unlike this is the true business of philosophy (science); for it neither relies solely or chiefly on the powers of the mind, nor does it take the matter which it gathers from natural history and mechanical experiments and lay it up in the memory whole, as it finds it, but lays it up in the understanding altered and digested. Therefore, from a closer and purer league between these two faculties, the experimental and the rational (such as has never been made), much may be hoped. (Francis Bacon 1561-1626) But are we sure of our observational facts? Scientific men are rather fond of saying pontifically that one ought to be quite sure of one's observational facts before embarking on theory. Fortunately those who give this advice do not practice what they preach. Observation and theory get

  16. On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo

    2016-02-08

    , extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.

  17. Multi-Scale Analysis for Characterizing Near-Field Constituent Concentrations in the Context of a Macro-Scale Semi-Lagrangian Numerical Model

    Science.gov (United States)

    Yearsley, J. R.

    2017-12-01

The semi-Lagrangian numerical scheme employed by RBM, a model for simulating time-dependent, one-dimensional water quality constituents in advection-dominated rivers, is highly scalable both in time and space. Although the model has been used at length scales of 150 meters and time scales of three hours, the majority of applications have been at length scales of 1/16th degree latitude/longitude (about 5 km) or greater and time scales of one day. Applications of the method at these scales have proven successful for characterizing the impacts of climate change on water temperatures in global rivers and the vulnerability of thermoelectric power plants to changes in cooling water temperatures in large river systems. However, local effects can be very important in terms of ecosystem impacts, particularly in the case of developing mixing zones for wastewater discharges with pollutant loadings limited by regulations imposed under the Federal Water Pollution Control Act (FWPCA). Mixing zone analyses have usually been decoupled from large-scale watershed influences by developing scenarios that represent critical conditions for external processes associated with streamflow and weather. By taking advantage of the particle-tracking characteristics of the numerical scheme, RBM can provide results at any point in time within the model domain. We develop a proof of concept for locations in the river network where local impacts such as mixing zones may be important. Simulated results from the semi-Lagrangian numerical scheme are treated as input to a finite difference model of the two-dimensional diffusion equation for water quality constituents such as water temperature or toxic substances. Simulations provide time-dependent, two-dimensional constituent concentrations in the near field in response to long-term basin-wide processes. These results could provide decision support to water quality managers for evaluating mixing zone characteristics.
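The near-field step described above, a finite-difference model of the two-dimensional diffusion equation, can be sketched with an explicit (FTCS) scheme. The grid size, diffusivity, and zero far-field boundary value below are illustrative assumptions, not RBM's actual configuration:

```python
def diffuse(C, D, dx, dt, steps):
    """Explicit (FTCS) finite-difference integration of the 2-D diffusion
    equation dC/dt = D * (d2C/dx2 + d2C/dy2) on a square grid.
    Boundary cells are held fixed at the far-field concentration."""
    assert D * dt / dx ** 2 <= 0.25, "explicit-scheme stability limit"
    ny, nx = len(C), len(C[0])
    for _ in range(steps):
        new = [row[:] for row in C]
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                lap = (C[j][i + 1] + C[j][i - 1]
                       + C[j + 1][i] + C[j - 1][i] - 4 * C[j][i]) / dx ** 2
                new[j][i] = C[j][i] + D * dt * lap
        C = new
    return C

# Hypothetical mixing-zone setup: a slug of constituent released mid-channel,
# with boundary cells held at a zero far-field concentration.
n = 21
C = [[0.0] * n for _ in range(n)]
C[n // 2][n // 2] = 100.0
C = diffuse(C, D=1.0, dx=1.0, dt=0.2, steps=50)
print(round(C[n // 2][n // 2], 3))  # peak concentration after diffusion
```

In the coupled setting sketched in the abstract, the boundary values would be supplied by the basin-wide semi-Lagrangian simulation rather than held at zero; the stability check (D*dt/dx² ≤ 1/4 in 2-D) is what constrains the explicit time step.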

  18. Interconnecting Urban Planning with Multi-Scale Urban Quality : Review of Macro Scale Urban Redevelopment Project on Micro Scale Urban Quality in Shenzhen

    NARCIS (Netherlands)

    Deng, X.

    2015-01-01

    The Shenzhen planning system has been effective in promoting economic growth through the prodigious urbanization of land. It has given priority to the ‘macro-level’ planning goals of economic growth through physical development. Questions can be raised about the physical and social outcomes from the

  19. Remote sensing based evapotranspiration and runoff modeling of agricultural, forest and urban flux sites in Denmark: From field to macro-scale

    DEFF Research Database (Denmark)

    Bøgh, E.; Poulsen, R.N.; Butts, M.

    2009-01-01

    representing agricultural, forest and urban land surfaces in physically based hydrological modeling makes it possible to reproduce much of the observed variability (48–73%) in stream flow (Q − Qb) when data and modeling is applied at an effective spatial resolution capable of representing land surface...... variability in eddy covariance latent heat fluxes. The “effective” spatial resolution needed to adopt local-scale model parameters for spatial-deterministic hydrological modeling was assessed using a high-spatial resolution (30 m) variogram analysis of the NDVI. The use of the NDVI variogram to evaluate land...

  20. Macro-scale assessment of demographic and environmental variation within genetically derived evolutionary lineages of eastern hemlock (Tsuga canadensis), an imperiled conifer of the eastern United States

    Science.gov (United States)

    Anantha M. Prasad; Kevin M. Potter

    2017-01-01

    Eastern hemlock (Tsuga canadensis) occupies a large swath of eastern North America and has historically undergone range expansion and contraction resulting in several genetically separate lineages. This conifer is currently experiencing mortality across most of its range following infestation of a non-native insect. With the goal of better...

  1. Simulation and experimental determination of the macro-scale layer thickness distribution of electrodeposited Cu-line patterns on a wafer substrate

    DEFF Research Database (Denmark)

    Pantleon, Karen; Bossche, Bart van den; Purcar, Marius

    2005-01-01

The impact of adjacent patterned zones with different active area densities on the current density and electrodeposited layer thickness distribution over a wafer substrate is examined, both by experiment and numerical simulation. The experiments consist in running an acid copper plating process on the patterned wafer, followed by layer thickness measurements by means of X-ray fluorescence (XRF) and atomic force microscopy (AFM). The simulations are based on a potential model approach taking into account electrolyte ohmic drop and electrode polarization effects, combined with a boundary element method (BEM) approach to compute the current density distribution over the electrodes. Experimental and computed layer thickness distributions are in very good agreement.

  2. Making detailed predictions makes (some) predictions worse

    Science.gov (United States)

    Kelly, Theresa F.

    In this paper, we investigate whether making detailed predictions about an event makes other predictions worse. Across 19 experiments, 10,895 participants, and 415,960 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes information that is relatively useless for predicting the winning team more readily accessible in memory and therefore incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of games will and will not be susceptible to the negative effect of making detailed predictions.

  3. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  4. Formal derivation of a 6 equation macro scale model for two-phase flows - link with the 4 equation macro scale model implemented in Flica 4; Etablissement formel d'un modele diphasique macroscopique a 6 equations - lien avec le modele macroscopique a 4 equations flica 4

    Energy Technology Data Exchange (ETDEWEB)

    Gregoire, O

    2008-07-01

In order to simulate nuclear reactor cores, we presently use the 4-equation model implemented in the FLICA4 code. This model is complemented with 2 algebraic closures for the thermal disequilibrium and the relative velocity between phases. Using such closures requires a priori knowledge of the flows to be calculated, in order to ensure that the modelling assumptions apply. To improve the degree of universality of our macroscopic modelling, we propose in this report to derive a more general 6-equation model (balance equations for mass, momentum and enthalpy for each phase) for 2-phase flows. We apply the up-scaling procedure (Whitaker, 1999) classically used in porous media analysis to the statistically averaged equations (Aniel-Buchheit et al., 2003). In doing so, we apply the double-averaging procedure (Pedras and De Lemos, 2001; Pinson et al., 2006): statistical and spatial averages. Then, using weighted averages (analogous to Favre's average), we extend the spatial averaging concept to variable-density and 2-phase flows. This approach allows the structure of the systems of equations implemented in industrial codes to be fully recovered. Supplementary contributions, such as dispersion, are also highlighted. Mechanical and thermal exchanges between solids and fluid are formally derived. Finally, thanks to realistic simplifying assumptions, we show how the original 4-equation model can be derived from the full 6-equation model. (author)

  5. Predictability of tropical cyclone events on intraseasonal timescales with the ECMWF monthly forecast model

    Science.gov (United States)

    Elsberry, Russell L.; Jordan, Mary S.; Vitart, Frederic

    2010-05-01

    The objective of this study is to provide evidence of predictability on intraseasonal time scales (10-30 days) for western North Pacific tropical cyclone formation and subsequent tracks using the 51-member ECMWF 32-day forecasts made once a week from 5 June through 25 December 2008. Ensemble storms are defined by grouping ensemble member vortices whose positions are within a specified separation distance that is equal to 180 n mi at the initial forecast time t and increases linearly to 420 n mi at Day 14 and then is constant. The 12-h track segments are calculated with a Weighted-Mean Vector Motion technique in which the weighting factor is inversely proportional to the distance from the endpoint of the previous 12-h motion vector. Seventy-six percent of the ensemble storms had five or fewer member vortices. On average, the ensemble storms begin 2.5 days before the first entry of the Joint Typhoon Warning Center (JTWC) best-track file, tend to translate too slowly in the deep tropics, and persist for longer periods over land. A strict objective matching technique with the JTWC storms is combined with a second subjective procedure that is then applied to identify nearby ensemble storms that would indicate a greater likelihood of a tropical cyclone developing in that region with that track orientation. The ensemble storms identified in the ECMWF 32-day forecasts provided guidance on intraseasonal timescales of the formations and tracks of the three strongest typhoons and two other typhoons, but not for two early season typhoons and the late season Dolphin. Four strong tropical storms were predicted consistently over Week-1 through Week-4, as was one weak tropical storm. Two other weak tropical storms, three tropical cyclones that developed from precursor baroclinic systems, and three other tropical depressions were not predicted on intraseasonal timescales. At least for the strongest tropical cyclones during the peak season, the ECMWF 32-day ensemble provides
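The separation-distance rule quoted above (180 n mi at the initial forecast time, increasing linearly to 420 n mi at Day 14, then constant) lends itself to a short sketch. The greedy grouping logic and the plane-geometry distance below are simplified assumptions for illustration, not the study's exact ensemble-storm procedure:

```python
import math

def separation_threshold(t_days):
    """Separation distance (n mi) used to group ensemble-member vortices:
    180 n mi at the initial forecast time, increasing linearly to 420 n mi
    at Day 14, constant thereafter (per the abstract)."""
    if t_days >= 14:
        return 420.0
    return 180.0 + (420.0 - 180.0) * t_days / 14.0

def group_vortices(positions, t_days, dist):
    """Greedy grouping: a vortex joins the first group whose seed lies
    within the time-dependent separation threshold."""
    thr = separation_threshold(t_days)
    groups = []
    for p in positions:
        for g in groups:
            if dist(p, g[0]) <= thr:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Illustrative run with plane-geometry distance in n mi; real use would need
# great-circle distances on the globe.
d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
pts = [(0, 0), (100, 0), (1000, 0)]
print(len(group_vortices(pts, 0.0, d)))  # two ensemble storms at t = 0
```

The widening threshold reflects growing ensemble track spread with lead time: member vortices that would count as distinct storms at the initial time are merged into one ensemble storm two weeks into the forecast.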

  6. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  7. A Multiphysics Framework to Learn and Predict in Presence of Multiple Scales

    Science.gov (United States)

    Tomin, P.; Lunati, I.

    2015-12-01

Modeling complex phenomena in the subsurface remains challenging due to the presence of multiple interacting scales, which can make it impossible to focus on purely macroscopic phenomena (relevant in most applications) and neglect the processes at the micro-scale. We present and discuss a general framework that allows us to deal with situations in which the lack of scale separation requires the combined use of different descriptions at different scales (for instance, a pore-scale description at the micro-scale and a Darcy-like description at the macro-scale) [1,2]. The method is based on conservation principles and constructs the macro-scale problem by numerical averaging of the micro-scale balance equations. By employing spatiotemporal adaptive strategies, this approach can efficiently solve large-scale problems [2,3]. In addition, being based on a numerical volume-averaging paradigm, it offers a tool to illuminate how macroscopic equations emerge from microscopic processes, to better understand the meaning of microscopic quantities, and to investigate the validity of the assumptions routinely used to construct macro-scale problems. [1] Tomin, P., and I. Lunati, A Hybrid Multiscale Method for Two-Phase Flow in Porous Media, Journal of Computational Physics, 250, 293-307, 2013. [2] Tomin, P., and I. Lunati, Local-global splitting and spatiotemporal-adaptive Multiscale Finite Volume Method, Journal of Computational Physics, 280, 214-231, 2015. [3] Tomin, P., and I. Lunati, Spatiotemporal adaptive multiphysics simulations of drainage-imbibition cycles, Computational Geosciences, 2015 (under review).
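The numerical-averaging step that builds the macro-scale problem from micro-scale fields can be illustrated by simple block (volume) averaging. This is a schematic of the averaging operation only, under assumed grid sizes, not an implementation of the hybrid multiscale method itself:

```python
def block_average(fine, b):
    """Volume-average a fine (micro-scale) 2-D field over b-by-b blocks to
    obtain the macro-scale field: the basic operation behind numerical
    volume averaging."""
    n = len(fine)
    assert n % b == 0, "grid must tile evenly into blocks"
    m = n // b
    coarse = [[0.0] * m for _ in range(m)]
    for J in range(m):
        for I in range(m):
            total = sum(fine[J * b + j][I * b + i]
                        for j in range(b) for i in range(b))
            coarse[J][I] = total / (b * b)
    return coarse

# A 4x4 micro-scale field averaged into a 2x2 macro-scale field.
fine = [[float(i + j) for i in range(4)] for j in range(4)]
print(block_average(fine, 2))  # -> [[1.0, 3.0], [3.0, 5.0]]
```

Because the averaging is conservative (each coarse value is the mean of the fine values it covers), the coarse field preserves the total of the fine field, which is the property the framework above relies on when it assembles macro-scale balance equations.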

  8. Applied predictive control

    CERN Document Server

    Sunan, Huang; Heng, Lee Tong

    2002-01-01

    The presence of considerable time delays in the dynamics of many industrial processes, leading to difficult problems in the associated closed-loop control systems, is a well-recognized phenomenon. The performance achievable in conventional feedback control systems can be significantly degraded if an industrial process has a relatively large time delay compared with the dominant time constant. Under these circumstances, advanced predictive control is necessary to improve the performance of the control system significantly. The book is a focused treatment of the subject matter, including the fundamentals and some state-of-the-art developments in the field of predictive control. Three main schemes for advanced predictive control are addressed in this book: • Smith Predictive Control; • Generalised Predictive Control; • a form of predictive control based on Finite Spectrum Assignment. A substantial part of the book addresses application issues in predictive control, providing several interesting case studie...

  9. Predictable or not predictable? The MOV question

    International Nuclear Information System (INIS)

    Thibault, C.L.; Matzkiw, J.N.; Anderson, J.W.; Kessler, D.W.

    1994-01-01

Over the past 8 years, the nuclear industry has struggled to understand the dynamic phenomena experienced during motor-operated valve (MOV) operation under differing flow conditions. For some valves and designs, operational functionality has been found to be predictable; for others, unpredictable. Although much has been accomplished over this period, especially on modeling valve dynamics, the unpredictability of many valves and designs persists. A few valve manufacturers are focusing on improving design and fabrication techniques to enhance product reliability and predictability. However, this approach does not address these issues for valves that are already installed and unpredictable. This paper presents some of the more promising techniques that Wyle Laboratories has explored with the potential for transforming unpredictable valves into predictable ones and for retrofitting installed MOVs. These techniques include optimized valve tolerancing, surrogate material evaluation, and enhanced surface treatments.

  10. Predictive systems ecology.

    Science.gov (United States)

    Evans, Matthew R; Bithell, Mike; Cornell, Stephen J; Dall, Sasha R X; Díaz, Sandra; Emmott, Stephen; Ernande, Bruno; Grimm, Volker; Hodgson, David J; Lewis, Simon L; Mace, Georgina M; Morecroft, Michael; Moustakas, Aristides; Murphy, Eugene; Newbold, Tim; Norris, K J; Petchey, Owen; Smith, Matthew; Travis, Justin M J; Benton, Tim G

    2013-11-22

    Human societies, and their well-being, depend to a significant extent on the state of the ecosystems that surround them. These ecosystems are changing rapidly, usually in response to anthropogenic changes in the environment. To determine the likely impact of environmental change on ecosystems and the best ways to manage them, it would be desirable to be able to predict their future states. We present a proposal to develop the paradigm of predictive systems ecology, explicitly to understand and predict the properties and behaviour of ecological systems. We discuss the necessary and desirable features of predictive systems ecology models. There are places where predictive systems ecology is already being practised and we summarize a range of terrestrial and marine examples. Significant challenges remain but we suggest that ecology would benefit both as a scientific discipline and increase its impact in society if it were to embrace the need to become more predictive.

  11. Seismology for rockburst prediction.

    CSIR Research Space (South Africa)

    De Beer, W

    2000-02-01

    Full Text Available project GAP409 presents a method (SOOTHSAY) for predicting larger mining induced seismic events in gold mines, as well as a pattern recognition algorithm (INDICATOR) for characterising the seismic response of rock to mining and inferring future... State. Defining the time series of a specific function on a catalogue as a prediction strategy, the algorithm currently has a success rate of 53% and 65%, respectively, of large events claimed as being predicted in these two cases, with uncertainties...

  12. Predictability of Conversation Partners

    Science.gov (United States)

    Takaguchi, Taro; Nakamura, Mitsuhiro; Sato, Nobuo; Yano, Kazuo; Masuda, Naoki

    2011-08-01

    Recent developments in sensing technologies have enabled us to examine the nature of human social behavior in greater detail. By applying an information-theoretic method to the spatiotemporal data of cell-phone locations, [C. Song et al., Science 327, 1018 (2010)] found that human mobility patterns are remarkably predictable. Inspired by their work, we address a similar predictability question in a different kind of human social activity: conversation events. The predictability in the sequence of one’s conversation partners is defined as the degree to which one’s next conversation partner can be predicted given the current partner. We quantify this predictability by using the mutual information. We examine the predictability of conversation events for each individual using the longitudinal data of face-to-face interactions collected from two company offices in Japan. Each subject wears a name tag equipped with an infrared sensor node, and conversation events are marked when signals are exchanged between sensor nodes in close proximity. We find that the conversation events are predictable to a certain extent; knowing the current partner decreases the uncertainty about the next partner by 28.4% on average. Much of the predictability is explained by long-tailed distributions of interevent intervals. However, a predictability also exists in the data, apart from the contribution of their long-tailed nature. In addition, an individual’s predictability is correlated with the position of the individual in the static social network derived from the data. Individuals confined in a community—in the sense of an abundance of surrounding triangles—tend to have low predictability, and those bridging different communities tend to have high predictability.
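
    The record's central quantity can be made concrete. The snippet below is an illustrative reconstruction, not the authors' code: it estimates the mutual information between consecutive items in a partner sequence, i.e., how many bits of uncertainty about the next partner are removed by knowing the current one.

```python
from collections import Counter
from math import log2

def mutual_information(sequence):
    """Mutual information (bits) between consecutive items of a sequence."""
    pairs = list(zip(sequence, sequence[1:]))
    n = len(pairs)
    joint = Counter(pairs)                  # counts of (current, next)
    current = Counter(x for x, _ in pairs)  # counts of current partner
    nxt = Counter(y for _, y in pairs)      # counts of next partner
    mi = 0.0
    for (x, y), c in joint.items():
        # c*n / (current[x]*nxt[y]) equals P(x,y) / (P(x) P(y))
        mi += (c / n) * log2(c * n / (current[x] * nxt[y]))
    return mi

# A strictly alternating partner sequence: the next partner is fully
# determined by the current one, so the MI equals the full 1 bit of entropy.
print(mutual_information(["A", "B"] * 4 + ["A"]))  # 1.0
```

    On real interaction logs, the 28.4% uncertainty reduction quoted in the abstract would roughly correspond to this MI expressed as a fraction of the entropy of the partner distribution.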

  13. Predictability of Conversation Partners

    Directory of Open Access Journals (Sweden)

    Taro Takaguchi

    2011-09-01

    Full Text Available Recent developments in sensing technologies have enabled us to examine the nature of human social behavior in greater detail. By applying an information-theoretic method to the spatiotemporal data of cell-phone locations, [C. Song et al., Science 327, 1018 (2010)] found that human mobility patterns are remarkably predictable. Inspired by their work, we address a similar predictability question in a different kind of human social activity: conversation events. The predictability in the sequence of one’s conversation partners is defined as the degree to which one’s next conversation partner can be predicted given the current partner. We quantify this predictability by using the mutual information. We examine the predictability of conversation events for each individual using the longitudinal data of face-to-face interactions collected from two company offices in Japan. Each subject wears a name tag equipped with an infrared sensor node, and conversation events are marked when signals are exchanged between sensor nodes in close proximity. We find that the conversation events are predictable to a certain extent; knowing the current partner decreases the uncertainty about the next partner by 28.4% on average. Much of the predictability is explained by long-tailed distributions of interevent intervals. However, a predictability also exists in the data, apart from the contribution of their long-tailed nature. In addition, an individual’s predictability is correlated with the position of the individual in the static social network derived from the data. Individuals confined in a community—in the sense of an abundance of surrounding triangles—tend to have low predictability, and those bridging different communities tend to have high predictability.

  14. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound on the worst-case execution time. To compare different approaches we would like to quantify time predictability; that means we need to measure it. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  15. Predicting scholars' scientific impact.

    Directory of Open Access Journals (Sweden)

    Amin Mazloumian

    Full Text Available We tested the underlying assumption that citation counts are reliable predictors of future success, analyzing complete citation data on the careers of ~150,000 scientists. Our results show that (i) among all citation indicators, the annual citations at the time of prediction is the best predictor of future citations, (ii) future citations of a scientist's published papers can be predicted accurately (r² = 0.80 for a 1-year prediction, P<0.001), but (iii) future citations of future work are hardly predictable.

  16. The Prediction Value

    NARCIS (Netherlands)

    Koster, M.; Kurz, S.; Lindner, I.; Napel, S.

    2013-01-01

    We introduce the prediction value (PV) as a measure of players’ informational importance in probabilistic TU games. The latter combine a standard TU game and a probability distribution over the set of coalitions. Player i’s prediction value equals the difference between the conditional expectations

  17. Predictability of Stock Returns

    Directory of Open Access Journals (Sweden)

    Ahmet Sekreter

    2017-06-01

    Full Text Available Predictability of stock returns has been shown by empirical studies over time. This article collects the most important theories on forecasting stock returns and investigates the factors affecting the behavior of stock prices and the market as a whole. Estimating these factors, and the way they are estimated, are the key issues in the predictability of stock returns.

  18. Predicting AD conversion

    DEFF Research Database (Denmark)

    Liu, Yawu; Mattila, Jussi; Ruiz, Miguel Ángel Muñoz

    2013-01-01

    To compare the accuracies of predicting AD conversion by using a decision support system (PredictAD tool) and current research criteria of prodromal AD as identified by combinations of episodic memory impairment of hippocampal type and visual assessment of medial temporal lobe atrophy (MTA) on MRI...

  19. Predicting Free Recalls

    Science.gov (United States)

    Laming, Donald

    2006-01-01

    This article reports some calculations on free-recall data from B. Murdock and J. Metcalfe (1978), with vocal rehearsal during the presentation of a list. Given the sequence of vocalizations, with the stimuli inserted in their proper places, it is possible to predict the subsequent sequence of recalls--the predictions taking the form of a…

  20. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  1. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
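
    The core idea above, comparing the prediction distribution with conditional prediction distributions to find the dominant sources of uncertainty, can be illustrated with a toy Monte Carlo version. This is a sketch under assumed inputs, not McKay's replicated Latin hypercube procedure: the fraction of output variance explained by the conditional mean given one input serves as an importance indicator.

```python
import random
random.seed(1)

def model(x1, x2):
    # Hypothetical model: the output is dominated by the first input.
    return 3.0 * x1 + 0.5 * x2

N = 20000
x1s = [random.uniform(0, 1) for _ in range(N)]
x2s = [random.uniform(0, 1) for _ in range(N)]
ys = [model(a, b) for a, b in zip(x1s, x2s)]

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def importance(xs, ys, bins=10):
    """Fraction of Var(Y) explained by the conditional mean E[Y | X in bin]."""
    pairs = sorted(zip(xs, ys))
    size = len(pairs) // bins
    cond_means = [sum(y for _, y in pairs[i * size:(i + 1) * size]) / size
                  for i in range(bins)]
    return variance(cond_means) / variance(ys)

print(f"x1 importance: {importance(x1s, ys):.2f}")  # ~0.96: dominant cause
print(f"x2 importance: {importance(x2s, ys):.2f}")  # ~0.03: minor cause
```

    As in the abstract, no linearity assumption is needed: the indicator only compares conditional and unconditional prediction distributions.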

  2. Ground motion predictions

    Energy Technology Data Exchange (ETDEWEB)

    Loux, P C [Environmental Research Corporation, Alexandria, VA (United States)

    1969-07-01

    Nuclear generated ground motion is defined and then related to the physical parameters that cause it. Techniques employed for prediction of ground motion peak amplitude, frequency spectra and response spectra are explored, with initial emphasis on the analysis of data collected at the Nevada Test Site (NTS). NTS postshot measurements are compared with pre-shot predictions. Applicability of these techniques to new areas, for example, Plowshare sites, must be questioned. Fortunately, the Atomic Energy Commission is sponsoring complementary studies to improve prediction capabilities primarily in new locations outside the NTS region. Some of these are discussed in the light of anomalous seismic behavior, and comparisons are given showing theoretical versus experimental results. In conclusion, current ground motion prediction techniques are applied to events off the NTS. Predictions are compared with measurements for the event Faultless and for the Plowshare events, Gasbuggy, Cabriolet, and Buggy I. (author)

  3. Ground motion predictions

    International Nuclear Information System (INIS)

    Loux, P.C.

    1969-01-01

    Nuclear generated ground motion is defined and then related to the physical parameters that cause it. Techniques employed for prediction of ground motion peak amplitude, frequency spectra and response spectra are explored, with initial emphasis on the analysis of data collected at the Nevada Test Site (NTS). NTS postshot measurements are compared with pre-shot predictions. Applicability of these techniques to new areas, for example, Plowshare sites, must be questioned. Fortunately, the Atomic Energy Commission is sponsoring complementary studies to improve prediction capabilities primarily in new locations outside the NTS region. Some of these are discussed in the light of anomalous seismic behavior, and comparisons are given showing theoretical versus experimental results. In conclusion, current ground motion prediction techniques are applied to events off the NTS. Predictions are compared with measurements for the event Faultless and for the Plowshare events, Gasbuggy, Cabriolet, and Buggy I. (author)

  4. Structural prediction in aphasia

    Directory of Open Access Journals (Sweden)

    Tessa Warren

    2015-05-01

    Full Text Available There is considerable evidence that young healthy comprehenders predict the structure of upcoming material, and that their processing is facilitated when they encounter material matching those predictions (e.g., Staub & Clifton, 2006; Yoshida, Dickey & Sturt, 2013). However, less is known about structural prediction in aphasia. There is evidence that lexical prediction may be spared in aphasia (Dickey et al., 2014; Love & Webb, 1977; cf. Mack et al., 2013). However, predictive mechanisms supporting facilitated lexical access may not necessarily support structural facilitation. Given that many people with aphasia (PWA) exhibit syntactic deficits (e.g., Goodglass, 1993), PWA with such impairments may not engage in structural prediction. However, recent evidence suggests that some PWA may indeed predict upcoming structure (Hanne, Burchert, De Bleser, & Vasishth, 2015). Hanne et al. tracked the eyes of PWA (n=8) with sentence-comprehension deficits while they listened to reversible subject-verb-object (SVO) and object-verb-subject (OVS) sentences in German, in a sentence-picture matching task. Hanne et al. manipulated case and number marking to disambiguate the sentences’ structure. Gazes to an OVS or SVO picture during the unfolding of a sentence were assumed to indicate prediction of the structure congruent with that picture. According to this measure, the PWA’s structural prediction was impaired compared to controls, but they did successfully predict upcoming structure when morphosyntactic cues were strong and unambiguous. Hanne et al.’s visual-world evidence is suggestive, but their forced-choice sentence-picture matching task places tight constraints on possible structural predictions. Clearer evidence of structural prediction would come from paradigms where the content of upcoming material is not as constrained. The current study used a self-paced reading study to examine structural prediction among PWA in less constrained contexts. PWA (n=17 who

  5. Prediction of bull fertility.

    Science.gov (United States)

    Utt, Matthew D

    2016-06-01

    Prediction of male fertility is an often sought-after endeavor for many species of domestic animals. This review will primarily focus on providing some examples of dependent and independent variables to stimulate thought about the approach and methodology of identifying the most appropriate of those variables to predict bull (bovine) fertility. Although the list of variables will continue to grow with advancements in science, the principles behind making predictions will likely not change significantly. The basic principle of prediction requires identifying a dependent variable that is an estimate of fertility and an independent variable or variables that may be useful in predicting the fertility estimate. Fertility estimates vary in which parts of the process leading to conception they infer about and in the amount of variation that influences the estimate and the uncertainty thereof. The list of potential independent variables can be divided into competence of sperm based on their performance in bioassays or direct measurement of sperm attributes. A good prediction will use a sample population of bulls that is representative of the population to which an inference will be made. Both dependent and independent variables should have a dynamic range in their values. Careful selection of independent variables includes reasonable measurement repeatability and minimal correlation among variables. Proper estimation and having an appreciation of the degree of uncertainty of dependent and independent variables are crucial for using predictions to make decisions regarding bull fertility. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  7. Prediction ranges. Annual review

    Energy Technology Data Exchange (ETDEWEB)

    Parker, J.C.; Tharp, W.H.; Spiro, P.S.; Keng, K.; Angastiniotis, M.; Hachey, L.T.

    1988-01-01

    Prediction ranges equip the planner with one more tool for improved assessment of the outcome of a course of action. One of their major uses is in financial evaluations, where corporate policy requires the performance of uncertainty analysis for large projects. This report gives an overview of the uses of prediction ranges, with examples; and risks and uncertainties in growth, inflation, and interest and exchange rates. Prediction ranges and standard deviations of 80% and 50% probability are given for various economic indicators in Ontario, Canada, and the USA, as well as for foreign exchange rates and Ontario Hydro interest rates. An explanatory note on probability is also included. 23 tabs.

  8. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
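
    The interim simulation model described above, which draws uncorrelated hourly wind-speed samples reproducing a site's statistical distribution, might be sketched as follows. The Weibull form and its parameters are assumptions for illustration only; the actual model reproduces the measured Goldstone distribution.

```python
import random
random.seed(42)

# Assumed Weibull parameters (scale c in m/s, shape k); the report would
# instead fit site statistics from Goldstone-area wind records.
SCALE_C, SHAPE_K = 6.0, 2.0

def hourly_wind_speed():
    """One uncorrelated hourly wind-speed sample, interim-model style."""
    return random.weibullvariate(SCALE_C, SHAPE_K)

def wind_power_density(v, rho=1.225):
    """Available wind power per unit swept area (W/m^2): 0.5 * rho * v^3."""
    return 0.5 * rho * v ** 3

day = [hourly_wind_speed() for _ in range(24)]
print(f"mean speed over 24 simulated hours: {sum(day) / 24:.1f} m/s")
```

    A stochastic model with hour-to-hour correlation, as mentioned in the abstract, would replace the independent draws with an autocorrelated process fitted to the same marginal distribution.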

  9. Protein Sorting Prediction

    DEFF Research Database (Denmark)

    Nielsen, Henrik

    2017-01-01

    Many computational methods are available for predicting protein sorting in bacteria. When comparing them, it is important to know that they can be grouped into three fundamentally different approaches: signal-based, global-property-based and homology-based prediction. In this chapter, the strengths and drawbacks of each of these approaches are described through many examples of methods that predict secretion, integration into membranes, or subcellular locations in general. The aim of this chapter is to provide a user-level introduction to the field with a minimum of computational theory.

  10. 'Red Flag' Predictions

    DEFF Research Database (Denmark)

    Hallin, Carina Antonia; Andersen, Torben Juul; Tveterås, Sigbjørn

    This conceptual article introduces a new way to predict firm performance based on aggregation of sensing among frontline employees about changes in operational capabilities to update strategic action plans and generate innovations. We frame the approach in the context of first- and second-generation prediction markets and outline its unique features as a third-generation prediction market. It is argued that frontline employees gain deep insights when they execute operational activities on an ongoing basis in the organization. The experiential learning from close interaction with internal and external...

  11. Towards Predictive Association Theories

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Tsivintzelis, Ioannis; Michelsen, Michael Locht

    2011-01-01

    Association equations of state like SAFT, CPA and NRHB have been previously applied to many complex mixtures. In this work we focus on two of these models, the CPA and the NRHB equations of state, and the emphasis is on the analysis of their predictive capabilities for a wide range of applications. We use the term predictive in two situations: (i) with no use of binary interaction parameters, and (ii) multicomponent calculations using binary interaction parameters based solely on binary data. It is shown that the CPA equation of state can satisfactorily predict CO2–water–glycols–alkanes VLE...

  12. Prediction of intermetallic compounds

    International Nuclear Information System (INIS)

    Burkhanov, Gennady S; Kiselyova, N N

    2009-01-01

    The problems of predicting not yet synthesized intermetallic compounds are discussed. It is noted that the use of classical physicochemical analysis in the study of multicomponent metallic systems is faced with the complexity of presenting multidimensional phase diagrams. One way of predicting new intermetallics with specified properties is the use of modern computational methods applying computer-based pattern recognition. The algorithms most often used in these methods are briefly considered and the efficiency of their use for predicting new compounds is demonstrated.

  13. Filtering and prediction

    CERN Document Server

    Fristedt, B; Krylov, N

    2007-01-01

    Filtering and prediction is about observing moving objects when the observations are corrupted by random errors. The main focus is then on filtering out the errors and extracting from the observations the most precise information about the object, which itself may or may not be moving in a somewhat random fashion. Next comes the prediction step where, using information about the past behavior of the object, one tries to predict its future path. The first three chapters of the book deal with discrete probability spaces, random variables, conditioning, Markov chains, and filtering of discrete Markov chains. The next three chapters deal with the more sophisticated notions of conditioning in nondiscrete situations, filtering of continuous-space Markov chains, and of Wiener process. Filtering and prediction of stationary sequences is discussed in the last two chapters. The authors believe that they have succeeded in presenting necessary ideas in an elementary manner without sacrificing the rigor too much. Such rig...

  14. CMAQ predicted concentration files

    Data.gov (United States)

    U.S. Environmental Protection Agency — CMAQ predicted ozone. This dataset is associated with the following publication: Gantt, B., G. Sarwar, J. Xing, H. Simon, D. Schwede, B. Hutzell, R. Mathur, and A....

  15. Methane prediction in collieries

    CSIR Research Space (South Africa)

    Creedy, DP

    1999-06-01

    Full Text Available The primary aim of the project was to assess the current status of research on methane emission prediction for collieries in South Africa in comparison with methods used and advances achieved elsewhere in the world....

  16. Climate Prediction Center - Outlooks

    Science.gov (United States)

    Web resources and services from the NWS Climate Prediction Center, including the Climate Diagnostics Bulletin (Tropics and Forecast sections).

  17. CMAQ predicted concentration files

    Data.gov (United States)

    U.S. Environmental Protection Agency — model predicted concentrations. This dataset is associated with the following publication: Muñiz-Unamunzaga, M., R. Borge, G. Sarwar, B. Gantt, D. de la Paz, C....

  18. Comparing Spatial Predictions

    KAUST Repository

    Hering, Amanda S.; Genton, Marc G.

    2011-01-01

    Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial predictions produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis

  19. Genomic prediction using subsampling

    OpenAIRE

    Xavier, Alencar; Xu, Shizhong; Muir, William; Rainey, Katy Martin

    2017-01-01

    Background Genome-wide assisted selection is a critical tool for the genetic improvement of plants and animals. Whole-genome regression models in Bayesian framework represent the main family of prediction methods. Fitting such models with a large number of observations involves a prohibitive computational burden. We propose the use of subsampling bootstrap Markov chain in genomic prediction. This method consists of fitting whole-genome regression models by subsampling observations in each rou...

  20. Predicting Online Purchasing Behavior

    OpenAIRE

    W.R BUCKINX; D. VAN DEN POEL

    2003-01-01

    This empirical study investigates the contribution of different types of predictors to the purchasing behaviour at an online store. We use logit modelling to predict whether or not a purchase is made during the next visit to the website using both forward and backward variable-selection techniques, as well as Furnival and Wilson’s global score search algorithm to find the best subset of predictors. We contribute to the literature by using variables from four different categories in predicting...
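
    A minimal version of the logit modelling step can be sketched as follows. The clickstream features and their values are hypothetical, and plain gradient descent stands in for the paper's variable-selection machinery (forward/backward selection and the Furnival–Wilson score search).

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logit(rows, labels, lr=0.1, epochs=2000):
    """Stochastic-gradient logistic regression; w[0] is the bias term."""
    w = [0.0] * (len(rows[0]) + 1)
    for _ in range(epochs):
        for x, t in zip(rows, labels):
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
            err = t - p  # gradient of the log-likelihood w.r.t. the logit
            w[0] += lr * err
            for i, xi in enumerate(x):
                w[i + 1] += lr * err * xi
    return w

# Hypothetical visit features: [pages viewed / 10, minutes on site / 10]
visits = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7], [0.3, 0.2], [0.7, 0.8]]
bought = [0, 0, 1, 1, 0, 1]
w = train_logit(visits, bought)

# Score a new visit: probability a purchase is made during this visit.
p_next = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], [0.85, 0.8])))
print(f"purchase probability: {p_next:.2f}")
```

    In the study itself, candidate predictors from the four categories would be screened by the selection algorithms before fitting the final logit model.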

  1. Empirical Flutter Prediction Method.

    Science.gov (United States)

    1988-03-05

    been used in this way to discover species or subspecies of animals, and to discover different types of voter or consumer requiring different persuasions...respect to behavior or performance or response variables. Once this was done, corresponding clusters might be sought among descriptive or predictive or...jump in a response. The first sort of usage does not apply to the flutter prediction problem. Here the types of behavior are the different kinds of

  2. Stuck pipe prediction

    KAUST Repository

    Alzahrani, Majed

    2016-03-10

    Disclosed are various embodiments for a prediction application to predict a stuck pipe. A linear regression model is generated from hook load readings at corresponding bit depths. A current hook load reading at a current bit depth is compared with a normal hook load reading from the linear regression model. A current hook load greater than a normal hook load for a given bit depth indicates the likelihood of a stuck pipe.
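
    The claimed comparison can be sketched in a few lines. All readings below are hypothetical, and the 10%-above-trend threshold is an assumed tolerance for illustration, not a figure from the disclosure.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical historical readings: hook load (klbf) at each bit depth (ft)
depths = [1000, 2000, 3000, 4000, 5000]
loads = [120, 140, 160, 180, 200]
a, b = fit_line(depths, loads)

def stuck_pipe_warning(depth, load, threshold=1.10):
    """Flag when the current hook load exceeds the model's normal load by >10%."""
    normal = a + b * depth
    return load > threshold * normal

print(stuck_pipe_warning(6000, 222))  # near the fitted trend -> False
print(stuck_pipe_warning(6000, 260))  # well above the trend  -> True
```

    In practice the regression would be refit as new readings arrive, so "normal" tracks the current well rather than a fixed history.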

  3. Stuck pipe prediction

    KAUST Repository

    Alzahrani, Majed; Alsolami, Fawaz; Chikalov, Igor; Algharbi, Salem; Aboudi, Faisal; Khudiri, Musab

    2016-01-01

    Disclosed are various embodiments for a prediction application to predict a stuck pipe. A linear regression model is generated from hook load readings at corresponding bit depths. A current hook load reading at a current bit depth is compared with a normal hook load reading from the linear regression model. A current hook load greater than a normal hook load for a given bit depth indicates the likelihood of a stuck pipe.

  4. Genomic prediction using subsampling.

    Science.gov (United States)

    Xavier, Alencar; Xu, Shizhong; Muir, William; Rainey, Katy Martin

    2017-03-24

    Genome-wide assisted selection is a critical tool for the genetic improvement of plants and animals. Whole-genome regression models in Bayesian framework represent the main family of prediction methods. Fitting such models with a large number of observations involves a prohibitive computational burden. We propose the use of subsampling bootstrap Markov chain in genomic prediction. This method consists of fitting whole-genome regression models by subsampling observations in each round of a Markov Chain Monte Carlo. We evaluated the effect of subsampling bootstrap on prediction and computational parameters. Across datasets, we observed an optimal subsampling proportion of observations around 50% with replacement, and around 33% without replacement. Subsampling provided a substantial decrease in computation time, reducing the time to fit the model by half. On average, losses on predictive properties imposed by subsampling were negligible, usually below 1%. For each dataset, an optimal subsampling point that improves prediction properties was observed, but the improvements were also negligible. Combining subsampling with Gibbs sampling is an interesting ensemble algorithm. The investigation indicates that the subsampling bootstrap Markov chain algorithm substantially reduces computational burden associated with model fitting, and it may slightly enhance prediction properties.
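
    A much-simplified sketch of the idea follows: instead of a full Gibbs sampler, each round fits a closed-form ridge regression to a 50%-with-replacement subsample and the rounds are averaged. All data are simulated and the two-marker model is a toy; this illustrates subsampling across rounds, not the authors' whole-genome Bayesian implementation.

```python
import random
random.seed(0)

# Simulated data: phenotype y depends on two markers coded 0/1/2.
n = 200
X = [[random.choice([0, 1, 2]) for _ in range(2)] for _ in range(n)]
true_b = [1.5, -0.8]
y = [sum(b * x for b, x in zip(true_b, row)) + random.gauss(0, 0.5) for row in X]

def ridge_fit(rows, ys, lam=1.0):
    """Closed-form ridge solution for 2 predictors (2x2 normal equations)."""
    xtx = [[lam, 0.0], [0.0, lam]]
    xty = [0.0, 0.0]
    for row, yy in zip(rows, ys):
        for i in range(2):
            xty[i] += row[i] * yy
            for j in range(2):
                xtx[i][j] += row[i] * row[j]
    det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
    return [(xtx[1][1] * xty[0] - xtx[0][1] * xty[1]) / det,
            (xtx[0][0] * xty[1] - xtx[1][0] * xty[0]) / det]

def subsampled_estimate(rounds=100, frac=0.5):
    """Average marker-effect estimates over rounds, each fit on a subsample."""
    acc = [0.0, 0.0]
    for _ in range(rounds):
        idx = [random.randrange(n) for _ in range(int(frac * n))]  # with replacement
        est = ridge_fit([X[i] for i in idx], [y[i] for i in idx])
        acc = [a + e for a, e in zip(acc, est)]
    return [a / rounds for a in acc]

print(subsampled_estimate())  # close to the simulated effects [1.5, -0.8]
```

    Each round touches only half the observations, which is where the reported halving of model-fitting time comes from; the averaging across rounds recovers the full-data estimate.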

  5. Deep Visual Attention Prediction

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing

    2018-05-01

    In this work, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvements to human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various reception fields. Final saliency prediction is achieved via the cooperation of those global and local predictions. Our model is learned in a deep supervision manner, where supervision is directly fed into multi-level layers, instead of previous approaches of providing supervision only at the output layer and propagating this supervision back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches of learning multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.

  6. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  7. Transionospheric propagation predictions

    Science.gov (United States)

    Klobucher, J. A.; Basu, S.; Basu, S.; Bernhardt, P. A.; Davies, K.; Donatelli, D. E.; Fremouw, E. J.; Goodman, J. M.; Hartmann, G. K.; Leitinger, R.

    1979-01-01

    The current status and future prospects of the capability to make transionospheric propagation predictions are addressed, highlighting the effects of the ionized media, which dominate for frequencies below 1 to 3 GHz, depending upon the state of the ionosphere and the elevation angle through the Earth-space path. The primary concerns are the predictions of time delay of signal modulation (group path delay) and of radio wave scintillation. Progress in these areas is strongly tied to knowledge of variable structures in the ionosphere ranging from the large scale (thousands of kilometers in horizontal extent) to the fine scale (kilometer size). Ionospheric variability and the relative importance of various mechanisms responsible for the time histories observed in total electron content (TEC), proportional to signal group delay, and in irregularity formation are discussed in terms of capability to make both short and long term predictions. The data base upon which predictions are made is examined for its adequacy, and the prospects for prediction improvements by more theoretical studies as well as by increasing the available statistical data base are examined.
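    The proportionality between signal group delay and TEC mentioned above follows the standard first-order relation delta_t = 40.3 * TEC / (c * f^2), which also shows why ionospheric effects dominate below a few GHz; a minimal sketch (the 100 TECU example value is illustrative):

```python
# First-order ionospheric group delay: scales with TEC and with 1/f^2.
C = 299792458.0  # speed of light, m/s

def group_delay_s(tec_el_per_m2, freq_hz):
    """Excess group delay (seconds) for a signal of frequency freq_hz
    traversing a column of tec_el_per_m2 electrons/m^2."""
    return 40.3 * tec_el_per_m2 / (C * freq_hz ** 2)

# 1 TEC unit = 1e16 el/m^2; at L band (~1.575 GHz), 100 TECU
# gives a delay on the order of tens of nanoseconds.
delay = group_delay_s(100 * 1e16, 1.57542e9)
```

The 1/f^2 dependence means doubling the frequency cuts the delay by a factor of four.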

  8. Predictable grammatical constructions

    DEFF Research Database (Denmark)

    Lucas, Sandra

    2015-01-01

    My aim in this paper is to provide evidence from diachronic linguistics for the view that some predictable units are entrenched in grammar and consequently in human cognition, in a way that makes them functionally and structurally equal to nonpredictable grammatical units, suggesting that these predictable units should be considered grammatical constructions on a par with the nonpredictable constructions. Frequency has usually been seen as the only possible argument speaking in favor of viewing some formally and semantically fully predictable units as grammatical constructions. However, this paper … semantically and formally predictable. Despite this difference, [méllo INF], like the other future periphrases, seems to be highly entrenched in the cognition (and grammar) of Early Medieval Greek language users, and consequently a grammatical construction. The syntactic evidence speaking in favor of [méllo …

  9. Essays on Earnings Predictability

    DEFF Research Database (Denmark)

    Bruun, Mark

    This dissertation addresses the prediction of corporate earnings. The thesis aims to examine whether the degree of precision in earnings forecasts can be increased by basing them on historical financial ratios. Furthermore, the intent of the dissertation is to analyze whether accounting standards … forecasts are not more accurate than the simpler forecasts based on a historical time series of earnings. Secondly, the dissertation shows how accounting standards affect analysts’ earnings predictions. Accounting conservatism contributes to a more volatile earnings process, which lowers the accuracy of analysts’ earnings forecasts. Furthermore, the dissertation shows how the stock market’s reaction to the disclosure of information about corporate earnings depends on how well corporate earnings can be predicted. The dissertation indicates that the stock market’s reaction to the disclosure of earnings …

  10. Pulverized coal devolatilization prediction

    International Nuclear Information System (INIS)

    Rojas, Andres F; Barraza, Juan M

    2008-01-01

    The aim of this study was to predict the devolatilization of two bituminous coals at a low heating rate (50 °C/min) with the FG-DVC program (Functional Group Depolymerization, Vaporization and Crosslinking), and to compare the devolatilization profiles predicted by the FG-DVC program with those obtained in the thermogravimetric analyzer. Volatile liberation at a high heating rate (10^4 K/s) in a drop-tube furnace was also studied. Formation-rate profiles for tar, methane, carbon monoxide, and carbon dioxide, and the elemental distribution of hydrogen, oxygen, nitrogen, and sulphur in the devolatilization products, were obtained with the FG-DVC program at the low heating rate; the volatile liberation and R factor at the high heating rate were calculated. It was found that the program predicts the devolatilization of bituminous coals at low heating rate; at high heating rate, a volatile liberation of around 30% was obtained.

  11. Predicting Ideological Prejudice.

    Science.gov (United States)

    Brandt, Mark J

    2017-06-01

    A major shortcoming of current models of ideological prejudice is that although they can anticipate the direction of the association between participants' ideology and their prejudice against a range of target groups, they cannot predict the size of this association. I developed and tested models that can make specific size predictions for this association. A quantitative model that used the perceived ideology of the target group as the primary predictor of the ideology-prejudice relationship was developed with a representative sample of Americans ( N = 4,940) and tested against models using the perceived status of and choice to belong to the target group as predictors. In four studies (total N = 2,093), ideology-prejudice associations were estimated, and these observed estimates were compared with the models' predictions. The model that was based only on perceived ideology was the most parsimonious with the smallest errors.
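    The approach of predicting the size of the ideology-prejudice association from the perceived ideology of the target group can be sketched as a simple least-squares fit (the `fit_line` helper and the data points below are illustrative, not the paper's estimates):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x. Here x is the perceived
    ideology of a target group and y the observed ideology-prejudice
    association; the numbers below are made up for illustration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# perceived target-group ideology (-1 liberal .. +1 conservative)
xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
# observed ideology-prejudice associations for those groups
ys = [-0.4, -0.2, 0.0, 0.2, 0.4]
a, b = fit_line(xs, ys)
predicted = a + b * 0.25   # predicted association for a new target group
```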

  12. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  13. Tide Predictions, California, 2014, NOAA

    Data.gov (United States)

    U.S. Environmental Protection Agency — The predictions from the web-based NOAA Tide Predictions are based upon the latest information available as of the date of the user's request. Tide predictions...

  14. Predictive maintenance primer

    International Nuclear Information System (INIS)

    Flude, J.W.; Nicholas, J.R.

    1991-04-01

    This Predictive Maintenance Primer provides utility plant personnel with a single-source reference to predictive maintenance analysis methods and technologies used successfully by utilities and other industries. It is intended to be a ready reference to personnel considering starting, expanding or improving a predictive maintenance program. This Primer includes a discussion of various analysis methods and how they overlap and interrelate. Additionally, eighteen predictive maintenance technologies are discussed in sufficient detail for the user to evaluate the potential of each technology for specific applications. This document is designed to allow inclusion of additional technologies in the future. To gather the information necessary to create this initial Primer the Nuclear Maintenance Applications Center (NMAC) collected experience data from eighteen utilities plus other industry and government sources. NMAC also contacted equipment manufacturers for information pertaining to equipment utilization, maintenance, and technical specifications. The Primer includes a discussion of six methods used by analysts to study predictive maintenance data. These are: trend analysis; pattern recognition; correlation; test against limits or ranges; relative comparison data; and statistical process analysis. Following the analysis methods discussions are detailed descriptions for eighteen technologies analysts have found useful for predictive maintenance programs at power plants and other industrial facilities. Each technology subchapter has a description of the operating principles involved in the technology, a listing of plant equipment where the technology can be applied, and a general description of the monitoring equipment. Additionally, these descriptions include a discussion of results obtained from actual equipment users and preferred analysis techniques to be used on data obtained from the technology. 5 refs., 30 figs
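    Two of the six analysis methods listed above, test against limits and trend analysis, are simple enough to sketch directly (the vibration readings and thresholds below are illustrative):

```python
def exceeds_limits(readings, low, high):
    """'Test against limits or ranges': flag readings outside the band."""
    return [r for r in readings if r < low or r > high]

def linear_trend(readings):
    """Simple trend analysis: least-squares slope per sample."""
    n = len(readings)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(readings) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, readings))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

vib = [2.0, 2.1, 2.3, 2.6, 3.0]          # vibration amplitude, mm/s
flagged = exceeds_limits(vib, 0.5, 2.8)  # readings above the alarm limit
slope = linear_trend(vib)                # positive slope: degrading condition
```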

  15. Predicting tile drainage discharge

    DEFF Research Database (Denmark)

    Iversen, Bo Vangsø; Kjærgaard, Charlotte; Petersen, Rasmus Jes

    … used in the analysis. For the dynamic modelling, a simple linear reservoir model was used where different outlets in the model represented tile drain as well as groundwater discharge outputs. This modelling was based on daily measured tile drain discharge values. The statistical predictive model was based on a polynomial regression predicting yearly tile drain discharge values using site-specific parameters such as soil type, catchment topography, etc. as predictors. Values of calibrated model parameters from the dynamic modelling were compared to the same site-specific parameters as used …

  16. Linguistic Structure Prediction

    CERN Document Server

    Smith, Noah A

    2011-01-01

    A major part of natural language processing now depends on the use of text data to build linguistic analyzers. We consider statistical, computational approaches to modeling linguistic structure. We seek to unify across many approaches and many kinds of linguistic structures. Assuming a basic understanding of natural language processing and/or machine learning, we seek to bridge the gap between the two fields. Approaches to decoding (i.e., carrying out linguistic structure prediction) and supervised and unsupervised learning of models that predict discrete structures as outputs are the focus. W

  17. Predicting Anthracycline Benefit

    DEFF Research Database (Denmark)

    Bartlett, John M S; McConkey, Christopher C; Munro, Alison F

    2015-01-01

    PURPOSE: Evidence supporting the clinical utility of predictive biomarkers of anthracycline activity is weak, with a recent meta-analysis failing to provide strong evidence for either HER2 or TOP2A. Having previously shown that duplication of chromosome 17 pericentromeric alpha satellite as measured …

  18. Prediction of Antibody Epitopes

    DEFF Research Database (Denmark)

    Nielsen, Morten; Marcatili, Paolo

    2015-01-01

    Antibodies recognize their cognate antigens in a precise and effective way. In order to do so, they target regions of the antigenic molecules that have specific features such as large exposed areas, presence of charged or polar atoms, specific secondary structure elements, and lack of similarity to self-proteins. Given the sequence or the structure of a protein of interest, several methods exploit such features to predict the residues that are more likely to be recognized by an immunoglobulin. Here, we present two methods (BepiPred and DiscoTope) to predict linear and discontinuous antibody …

  19. Basis of predictive mycology.

    Science.gov (United States)

    Dantigny, Philippe; Guilmart, Audrey; Bensoussan, Maurice

    2005-04-15

    For over 20 years, predictive microbiology focused on food-pathogenic bacteria. Few studies concerned modelling fungal development. On the one hand, most food mycologists are not familiar with modelling techniques; on the other hand, people involved in modelling are developing tools dedicated to bacteria. Therefore, there is a tendency to extend the use of models that were developed for bacteria to moulds. However, some mould specificities should be taken into account. The use of specific models for predicting germination and growth of fungi was advocated previously []. This paper provides a short review of fungal modelling studies.
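    One mould-specific model of the kind advocated here is a logistic curve for spore germination, P(t) = Pmax / (1 + exp(k*(tau - t))); a minimal sketch with illustrative parameter values:

```python
import math

def germination_pct(t_h, p_max=100.0, tau=48.0, k=0.2):
    """Logistic model of the percentage of germinated spores vs. time (hours).
    tau is the time to reach half of p_max and k controls the slope; the
    default values here are illustrative, not fitted to any dataset."""
    return p_max / (1.0 + math.exp(k * (tau - t_h)))

half = germination_pct(48.0)   # at t = tau, exactly half of p_max
```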

  20. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards - an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  1. Steering smog prediction

    NARCIS (Netherlands)

    R. van Liere (Robert); J.J. van Wijk (Jack)

    1997-01-01

    The use of computational steering for smog prediction is described. This application is representative for many underlying issues found in steering high performance applications: high computing times, large data sets, and many different input parameters. After a short description of the …

  2. Predicting Sustainable Work Behavior

    DEFF Research Database (Denmark)

    Hald, Kim Sundtoft

    2013-01-01

    Sustainable work behavior is an important issue for operations managers – it has implications for most outcomes of OM. This research explores the antecedents of sustainable work behavior. It revisits and extends the sociotechnical model developed by Brown et al. (2000) on predicting safe behavior...

  3. Gate valve performance prediction

    International Nuclear Information System (INIS)

    Harrison, D.H.; Damerell, P.S.; Wang, J.K.; Kalsi, M.S.; Wolfe, K.J.

    1994-01-01

    The Electric Power Research Institute is carrying out a program to improve the performance prediction methods for motor-operated valves. As part of this program, an analytical method to predict the stem thrust required to stroke a gate valve has been developed and has been assessed against data from gate valve tests. The method accounts for the loads applied to the disc by fluid flow and for the detailed mechanical interaction of the stem, disc, guides, and seats. To support development of the method, two separate-effects test programs were carried out. One test program determined friction coefficients for contacts between gate valve parts by using material specimens in controlled environments. The other test program investigated the interaction of the stem, disc, guides, and seat using a special fixture with full-sized gate valve parts. The method has been assessed against flow-loop and in-plant test data. These tests include valve sizes from 3 to 18 in. and cover a considerable range of flow, temperature, and differential pressure. Stem thrust predictions for the method bound measured results. In some cases, the bounding predictions are substantially higher than the stem loads required for valve operation, as a result of the bounding nature of the friction coefficients in the method

  4. Prediction method abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-31

    This conference was held December 4--8, 1994 in Asilomar, California. The purpose of this meeting was to provide a forum for exchange of state-of-the-art information concerning the prediction of protein structure. Attention is focused on the following: comparative modeling; sequence to fold assignment; and ab initio folding.

  5. Predicting Intrinsic Motivation

    Science.gov (United States)

    Martens, Rob; Kirschner, Paul A.

    2004-01-01

    Intrinsic motivation can be predicted from participants' perceptions of the social environment and the task environment (Ryan & Deci, 2000) in terms of control, relatedness and competence. To determine the degree of independence of these factors 251 students in higher vocational education (physiotherapy and hotel management) indicated the…

  6. Predicting visibility of aircraft.

    Directory of Open Access Journals (Sweden)

    Andrew Watson

    Visual detection of aircraft by human observers is an important element of aviation safety. To assess and ensure safety, it would be useful to be able to predict the visibility, to a human observer, of an aircraft of specified size, shape, distance, and coloration. Examples include assuring safe separation among aircraft and between aircraft and unmanned vehicles, design of airport control towers, and efforts to enhance or suppress the visibility of military and rescue vehicles. We have recently developed a simple metric of pattern visibility, the Spatial Standard Observer (SSO). In this report we examine whether the SSO can predict visibility of simulated aircraft images. We constructed a set of aircraft images from three-dimensional computer graphic models, and measured the luminance contrast threshold for each image from three human observers. The data were well predicted by the SSO. Finally, we show how to use the SSO to predict visibility range for aircraft of arbitrary size, shape, distance, and coloration.

  7. Climate Prediction Center

    Science.gov (United States)


  8. Predicting Commissary Store Success

    Science.gov (United States)

    2014-12-01

    … stores or if it is possible to predict that success. Multiple studies of private commercial grocery consumer preferences, habits and demographics have … appropriate number of competitors due to the nature of international cultures and consumer preferences. … Four of the remaining stores …

  9. Predicting Job Satisfaction.

    Science.gov (United States)

    Blai, Boris, Jr.

    Psychological theories about human motivation and accommodation to environment can be used to achieve a better understanding of the human factors that function in the work environment. Maslow's theory of human motivational behavior provided a theoretical framework for an empirically-derived method to predict job satisfaction and explore the…

  10. Ocean Prediction Center

    Science.gov (United States)


  11. Predicting Reasoning from Memory

    Science.gov (United States)

    Heit, Evan; Hayes, Brett K.

    2011-01-01

    In an effort to assess the relations between reasoning and memory, in 8 experiments, the authors examined how well responses on an inductive reasoning task are predicted from responses on a recognition memory task for the same picture stimuli. Across several experimental manipulations, such as varying study time, presentation frequency, and the…

  12. Predicting coronary heart disease

    DEFF Research Database (Denmark)

    Sillesen, Henrik; Fuster, Valentin

    2012-01-01

    Atherosclerosis is the leading cause of death and disabling disease. Whereas risk factors are well known and constitute therapeutic targets, they are not useful for prediction of risk of future myocardial infarction, stroke, or death. Therefore, methods to identify atherosclerosis itself have been …

  13. ANTHROPOMETRIC PREDICTIVE EQUATIONS FOR ...

    African Journals Online (AJOL)

    Keywords: Anthropometry, Predictive Equations, Percentage Body Fat, Nigerian Women, Bioelectric Impedance … such as Asians and Indians (Pranav et al., 2009) … a sample size (n) of at least 30 is adjudged as sufficient for the … of people, gender and age (Vogel et al., 1984). …

  14. Predicting Pilot Retention

    Science.gov (United States)

    2012-06-15

    … over the last 20 years. Airbus predicted that these trends would continue as emerging economies, especially in Asia, were creating a fast growing … US economy, pay differential and hiring by the major airlines contributed most to the decision to separate from the Air Force (Fullerton, 2003: 354) …

  15. Predicting ideological prejudice

    NARCIS (Netherlands)

    Brandt, M.J.

    2018-01-01

    A major shortcoming of current models of ideological prejudice is that although they can anticipate the direction of the association between participants’ ideology and their prejudice against a range of target groups, they cannot predict the size of this association. I developed and tested models

  16. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
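    The prediction-error account above maps onto a simple delta-rule update, where the error is the difference between received and predicted reward (an illustrative sketch, not a biophysical model of dopamine firing):

```python
def update_value(value, reward, alpha=0.1):
    """One delta-rule learning step: error = received - predicted reward."""
    error = reward - value
    return value + alpha * error, error

value = 0.0
for _ in range(100):                    # repeated, always-rewarded trials
    value, error = update_value(value, 1.0)
# late in training the reward is fully predicted, so the error is near zero;
# omitting the reward now produces a negative prediction error
_, omission_error = update_value(value, 0.0)
```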

  17. Urban pluvial flood prediction

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer

    2016-01-01

    Flooding produced by high-intensive local rainfall and drainage system capacity exceedance can have severe impacts in cities. In order to prepare cities for these types of flood events – especially in the future climate – it is valuable to be able to simulate these events numerically, both historically and in real-time. There is a rather untested potential in real-time prediction of urban floods. In this paper radar data observations with different spatial and temporal resolution, radar nowcasts of 0–2 h lead time, and numerical weather models with lead times up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative difference between different inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show it is possible to generate detailed flood maps...

  18. Predicting Bankruptcy in Pakistan

    Directory of Open Access Journals (Sweden)

    Abdul RASHID

    2011-09-01

    This paper aims to identify the financial ratios that are most significant in bankruptcy prediction for the non-financial sector of Pakistan, based on a sample of companies which became bankrupt over the period 1996-2006. Twenty-four financial ratios covering four important financial attributes, namely profitability, liquidity, leverage, and turnover ratios, were examined for a five-year period prior to bankruptcy. The discriminant analysis produced a parsimonious model of three variables, viz. sales to total assets, EBIT to current liabilities, and cash flow ratio. Our estimates provide evidence that firms having a Z-value below zero fall into the “bankrupt” category, whereas firms with a Z-value above zero fall into the “non-bankrupt” category. The model achieved 76.9% prediction accuracy when applied to forecast bankruptcies in the underlying sample.
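    The classification rule described above (Z below zero means "bankrupt") can be sketched as follows; the discriminant coefficients are made up for illustration, since the abstract does not report them:

```python
def z_value(coefs, intercept, ratios):
    """Linear discriminant score from financial ratios. The three predictors
    match the paper's model; the weights here are hypothetical."""
    return intercept + sum(c * r for c, r in zip(coefs, ratios))

def classify(z):
    """Firms with Z below zero are classed 'bankrupt', above zero 'non-bankrupt'."""
    return "bankrupt" if z < 0 else "non-bankrupt"

# ratios: sales/total assets, EBIT/current liabilities, cash flow ratio
z = z_value([1.2, 0.8, 1.5], -1.0, [0.4, 0.3, 0.1])
label = classify(z)
```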

  19. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Jørgensen, Claus Bjørn; Suetens, Sigrid; Tyran, Jean-Robert

    We investigate the “law of small numbers” using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings. We can therefore test whether they behave as if they believe they can predict lotto numbers based on recent drawings. While most players pick the same set of numbers week after week without regard to numbers drawn or anything else, we find that those who do change act on average in the way predicted by the law of small numbers as formalized in recent behavioral theory. In particular, on average they move away from numbers that have recently been drawn, as suggested by the “gambler’s fallacy”, and move toward numbers that are on streak, i.e. have been drawn several weeks in a row, consistent with the “hot hand fallacy”.

  20. Comparing Spatial Predictions

    KAUST Repository

    Hering, Amanda S.

    2011-11-01

    Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial predictions produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis is that of no difference, and a spatial loss differential is created based on the observed data, the two sets of predictions, and the loss function chosen by the researcher. The test assumes only isotropy and short-range spatial dependence of the loss differential but does allow it to be non-Gaussian, non-zero-mean, and spatially correlated. Constant and nonconstant spatial trends in the loss differential are treated in two separate cases. Monte Carlo simulations illustrate the size and power properties of this test, and an example based on daily average wind speeds in Oklahoma is used for illustration. Supplemental results are available online. © 2011 American Statistical Association and the American Society for Quality.
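    The core of the test is the mean loss differential between the two sets of predictions. A naive sketch under squared-error loss, which (unlike the paper's test) assumes independent differentials rather than correcting for spatial correlation:

```python
import math

def loss_differential_test(obs, pred_a, pred_b):
    """Mean squared-error loss differential and a naive t-type statistic.
    A negative mean favours model A; the paper's test additionally accounts
    for spatial correlation and trends in the differential."""
    d = [(o - a) ** 2 - (o - b) ** 2 for o, a, b in zip(obs, pred_a, pred_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean, mean / math.sqrt(var / n)

obs = [1.0, 2.0, 3.0, 4.0]
pred_a = [1.1, 2.1, 2.9, 4.2]   # model A: small errors
pred_b = [2.0, 3.0, 2.0, 5.0]   # model B: larger errors
mean_d, t_stat = loss_differential_test(obs, pred_a, pred_b)
```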

  1. Chaos detection and predictability

    CERN Document Server

    Gottwald, Georg; Laskar, Jacques

    2016-01-01

    Distinguishing chaoticity from regularity in deterministic dynamical systems and specifying the subspace of the phase space in which instabilities are expected to occur is of utmost importance in as disparate areas as astronomy, particle physics and climate dynamics.   To address these issues there exists a plethora of methods for chaos detection and predictability. The most commonly employed technique for investigating chaotic dynamics, i.e. the computation of Lyapunov exponents, however, may suffer a number of problems and drawbacks, for example when applied to noisy experimental data.   In the last two decades, several novel methods have been developed for the fast and reliable determination of the regular or chaotic nature of orbits, aimed at overcoming the shortcomings of more traditional techniques. This set of lecture notes and tutorial reviews serves as an introduction to and overview of modern chaos detection and predictability techniques for graduate students and non-specialists.   The book cover...

  2. Time-predictable architectures

    CERN Document Server

    Rochange, Christine; Uhrig, Sascha

    2014-01-01

    Building computers that can be used to design embedded real-time systems is the subject of this title. Real-time embedded software requires increasingly higher performance. The authors therefore consider processors that implement advanced mechanisms such as pipelining, out-of-order execution, branch prediction, cache memories, multi-threading, multicore architectures, etc. The authors of this book investigate the time predictability of such schemes.

  3. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    … cultural resource management (CRM) legal obligations under NEPA and the NHPA: military installations need to demonstrate that CRM decisions are based on objective … the maxim “one size does not fit all,” and demonstrate that DoD installations have many different CRM needs that can and should be met through a variety …

  4. Predictive Game Theory

    Science.gov (United States)

    Wolpert, David H.

    2005-01-01

    Probability theory governs the outcome of a game; there is a distribution over mixed strategies, not a single "equilibrium". To predict a single mixed strategy one must use a loss function (external to the game's players), which provides a quantification of any strategy's rationality. We prove that rationality falls as the cost of computation rises (for players who have not previously interacted). All of this extends to games with varying numbers of players.

  5. Predicting appointment breaking.

    Science.gov (United States)

    Bean, A G; Talaga, J

    1995-01-01

    The goal of physician referral services is to schedule appointments, but if too many patients fail to show up, the value of the service will be compromised. The authors found that appointment breaking can be predicted by the number of days to the scheduled appointment, the doctor's specialty, and the patient's age and gender. They also offer specific suggestions for modifying the marketing mix to reduce the incidence of no-shows.

  6. Adjusting estimative prediction limits

    OpenAIRE

    Masao Ueki; Kaoru Fueda

    2007-01-01

    This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.
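    The coverage error that the adjustment targets is easy to see by simulation: the plug-in (estimative) limit xbar + z*s undercovers for small samples (a sketch with nominal level 95%, z = 1.645, and n = 5; the setup is illustrative, not the note's derivation):

```python
import random
import statistics

def estimative_upper_limit(sample, z=1.645):
    """Plug-in (estimative) 95% upper prediction limit: xbar + z * s."""
    return statistics.mean(sample) + z * statistics.stdev(sample)

# Monte Carlo coverage for small samples: the realised coverage falls
# short of the nominal 95%, which is the error the adjustment reduces.
random.seed(0)
n, reps, hits = 5, 20000, 0
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(n)]
    if random.gauss(0, 1) <= estimative_upper_limit(sample):
        hits += 1
coverage = hits / reps
```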

  7. Space Weather Prediction

    Science.gov (United States)

    2014-10-31

    … prominence eruptions and the ensuing coronal mass ejections. The ProMag is a spectro-polarimeter, consisting of a dual-beam polarization modulation unit … feeding a visible camera and an infrared camera. The instrument is designed to measure magnetic fields in solar prominences by simultaneous spectro … as a result of coronal hole regions, we expect to improve UV predictions by incorporating an estimate of the Earth-side coronal hole regions.

  8. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
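    When the component errors are independent, one standard way to combine them into a single instrument uncertainty is the root-sum-square rule (a sketch of one such rule; the report surveys several methodologies, and the component values are illustrative):

```python
import math

def combined_uncertainty(components):
    """Root-sum-square combination of independent uncertainty components."""
    return math.sqrt(sum(u ** 2 for u in components))

# e.g. sensor uncertainty (0.3 units) and data-acquisition uncertainty (0.4)
total = combined_uncertainty([0.3, 0.4])
```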

  9. Predictive Systems Toxicology

    KAUST Repository

    Kiani, Narsis A.; Shang, Ming-Mei; Zenil, Hector; Tegner, Jesper

    2018-01-01

    In this review we address to what extent computational techniques can augment our ability to predict toxicity. The first section provides a brief history of empirical observations on toxicity dating back to the dawn of Sumerian civilization. Interestingly, the concept of dose emerged very early on, leading up to the modern emphasis on kinetic properties, which in turn encodes the insight that toxicity is not solely a property of a compound but instead depends on the interaction with the host organism. The next logical step is the current conception of evaluating drugs from a personalized medicine point-of-view. We review recent work on integrating what could be referred to as classical pharmacokinetic analysis with emerging systems biology approaches incorporating multiple omics data. These systems approaches employ advanced statistical analytical data processing complemented with machine learning techniques and use both pharmacokinetic and omics data. We find that such integrated approaches not only provide improved predictions of toxicity but also enable mechanistic interpretations of the molecular mechanisms underpinning toxicity and drug resistance. We conclude the chapter by discussing some of the main challenges, such as how to balance the inherent tension between the predictive capacity of models, which in practice amounts to constraining the number of features in the models versus allowing for rich mechanistic interpretability, i.e. equipping models with numerous molecular features. This challenge also requires patient-specific predictions on toxicity, which in turn requires proper stratification of patients as regards how they respond, with or without adverse toxic effects. In summary, the transformation of the ancient concept of dose is currently successfully operationalized using rich integrative data encoded in patient-specific models.

  10. Predictive systems ecology

    OpenAIRE

    Evans, Matthew R.; Bithell, Mike; Cornell, Stephen J.; Dall, Sasha R. X.; D?az, Sandra; Emmott, Stephen; Ernande, Bruno; Grimm, Volker; Hodgson, David J.; Lewis, Simon L.; Mace, Georgina M.; Morecroft, Michael; Moustakas, Aristides; Murphy, Eugene; Newbold, Tim

    2013-01-01

    Human societies, and their well-being, depend to a significant extent on the state of the ecosystems that surround them. These ecosystems are changing rapidly usually in response to anthropogenic changes in the environment. To determine the likely impact of environmental change on ecosystems and the best ways to manage them, it would be desirable to be able to predict their future states. We present a proposal to develop the paradigm of ...

  11. UXO Burial Prediction Fidelity

    Science.gov (United States)

    2017-07-01

    models to capture detailed projectile dynamics during the early phases of water entry are wasted with regard to sediment-penetration depth prediction...ordnance (UXO) migrates and becomes exposed over time in response to water and sediment motion. Such models need initial sediment penetration estimates...munition's initial penetration depth into the sediment, the velocity of water at the water-sediment boundary (i.e., the bottom water velocity

  12. Predictive Systems Toxicology

    KAUST Repository

    Kiani, Narsis A.

    2018-01-15

    In this review we address to what extent computational techniques can augment our ability to predict toxicity. The first section provides a brief history of empirical observations on toxicity dating back to the dawn of Sumerian civilization. Interestingly, the concept of dose emerged very early on, leading up to the modern emphasis on kinetic properties, which in turn encodes the insight that toxicity is not solely a property of a compound but instead depends on the interaction with the host organism. The next logical step is the current conception of evaluating drugs from a personalized medicine point-of-view. We review recent work on integrating what could be referred to as classical pharmacokinetic analysis with emerging systems biology approaches incorporating multiple omics data. These systems approaches employ advanced statistical analytical data processing complemented with machine learning techniques and use both pharmacokinetic and omics data. We find that such integrated approaches not only provide improved predictions of toxicity but also enable mechanistic interpretations of the molecular mechanisms underpinning toxicity and drug resistance. We conclude the chapter by discussing some of the main challenges, such as how to balance the inherent tension between the predictive capacity of models, which in practice amounts to constraining the number of features in the models versus allowing for rich mechanistic interpretability, i.e. equipping models with numerous molecular features. This challenge also requires patient-specific predictions on toxicity, which in turn requires proper stratification of patients as regards how they respond, with or without adverse toxic effects. In summary, the transformation of the ancient concept of dose is currently successfully operationalized using rich integrative data encoded in patient-specific models.

  13. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  14. Predicting Human Cooperation.

    Directory of Open Access Journals (Sweden)

    John J Nay

    Full Text Available The Prisoner's Dilemma has been a subject of extensive research due to its importance in understanding the ever-present tension between individual self-interest and social benefit. A strictly dominant strategy in a Prisoner's Dilemma (defection, when played by both players, is mutually harmful. Repetition of the Prisoner's Dilemma can give rise to cooperation as an equilibrium, but defection is as well, and this ambiguity is difficult to resolve. The numerous behavioral experiments investigating the Prisoner's Dilemma highlight that players often cooperate, but the level of cooperation varies significantly with the specifics of the experimental predicament. We present the first computational model of human behavior in repeated Prisoner's Dilemma games that unifies the diversity of experimental observations in a systematic and quantitatively reliable manner. Our model relies on data we integrated from many experiments, comprising 168,386 individual decisions. The model is composed of two pieces: the first predicts the first-period action using solely the structural game parameters, while the second predicts dynamic actions using both game parameters and history of play. Our model is successful not merely at fitting the data, but in predicting behavior at multiple scales in experimental designs not used for calibration, using only information about the game structure. We demonstrate the power of our approach through a simulation analysis revealing how to best promote human cooperation.
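The authors' two-piece predictive model is not given in the abstract; the strategic setting it addresses can be sketched with a minimal repeated Prisoner's Dilemma simulation. The payoff values and strategies below are textbook illustrations, not the paper's fitted model.

```python
# Canonical Prisoner's Dilemma payoffs: T > R > P > S and 2R > T + S
R, S_, T, P = 3, 0, 5, 1  # reward, sucker, temptation, punishment

def play(strat_a, strat_b, rounds=10):
    """Simulate a repeated Prisoner's Dilemma. A strategy maps the
    opponent's previous move and the round index to 'C' or 'D'."""
    payoff = {('C', 'C'): (R, R), ('C', 'D'): (S_, T),
              ('D', 'C'): (T, S_), ('D', 'D'): (P, P)}
    a_prev = b_prev = 'C'
    total_a = total_b = 0
    for i in range(rounds):
        a, b = strat_a(b_prev, i), strat_b(a_prev, i)
        pa, pb = payoff[(a, b)]
        total_a += pa
        total_b += pb
        a_prev, b_prev = a, b
    return total_a, total_b

tit_for_tat = lambda opp_prev, i: 'C' if i == 0 else opp_prev
always_defect = lambda opp_prev, i: 'D'
```

Over ten rounds, tit-for-tat against always-defect scores (9, 14), while two tit-for-tat players score (30, 30): repetition can sustain the cooperative outcome, but defection remains an equilibrium, which is the ambiguity the abstract describes.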

  15. Predicting big bang deuterium

    Energy Technology Data Exchange (ETDEWEB)

    Hata, N.; Scherrer, R.J.; Steigman, G.; Thomas, D.; Walker, T.P. [Department of Physics, Ohio State University, Columbus, Ohio 43210 (United States)

    1996-02-01

    We present new upper and lower bounds to the primordial abundances of deuterium and ^3He based on observational data from the solar system and the interstellar medium. Independent of any model for the primordial production of the elements we find (at the 95% C.L.): 1.5x10^-5 <= (D/H)_P <= 10.0x10^-5 and (^3He/H)_P <= 2.6x10^-5. When combined with the predictions of standard big bang nucleosynthesis, these constraints lead to a 95% C.L. bound on the primordial abundance of deuterium: (D/H)_best = (3.5 +2.7/-1.8)x10^-5. Measurements of deuterium absorption in the spectra of high-redshift QSOs will directly test this prediction. The implications of this prediction for the primordial abundances of ^4He and ^7Li are discussed, as well as those for the universal density of baryons. (copyright 1996 The American Astronomical Society)

  16. Water exchange of Oeregrundsgrepen. A baroclinic 3D-model study

    International Nuclear Information System (INIS)

    Engqvist, A.; Andrejev, O.

    1999-04-01

    Hypothetically, transport of radionuclides from the SFR repository for low and medium active wastes could be mediated by natural water circulation within receiving coastal basins. In this context a baroclinic, free-surface 3D model has been used to compute the water exchange of the Oeregrundsgrepen bay-like area for a representative full year cycle. This has been achieved in two steps in order to provide coherent densimetric and sea level elevation boundary data relative to the adjacent Baltic coastal water. Weather data from 1992 were chosen. The focus is placed entirely on water exchange aspects with no consideration of what the water parcels may contain. Earlier model and measurement programs have also been reviewed. The first phase consisted of running a 3D model encompassing the entire Baltic Sea. This model resolves the Baltic horizontally in five by five nautical miles (5x5). This model was driven by gridded (approx. 20x20) synoptic weather data with geostrophic wind and the varying density and sea level elevation on the Kattegat border. Freshwater discharge from the major rivers along the Baltic coastline was also taken into account. Initial data prior to December 1991 have been assessed from the available, but relatively scarce, salinity and temperature profile measurements in the Baltic. The time step was 2 hours. The relevant boundary data in the vicinity of the Oeregrundsgrepen area were saved after one full-year cycle of simulation. The second phase consisted of running a local model over the Oeregrundsgrepen with a higher horizontal grid resolution consisting of a 0.1x0.1 grid. This model was driven by the same weather data, combined with the saved densimetric and sea level elevation boundary data that were produced by the Baltic model with the coarser grid. This procedure applies both for the wide northern and the narrow southern interface. The transference of boundary data necessitated development of an appropriate interpolation scheme.
This model has also been run for a full-year cycle allowing one month (December 1991) of spin-up time. The time step has been varied between 3 and 6 minutes. The retention time of the Oeregrundsgrepen was found to vary between 12.1 days (surface) and 25.8 days (bottom) as a yearly average. Special regard has been placed on estimating the ventilation of a particular subarea where a biological model study is currently being performed. This subarea is located in the waters above the SFR-depository embedded within the Oeregrundsgrepen model area. The exchange intensity, expressed as a yearly average transit retention time, spanned from 0.5 days (surface) to 1.2 days (bottom) with regard to the depth strata that the model resolves. The bulk volume average for all strata was 0.77 days with a standard deviation of 0.22 days equally for both intra-monthly and inter-monthly variations. The corresponding average total volume flow across the boundary was 2.1x10^3 m^3/s.
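The quoted retention times are consistent with a simple box-model balance T = V/Q. The sketch below is a back-of-envelope consistency check on the quoted figures, not the 3D circulation model itself; the subarea volume is inferred, not taken from the report.

```python
def retention_time_days(volume_m3, flow_m3_s):
    """Bulk retention time T = V / Q, converted from seconds to days."""
    return volume_m3 / flow_m3_s / 86_400

# Consistency check on the figures quoted above: a yearly average
# boundary flow of 2.1e3 m^3/s and a 0.77-day bulk transit retention
# time imply an exchanged subarea volume of roughly 1.4e8 m^3.
implied_volume = 0.77 * 86_400 * 2.1e3
t = retention_time_days(implied_volume, 2.1e3)  # recovers 0.77 days
```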

  17. Vertical propagation of baroclinic Kelvin waves along the west coast of India

    Digital Repository Service at National Institute of Oceanography (India)

    Nethery, D; Shankar, D

    Keywords: monsoon current; equatorial oceanography; remote forcing; modelling; monsoons; oceanography. J. Earth Syst. Sci. 116, No. 4, August 2007, pp. 331-339. [Figure 1: the stability (buoyancy) frequency squared.] ...rho represents the (dimensionless) density anomaly. The vertical modes psi_n are the eigenfunctions of the equation ((psi_n)_z / N_b^2)_z = -(1/c_n^2) psi_n, (3) subject to the boundary conditions (psi_n)_z = 0 at z = -D and z = 0, where...

  18. Water exchange of Oeregrundsgrepen. A baroclinic 3D-model study

    Energy Technology Data Exchange (ETDEWEB)

    Engqvist, A. [A and l Engqvist Konsult HB, Vaxholm (Sweden); Andrejev, O. [Finnish Inst. of Marine Research, Helsinki (Finland)

    1999-04-01

    Hypothetically, transport of radionuclides from the SFR repository for low and medium active wastes could be mediated by natural water circulation within receiving coastal basins. In this context a baroclinic, free-surface 3D model has been used to compute the water exchange of the Oeregrundsgrepen bay-like area for a representative full year cycle. This has been achieved in two steps in order to provide coherent densimetric and sea level elevation boundary data relative to the adjacent Baltic coastal water. Weather data from 1992 were chosen. The focus is placed entirely on water exchange aspects with no consideration of what the water parcels may contain. Earlier model and measurement programs have also been reviewed. The first phase consisted of running a 3D model encompassing the entire Baltic Sea. This model resolves the Baltic horizontally in five by five nautical miles (5x5). This model was driven by gridded (approx. 20x20) synoptic weather data with geostrophic wind and the varying density and sea level elevation on the Kattegat border. Freshwater discharge from the major rivers along the Baltic coastline was also taken into account. Initial data prior to December 1991 have been assessed from the available, but relatively scarce, salinity and temperature profile measurements in the Baltic. The time step was 2 hours. The relevant boundary data in the vicinity of the Oeregrundsgrepen area were saved after one full-year cycle of simulation. The second phase consisted of running a local model over the Oeregrundsgrepen with a higher horizontal grid resolution consisting of a 0.1x0.1 grid. This model was driven by the same weather data, combined with the saved densimetric and sea level elevation boundary data that were produced by the Baltic model with the coarser grid. This procedure applies both for the wide northern and the narrow southern interface. The transference of boundary data necessitated development of an appropriate interpolation scheme.
This model has also been run for a full-year cycle allowing one month (December 1991) of spin-up time. The time step has been varied between 3 and 6 minutes. The retention time of the Oeregrundsgrepen was found to vary between 12.1 days (surface) and 25.8 days (bottom) as a yearly average. Special regard has been placed on estimating the ventilation of a particular subarea where a biological model study is currently being performed. This subarea is located in the waters above the SFR-depository embedded within the Oeregrundsgrepen model area. The exchange intensity, expressed as a yearly average transit retention time, spanned from 0.5 days (surface) to 1.2 days (bottom) with regard to the depth strata that the model resolves. The bulk volume average for all strata was 0.77 days with a standard deviation of 0.22 days equally for both intra-monthly and inter-monthly variations. The corresponding average total volume flow across the boundary was 2.1x10^3 m^3/s.

  19. On Long Baroclinic Rossby Waves in the Tropical North Atlantic Observed From Profiling Floats

    Science.gov (United States)

    2007-05-16

    15b and 15c). Reclosing of vortex isolines while forming a new corotating eddy pair typically indicates excitation of periodical auto-oscillations in...important dynamical effect as reclosing of vortex isolines between corotating eddies, which are components of the semiannual standing Rossby wave

  20. Future changes in extratropical storm tracks and baroclinicity under climate change

    NARCIS (Netherlands)

    Lehmann, Jascha; Coumou, Dim; Frieler, Katja; Eliseev, Alexey V.; Levermann, Anders

    2014-01-01

    The weather in Eurasia, Australia, and North and South America is largely controlled by the strength and position of extratropical storm tracks. Future climate change will likely affect these storm tracks and the associated transport of energy, momentum, and water vapour. Many recent studies have

  1. Disruption prediction at JET

    International Nuclear Information System (INIS)

    Milani, F.

    1998-12-01

    The sudden loss of the plasma magnetic confinement, known as a disruption, is one of the major issues in a nuclear fusion machine such as JET (Joint European Torus). Disruptions pose very serious problems to the safety of the machine. The energy stored in the plasma is released to the machine structure in a few milliseconds, resulting in forces that at JET reach several meganewtons. The problem is even more severe in a nuclear fusion power station, where the forces would be on the order of one hundred meganewtons. The events that occur during a disruption are still not well understood, even if some mechanisms that can lead to a disruption have been identified and can be used to predict them. Unfortunately it is always a combination of these events that generates a disruption, and therefore it is not possible to use simple algorithms to predict it. This thesis analyses the possibility of using neural network algorithms to predict plasma disruptions in real time. This involves the determination of plasma parameters every few milliseconds. A plasma boundary reconstruction algorithm, XLOC, has been developed in collaboration with Dr. D. O'Brien and Dr. J. Ellis, capable of determining the plasma-wall distance every 2 milliseconds. The XLOC output has been used to develop a multilayer perceptron network to determine plasma parameters such as l_i and q_ψ, with which a machine operational space has been experimentally defined. If the limits of this operational space are breached, the disruption probability increases considerably. Another approach for predicting disruptions is to use neural network classification methods to define the JET operational space. Two methods have been studied. The first method uses a multilayer perceptron network with a softmax activation function for the output layer. This method can be used for classifying the input patterns into various classes. In this case the plasma input patterns have been divided between disrupting and safe patterns, giving the possibility of
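The thesis's actual network architecture and trained weights are not reproduced in the abstract; the sketch below shows only the classifier type it describes, a one-hidden-layer perceptron with a softmax output assigning "safe" vs "disrupting" probabilities, using randomly initialized stand-in weights and a synthetic input vector.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mlp_classify(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron with softmax output: returns class
    probabilities ('safe', 'disrupting') for a plasma parameter vector x."""
    h = np.tanh(x @ W1 + b1)
    return softmax(h @ W2 + b2)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # 4 stand-in inputs
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # 2 classes
p = mlp_classify(rng.normal(size=4), W1, b1, W2, b2)
# p is a probability vector; p[1] plays the role of disruption probability
```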

  2. Genomic Prediction in Barley

    DEFF Research Database (Denmark)

    Edriss, Vahid; Cericola, Fabio; Jensen, Jens D

    2015-01-01

    to next generation. The main goal of this study was to assess the potential of using genomic prediction in a commercial barley breeding program. The data used in this study were from the Nordic Seed company, which is located in Denmark. Around 350 advanced lines were genotyped with the 9K Barley chip from Illumina....... Traits used in this study were grain yield, plant height and heading date. Heading date is the number of days after 1 June that it takes for the plant to head. Heritabilities were 0.33, 0.44 and 0.48 for yield, height and heading, respectively, for the average of nine plots. The GBLUP model was used for genomic
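The GBLUP model mentioned above can be sketched on simulated marker data. Line counts, marker counts, effect sizes, and the variance ratio below are synthetic stand-ins, not the Nordic Seed data set; the genomic relationship matrix follows the common VanRaden construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_markers = 100, 500
M = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 genotypes

# VanRaden-style genomic relationship matrix
p = M.mean(axis=0) / 2
Z = M - 2 * p
G = Z @ Z.T / (2 * (p * (1 - p)).sum())

true_u = Z @ rng.normal(scale=0.05, size=n_markers)  # simulated breeding values
y = true_u + rng.normal(scale=1.0, size=n_lines)     # simulated phenotypes

# GBLUP: u_hat = G (G + lambda*I)^(-1) (y - ybar), lambda = sigma_e^2/sigma_u^2
lam = 1.0
u_hat = G @ np.linalg.solve(G + lam * np.eye(n_lines), y - y.mean())
accuracy = np.corrcoef(u_hat, true_u)[0, 1]  # correlation with true values
```

In a real program the variance components would be estimated (e.g. by REML) rather than assumed, and accuracy would be assessed on lines held out from training.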

  3. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Suetens, Sigrid; Galbo-Jørgensen, Claus B.; Tyran, Jean-Robert Karl

    2016-01-01

    We investigate the ‘law of small numbers’ using a data set on lotto gambling that allows us to measure players’ reactions to draws. While most players pick the same set of numbers week after week, we find that those who do change react on average as predicted by the law of small numbers...... as formalized in recent behavioral theory. In particular, players tend to bet less on numbers that have been drawn in the preceding week, as suggested by the ‘gambler’s fallacy’, and bet more on a number if it was frequently drawn in the recent past, consistent with the ‘hot-hand fallacy’....

  4. Predictable return distributions

    DEFF Research Database (Denmark)

    Pedersen, Thomas Quistgaard

    trace out the entire distribution. A univariate quantile regression model is used to examine stock and bond return distributions individually, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that certain parts of the return distributions... Out-of-sample analyses show that the relative accuracy of the state variables in predicting future returns varies across the distribution. A portfolio study shows that an investor with power utility can obtain economic gains by applying the empirical return distribution in portfolio decisions instead of imposing...
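Quantile regression rests on the pinball (check) loss; with no covariates its minimizer is simply the empirical quantile, which a small sketch makes concrete. The data here are synthetic draws, not the paper's US return series.

```python
import numpy as np

def pinball_loss(tau, y, q):
    """Check (pinball) loss for quantile level tau at candidate quantile q."""
    d = y - q
    return np.mean(np.maximum(tau * d, (tau - 1) * d))

# With no covariates, minimizing the pinball loss recovers the
# empirical tau-quantile of the sample:
rng = np.random.default_rng(42)
y = rng.standard_normal(1000)
tau = 0.9
grid = np.linspace(y.min(), y.max(), 2001)
losses = [pinball_loss(tau, y, q) for q in grid]
best = grid[int(np.argmin(losses))]
# best lies very close to np.quantile(y, 0.9)
```

A quantile regression model replaces the constant q with a linear function of state variables and minimizes the same loss, giving one fitted line per quantile level and thereby tracing out the whole conditional distribution.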

  5. Predicting Ground Illuminance

    Science.gov (United States)

    Lesniak, Michael V.; Tregoning, Brett D.; Hitchens, Alexandra E.

    2015-01-01

    Our Sun outputs 3.85 x 10^26 W of radiation, of which roughly 37% is in the visible band. It is directly responsible for nearly all natural illuminance experienced on Earth's surface, either in the form of direct/refracted sunlight or in reflected light bouncing off the surfaces and/or atmospheres of our Moon and the visible planets. Ground illuminance, defined as the amount of visible light intercepting a unit area of surface (from all incident angles), varies over 7 orders of magnitude from day to night. It is highly dependent on well-modeled factors such as the relative positions of the Sun, Earth, and Moon. It is also dependent on less predictable factors such as local atmospheric conditions and weather. Several models have been proposed to predict ground illuminance, including Brown (1952) and Shapiro (1982, 1987). The Brown model is a set of empirical data collected from observation points around the world that has been reduced to a smooth fit of illuminance against a single variable, solar altitude. It provides limited applicability to the Moon and for cloudy conditions via multiplicative reduction factors. The Shapiro model is a theoretical model that treats the atmosphere as a three-layer system of light reflectance and transmittance. It has different sets of reflectance and transmittance coefficients for various cloud types. In this paper we compare the models' predictions to ground illuminance data from an observing run at the White Sands Missile Range (data were obtained from the United Kingdom's Meteorology Office). Continuous illuminance readings were recorded under various cloud conditions, during both daytime and nighttime hours. We find that under clear skies, the Shapiro model tends to better fit the observations during daytime hours, with typical discrepancies under 10%. Under cloudy skies, both models tend to poorly predict ground illuminance. However, the Shapiro model, with typical average daytime discrepancies of 25% or less in many cases

  6. Predicting sports betting outcomes

    OpenAIRE

    Flis, Borut

    2014-01-01

    We wish to build a model that could predict the outcome of basketball games. The goal was to achieve sufficient accuracy to make a profit in sports betting. One learning example is a game in the NBA regular season. Every example has multiple features, which describe the opposing teams. We tried many methods, which return the probability of the home team winning and the probability of the away team winning. These probabilities are used for risk analysis. We used the best model in h...

  7. Predicting chaotic time series

    International Nuclear Information System (INIS)

    Farmer, J.D.; Sidorowich, J.J.

    1987-01-01

    We present a forecasting technique for chaotic data. After embedding a time series in a state space using delay coordinates, we ''learn'' the induced nonlinear mapping using local approximation. This allows us to make short-term predictions of the future behavior of a time series, using information based only on past values. We present an error estimate for this technique, and demonstrate its effectiveness by applying it to several examples, including data from the Mackey-Glass delay differential equation, Rayleigh-Benard convection, and Taylor-Couette flow
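The delay-embedding-plus-local-approximation recipe can be sketched in a few lines. Here a logistic-map series stands in for the cited data sets (which require delay-differential or fluid solvers), and the local approximation is the simplest zeroth-order, single-nearest-neighbour form.

```python
import numpy as np

# Generate a chaotic series from the logistic map as a stand-in signal
x = np.empty(2000)
x[0] = 0.4
for n in range(1999):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

m, tau = 2, 1                  # embedding dimension and delay
train, test_idx = 1500, 1600   # learn from the past, predict a later point

# Delay-coordinate state vectors [x_t, x_{t-tau}] and their successors
emb = np.array([[x[t], x[t - tau]] for t in range(tau, train)])
targets = x[tau + 1:train + 1]

# Zeroth-order local approximation: predict with the successor of the
# nearest neighbour in the reconstructed state space
query = np.array([x[test_idx], x[test_idx - tau]])
k = int(np.argmin(np.linalg.norm(emb - query, axis=1)))
prediction = targets[k]        # short-term forecast of x[test_idx + 1]
error = abs(prediction - x[test_idx + 1])
```

With a few thousand points on the attractor the nearest neighbour is close and the one-step forecast error is small; iterating the forecast lets the error grow at a rate set by the largest Lyapunov exponent, which is why only short-term prediction is possible.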

  8. Lattice of quantum predictions

    Science.gov (United States)

    Drieschner, Michael

    1993-10-01

    What is the structure of reality? Physics is supposed to answer this question, but a purely empiristic view is not sufficient to explain its ability to do so. Quantum mechanics has forced us to think more deeply about what a physical theory is. There are preconditions every physical theory must fulfill. It has to contain, e.g., rules for empirically testable predictions. Those preconditions give physics a structure that is “a priori” in the Kantian sense. An example is given how the lattice structure of quantum mechanics can be understood along these lines.

  9. Foundations of predictive analytics

    CERN Document Server

    Wu, James

    2012-01-01

    Drawing on the authors' two decades of experience in applied modeling and data mining, Foundations of Predictive Analytics presents the fundamental background required for analyzing data and building models for many practical applications, such as consumer behavior modeling, risk and marketing analytics, and other areas. It also discusses a variety of practical topics that are frequently missing from similar texts. The book begins with the statistical and linear algebra/matrix foundation of modeling methods, from distributions to cumulant and copula functions to Cornish--Fisher expansion and o

  10. Prediction of regulatory elements

    DEFF Research Database (Denmark)

    Sandelin, Albin

    2008-01-01

    Finding the regulatory mechanisms responsible for gene expression remains one of the most important challenges for biomedical research. A major focus in cellular biology is to find functional transcription factor binding sites (TFBS) responsible for the regulation of a downstream gene. As wet-lab methods are time consuming and expensive, it is not realistic to identify TFBS for all uncharacterized genes in the genome by purely experimental means. Computational methods aimed at predicting potential regulatory regions can increase the efficiency of wet-lab experiments significantly. Here, methods...

  11. Age and Stress Prediction

    Science.gov (United States)

    2000-01-01

    Genoa is a software product that predicts progressive aging and failure in a variety of materials. It is the result of an SBIR contract between the Glenn Research Center and Alpha Star Corporation. Genoa allows designers to determine if the materials they plan on applying to a structure are up to the task or if alternate materials should be considered. Genoa's two feature applications are its progressive failure simulations and its test verification. It allows for a reduction in inspection frequency, rapid design solutions, and manufacturing with low cost materials. It will benefit the aerospace, airline, and automotive industries, with future applications for other uses.

  12. Prediction of Biomolecular Complexes

    KAUST Repository

    Vangone, Anna

    2017-04-12

    Almost all processes in living organisms occur through specific interactions between biomolecules. Any dysfunction of those interactions can lead to pathological events. Understanding such interactions is therefore a crucial step in the investigation of biological systems and a starting point for drug design. In recent years, experimental studies have been devoted to unravel the principles of biomolecular interactions; however, due to experimental difficulties in solving the three-dimensional (3D) structure of biomolecular complexes, the number of available, high-resolution experimental 3D structures does not fulfill the current needs. Therefore, complementary computational approaches to model such interactions are necessary to assist experimentalists since a full understanding of how biomolecules interact (and consequently how they perform their function) only comes from 3D structures which provide crucial atomic details about binding and recognition processes. In this chapter we review approaches to predict biomolecular complexes, introducing the concept of molecular docking, a technique which uses a combination of geometric, steric and energetics considerations to predict the 3D structure of a biological complex starting from the individual structures of its constituent parts. We provide a mini-guide about docking concepts, its potential and challenges, along with post-docking analysis and a list of related software.

  13. Nuclear criticality predictability

    International Nuclear Information System (INIS)

    Briggs, J.B.

    1999-01-01

    As a result of these efforts, a large portion of the tedious and redundant research and processing of critical experiment data has been eliminated. The necessary step in criticality safety analyses of validating computer codes with benchmark critical data is greatly streamlined, and valuable criticality safety experimental data are preserved. Criticality safety personnel in 31 different countries are now using the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments'. Much has been accomplished by the work of the ICSBEP. However, evaluation and documentation represent only one element of a successful Nuclear Criticality Safety Predictability Program, and this element only exists as a separate entity because this work was not completed in conjunction with the experimentation process. I believe, however, that the work of the ICSBEP has also served to unify the other elements of nuclear criticality predictability. All elements are interrelated, but for a time it seemed that communication between these elements was not adequate. The ICSBEP has highlighted gaps in data, has retrieved lost data, has helped to identify errors in cross section processing codes, and has helped bring the international criticality safety community together in a common cause as true friends and colleagues. It has been a privilege to associate with those who work so diligently to make the project a success. (J.P.N.)

  14. Ratchetting strain prediction

    International Nuclear Information System (INIS)

    Noban, Mohammad; Jahed, Hamid

    2007-01-01

    A time-efficient method for predicting ratchetting strain is proposed. The ratchetting strain at any cycle is determined by finding the ratchetting rate at only a few cycles. This determination is done by first defining the trajectory of the origin of stress in the deviatoric stress space and then incorporating this moving origin into a cyclic plasticity model. It is shown that at the beginning of the loading, the starting point of this trajectory coincides with the initial stress origin and approaches the mean stress, displaying a power-law relationship with the number of loading cycles. The method of obtaining this trajectory from a standard uniaxial asymmetric cyclic loading is presented. Ratchetting rates are calculated with the help of this trajectory and through the use of a constitutive cyclic plasticity model which incorporates deviatoric stresses and back stresses that are measured with respect to this moving frame. The proposed model is used to predict the ratchetting strain of two types of steels under single- and multi-step loadings. Results obtained agree well with the available experimental measurements
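The power-law dependence of the ratchetting rate on cycle number is what makes the method time-efficient: a few computed cycles determine the law, which is then extrapolated. The sketch below fits rate = A*N^b in log-log space on synthetic rates (hypothetical values, not the paper's measurements) and accumulates the strain.

```python
import numpy as np

# Hypothetical ratchetting rates (strain per cycle) at a few early cycles,
# generated to follow rate = A * N**b exactly
N = np.array([2, 5, 10, 20, 50], dtype=float)
rate = 2e-3 * N ** -0.6

# Fit the power law by linear regression in log-log coordinates
b, log_a = np.polyfit(np.log(N), np.log(rate), 1)
A = np.exp(log_a)

# Extrapolate the per-cycle rate and accumulate strain up to cycle 1000
cycles = np.arange(1, 1001)
ratchet_strain = np.cumsum(A * cycles ** b)
# The fit recovers A = 2e-3 and b = -0.6 from only five cycles
```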

  15. Predicting space climate change

    Science.gov (United States)

    Balcerak, Ernie

    2011-10-01

    Galactic cosmic rays and solar energetic particles can be hazardous to humans in space, damage spacecraft and satellites, pose threats to aircraft electronics, and expose aircrew and passengers to radiation. A new study shows that these threats are likely to increase in coming years as the Sun approaches the end of the period of high solar activity known as “grand solar maximum,” which has persisted through the past several decades. High solar activity can help protect the Earth by repelling incoming galactic cosmic rays. Understanding the past record can help scientists predict future conditions. Barnard et al. analyzed a 9300-year record of galactic cosmic ray and solar activity based on cosmogenic isotopes in ice cores as well as on neutron monitor data. They used this to predict future variations in galactic cosmic ray flux, near-Earth interplanetary magnetic field, sunspot number, and probability of large solar energetic particle events. The researchers found that the risk of space weather radiation events will likely increase noticeably over the next century compared with recent decades and that lower solar activity will lead to increased galactic cosmic ray levels. (Geophysical Research Letters, doi:10.1029/2011GL048489, 2011)

  16. Prediction of Biomolecular Complexes

    KAUST Repository

    Vangone, Anna; Oliva, Romina; Cavallo, Luigi; Bonvin, Alexandre M. J. J.

    2017-01-01

    Almost all processes in living organisms occur through specific interactions between biomolecules. Any dysfunction of those interactions can lead to pathological events. Understanding such interactions is therefore a crucial step in the investigation of biological systems and a starting point for drug design. In recent years, experimental studies have been devoted to unravelling the principles of biomolecular interactions; however, due to experimental difficulties in solving the three-dimensional (3D) structure of biomolecular complexes, the number of available, high-resolution experimental 3D structures does not fulfill the current needs. Therefore, complementary computational approaches to model such interactions are necessary to assist experimentalists, since a full understanding of how biomolecules interact (and consequently how they perform their function) only comes from 3D structures, which provide crucial atomic details about binding and recognition processes. In this chapter we review approaches to predict biomolecular complexes, introducing the concept of molecular docking, a technique which uses a combination of geometric, steric and energetic considerations to predict the 3D structure of a biological complex starting from the individual structures of its constituent parts. We provide a mini-guide to docking concepts, potential and challenges, along with post-docking analysis and a list of related software.

  17. Energy Predictions 2011

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-10-15

    Even as the recession begins to subside, the energy sector is still likely to experience challenging conditions as we enter 2011. It should be remembered how very important a role energy plays in driving the global economy. Serving as a simple yet global and unified measure of economic recovery, it is oil's price range and the strength and sustainability of the recovery which will impact the ways in which all forms of energy are produced and consumed. The report aims for a closer insight into these predictions: What will happen with M and A (Mergers and Acquisitions) in the energy industry?; What are the prospects for renewables?; Will the water-energy nexus grow in importance?; How will technological leaps and bounds affect E and P (exploration and production) operations?; What about electric cars? This is the second year Deloitte's Global Energy and Resources Group has published its predictions for the year ahead. The report is based on in-depth interviews with clients, industry analysts, and senior energy practitioners from Deloitte member firms around the world.

  18. Energy Predictions 2011

    International Nuclear Information System (INIS)

    2010-10-01

    Even as the recession begins to subside, the energy sector is still likely to experience challenging conditions as we enter 2011. It should be remembered how very important a role energy plays in driving the global economy. Serving as a simple yet global and unified measure of economic recovery, it is oil's price range and the strength and sustainability of the recovery which will impact the ways in which all forms of energy are produced and consumed. The report aims for a closer insight into these predictions: What will happen with M and A (Mergers and Acquisitions) in the energy industry?; What are the prospects for renewables?; Will the water-energy nexus grow in importance?; How will technological leaps and bounds affect E and P (exploration and production) operations?; What about electric cars? This is the second year Deloitte's Global Energy and Resources Group has published its predictions for the year ahead. The report is based on in-depth interviews with clients, industry analysts, and senior energy practitioners from Deloitte member firms around the world.

  19. Predicting Alloreactivity in Transplantation

    Directory of Open Access Journals (Sweden)

    Kirsten Geneugelijk

    2014-01-01

    Human Leukocyte Antigen (HLA) mismatching leads to severe complications after solid-organ transplantation and hematopoietic stem-cell transplantation. The alloreactive responses underlying the posttransplantation complications include both direct recognition of allogeneic HLA by HLA-specific alloantibodies and T cells and indirect T-cell recognition. However, the immunogenicity of HLA mismatches is highly variable; some HLA mismatches lead to severe clinical B-cell- and T-cell-mediated alloreactivity, whereas others are well tolerated. Defining the permissibility of HLA mismatches prior to transplantation allows selection of donor-recipient combinations that will have a reduced chance of developing deleterious host-versus-graft responses after solid-organ transplantation and graft-versus-host responses after hematopoietic stem-cell transplantation. Therefore, several methods have been developed to predict permissible HLA-mismatch combinations. In this review we aim to give a comprehensive overview of current knowledge regarding HLA-directed alloreactivity and of several in vitro and in silico tools developed to predict direct and indirect alloreactivity.

  20. Generalized Predictive Control and Neural Generalized Predictive Control

    Directory of Open Access Journals (Sweden)

    Sadhana CHIDRAWAR

    2008-12-01

    Model Predictive Control (MPC) relies on a predictive model of the plant; here, predictive control using a multilayer feed-forward network as the plant's linear model is presented. Using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. This paper presents a detailed derivation of Generalized Predictive Control and Neural Generalized Predictive Control with Newton-Raphson as the minimization algorithm. The performance of the system has been tested on three separate systems. Simulation results show the effect of the neural network on Generalized Predictive Control. A performance comparison of these three system configurations is given in terms of ISE and IAE.
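    The ISE and IAE criteria used for the performance comparison above are simple integrals of the tracking error; for sampled data they reduce to sums. A minimal sketch:

    ```python
    import numpy as np

    def ise_iae(setpoint, output, dt=1.0):
        """Integral of Squared Error (ISE) and Integral of Absolute Error (IAE),
        approximated by rectangular sums over the sampled tracking error."""
        e = np.asarray(setpoint, dtype=float) - np.asarray(output, dtype=float)
        return float(np.sum(e ** 2) * dt), float(np.sum(np.abs(e)) * dt)

    # Step response settling toward a unit setpoint:
    # error = [1.0, 0.5, 0.1, 0.0] -> ISE = 1.26, IAE = 1.6
    ise, iae = ise_iae([1, 1, 1, 1], [0.0, 0.5, 0.9, 1.0])
    ```

    ISE penalizes large transient errors more heavily than IAE, which is why the two criteria can rank controllers differently.
    
    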

  1. Numerical prediction of rose growth

    NARCIS (Netherlands)

    Bernsen, E.; Bokhove, Onno; van der Sar, D.M.

    2006-01-01

    A new mathematical model is presented for the prediction of rose growth in a greenhouse. Given the measured ambient environmental conditions, the model consists of a local photosynthesis model, predicting the photosynthesis per unit leaf area, coupled to a global greenhouse model, which predicts the

  2. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  3. Protein docking prediction using predicted protein-protein interface

    Directory of Open Access Journals (Sweden)

    Li Bin

    2012-01-01

    Background: Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. Results: We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. Conclusion: We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.

  4. Protein docking prediction using predicted protein-protein interface.

    Science.gov (United States)

    Li, Bin; Kihara, Daisuke

    2012-01-10

    Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction for guiding protein docking. Since the accuracy of protein binding site prediction varies depending on cases, the challenge is to develop a method which does not deteriorate but improves docking results by using a binding site prediction which may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pair wise protein docking prediction algorithm, LZerD, which we have developed earlier. PI-LZerD starts from performing docking prediction using the provided protein-protein binding interface prediction as constraints, which is followed by the second round of docking with updated docking interface information to further improve docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves the docking prediction accuracy as compared with docking without using binding site prediction or using the binding site prediction as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm, which uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy over alternative methods in the series of benchmark experiments including docking using actual docking interface site predictions as well as unbound docking cases.

  5. Epitope prediction methods

    DEFF Research Database (Denmark)

    Karosiene, Edita

    Analysis. The chapter provides detailed explanations on how to use different methods for T cell epitope discovery research, explaining how input should be given as well as how to interpret the output. In the last chapter, I present the results of a bioinformatics analysis of epitopes from the yellow fever...... peptide-MHC interactions. Furthermore, using yellow fever virus epitopes, we demonstrated the power of the %Rank score when compared with the binding affinity score of MHC prediction methods, suggesting that this score should be considered to be used for selecting potential T cell epitopes. In summary...... immune responses. Therefore, it is of great importance to be able to identify peptides that bind to MHC molecules, in order to understand the nature of immune responses and discover T cell epitopes useful for designing new vaccines and immunotherapies. MHC molecules in humans, referred to as human...

  6. Motor degradation prediction methods

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.

    1996-12-01

    Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures.

  7. Filter replacement lifetime prediction

    Science.gov (United States)

    Hamann, Hendrik F.; Klein, Levente I.; Manzer, Dennis G.; Marianno, Fernando J.

    2017-10-25

    Methods and systems for predicting a filter lifetime include building a filter effectiveness history based on contaminant sensor information associated with a filter; determining a rate of filter consumption with a processor based on the filter effectiveness history; and determining a remaining filter lifetime based on the determined rate of filter consumption. Methods and systems for increasing filter economy include measuring contaminants in an internal and an external environment; determining a cost of a corrosion rate increase if unfiltered external air intake is increased for cooling; determining a cost of increased air pressure to filter external air; and if the cost of filtering external air exceeds the cost of the corrosion rate increase, increasing an intake of unfiltered external air.
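    The lifetime-prediction steps described above (build an effectiveness history, estimate a consumption rate, extrapolate to a remaining lifetime) can be sketched as follows. The linear degradation model and the end-of-life threshold are simplifying assumptions for illustration; the patent text does not specify either:

    ```python
    import numpy as np

    def remaining_filter_lifetime(times, effectiveness, end_of_life=0.2):
        """Estimate remaining filter lifetime from an effectiveness history.

        Fits a straight line to the history (a simplifying assumption) and
        extrapolates to the effectiveness level at which the filter is
        considered spent. Times and the returned lifetime share one unit.
        """
        rate, intercept = np.polyfit(times, effectiveness, 1)  # consumption rate
        if rate >= 0:
            return float("inf")  # no measurable degradation yet
        t_end = (end_of_life - intercept) / rate  # time when threshold is hit
        return t_end - times[-1]

    # Effectiveness falling from 1.0 by 0.01 per day over 30 days:
    days_left = remaining_filter_lifetime([0, 10, 20, 30], [1.0, 0.9, 0.8, 0.7])
    ```

    In practice the rate would be re-estimated as new contaminant-sensor readings arrive, so the remaining-lifetime estimate updates continuously.
    
    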

  8. Neurological abnormalities predict disability

    DEFF Research Database (Denmark)

    Poggesi, Anna; Gouw, Alida; van der Flier, Wiesje

    2014-01-01

    To investigate the role of neurological abnormalities and magnetic resonance imaging (MRI) lesions in predicting global functional decline in a cohort of initially independent-living elderly subjects. The Leukoaraiosis And DISability (LADIS) Study, involving 11 European centres, was primarily aimed...... at evaluating age-related white matter changes (ARWMC) as an independent predictor of the transition to disability (according to Instrumental Activities of Daily Living scale) or death in independent elderly subjects that were followed up for 3 years. At baseline, a standardized neurological examination.......0 years, 45 % males), 327 (51.7 %) presented at the initial visit with ≥1 neurological abnormality and 242 (38 %) reached the main study outcome. Cox regression analyses, adjusting for MRI features and other determinants of functional decline, showed that the baseline presence of any neurological...

  9. Motor degradation prediction methods

    International Nuclear Information System (INIS)

    Arnold, J.R.; Kelly, J.F.; Delzingaro, M.J.

    1996-01-01

    Motor Operated Valve (MOV) squirrel cage AC motor rotors are susceptible to degradation under certain conditions. Premature failure can result due to high humidity/temperature environments, high running load conditions, extended periods at locked rotor conditions (i.e. > 15 seconds) or exceeding the motor's duty cycle by frequent starts or multiple valve stroking. Exposure to high heat and moisture due to packing leaks, pressure seal ring leakage or other causes can significantly accelerate the degradation. ComEd and Liberty Technologies have worked together to provide and validate a non-intrusive method using motor power diagnostics to evaluate MOV rotor condition and predict failure. These techniques have provided a quick, low radiation dose method to evaluate inaccessible motors, identify degradation and allow scheduled replacement of motors prior to catastrophic failures

  10. Predictability in community dynamics.

    Science.gov (United States)

    Blonder, Benjamin; Moulton, Derek E; Blois, Jessica; Enquist, Brian J; Graae, Bente J; Macias-Fauria, Marc; McGill, Brian; Nogué, Sandra; Ordonez, Alejandro; Sandel, Brody; Svenning, Jens-Christian

    2017-03-01

    The coupling between community composition and climate change spans a gradient from no lags to strong lags. The no-lag hypothesis is the foundation of many ecophysiological models, correlative species distribution modelling and climate reconstruction approaches. Simple lag hypotheses have become prominent in disequilibrium ecology, proposing that communities track climate change following a fixed function or with a time delay. However, more complex dynamics are possible and may lead to memory effects and alternate unstable states. We develop graphical and analytic methods for assessing these scenarios and show that these dynamics can appear in even simple models. The overall implications are that (1) complex community dynamics may be common and (2) detailed knowledge of past climate change and community states will often be necessary yet sometimes insufficient to make predictions of a community's future state. © 2017 John Wiley & Sons Ltd/CNRS.

  11. Neonatal heart rate prediction.

    Science.gov (United States)

    Abdel-Rahman, Yumna; Jeremic, Aleksander; Tan, Kenneth

    2009-01-01

    Technological advances have caused a decrease in the number of infant deaths. Pre-term infants now have a substantially increased chance of survival. One of the mechanisms vital to saving the lives of these infants is continuous monitoring and early diagnosis. Continuous monitoring produces huge amounts of data with much information embedded in them. Using statistical analysis, this information can be extracted and used to aid diagnosis and to understand development. In this study we have a large dataset containing over 180 pre-term infants whose heart rates were recorded over the length of their stay in the Neonatal Intensive Care Unit (NICU). We test two types of models, empirical Bayesian and autoregressive moving average, and then attempt to predict future values. The autoregressive moving average model showed better results but required more computation.
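    A minimal least-squares autoregressive fit of the kind the comparison above builds on can be sketched as follows; the model order and estimation method here are illustrative choices, not those of the study:

    ```python
    import numpy as np

    def ar_fit_predict(series, order=2):
        """Fit an AR(order) model by least squares and return the
        one-step-ahead prediction of the next value.

        A minimal stand-in for the autoregressive part of an ARMA model;
        no moving-average term or intercept is included.
        """
        x = np.asarray(series, dtype=float)
        # Regressor row for time t holds the lags [x[t-1], ..., x[t-order]]
        X = np.column_stack(
            [x[order - k - 1:len(x) - k - 1] for k in range(order)]
        )
        y = x[order:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        # Predict the next value from the most recent `order` samples
        return float(coef @ x[-1:-order - 1:-1])

    # Exponentially decaying series x[t] = 0.9 * x[t-1]:
    series = [0.9 ** k for k in range(10)]
    next_value = ar_fit_predict(series, order=2)
    ```

    A full ARMA model would add a moving-average term on past prediction errors, which is what makes its estimation more expensive, consistent with the abstract's note on computation.
    
    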

  12. Chloride ingress prediction

    DEFF Research Database (Denmark)

    Frederiksen, Jens Mejer; Geiker, Mette Rica

    2008-01-01

    Prediction of chloride ingress into concrete is an important part of durability design of reinforced concrete structures exposed to chloride containing environment. This paper presents experimentally based design parameters for Portland cement concretes with and without silica fume and fly ash...... in marine atmospheric and submersed South Scandinavian environment. The design parameters are based on sequential measurements of 86 chloride profiles taken over ten years from 13 different types of concrete. The design parameters provide the input for an analytical model for chloride profiles as function...... of depth and time, when both the surface chloride concentration and the diffusion coefficient are allowed to vary in time. The model is presented in a companion paper....

  13. Strontium 90 fallout prediction

    International Nuclear Information System (INIS)

    Sarmiento, J.L.; Gwinn, E.

    1986-01-01

    An empirical formula is developed for predicting monthly sea-level strontium 90 fallout (F) in the northern hemisphere as a function of time (t), precipitation rate (P), latitude (phi), longitude (lambda), and the sea-level concentration of strontium 90 in air (C): F(lambda, phi, t) = C(t, phi)[v_d(phi) + v_w(lambda, phi, t)], where v_w(lambda, phi, t) = a(phi)[P(lambda, phi, t)/P_0]^b(phi) is the wet removal, v_d(phi) is the dry removal, and P_0 is 1 cm/month. The constants v_d, a, and b are determined as functions of latitude by fitting land-based observations. The concentration of 90 Sr in air is calculated as a function of the deseasonalized concentration at a reference latitude (C-bar_ref), the ratio of the observations at the latitude of interest to the reference latitude (R), and a function representing the seasonal trend in the air concentration (1 + g): C-bar(t, phi) = C-bar_ref(t) R(phi)[1 + g(m, phi)], where m is the month. Zonal trends in C are shown to be relatively small. This formula can be used in conjunction with precipitation observations and/or estimates to predict fallout in the northern hemisphere for any month in the years 1954 to 1974. Error estimates are given; they do not include uncertainty due to errors in precipitation data
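    The empirical fallout formula transcribes directly into code. The parameter values in the example call are placeholders for illustration; the paper's fitted latitude-dependent constants are not reproduced in the abstract:

    ```python
    def sr90_fallout(C, P, v_d, a, b, P0=1.0):
        """Monthly sea-level 90Sr fallout, F = C * (v_d + v_w), with wet
        removal v_w = a * (P / P0)**b, following the empirical formula
        in the abstract.

        C: 90Sr concentration in air; P: precipitation rate (cm/month);
        v_d, a, b: latitude-dependent fitted constants (the values used
        below are placeholders, not the paper's fitted values).
        """
        v_w = a * (P / P0) ** b  # wet removal
        return C * (v_d + v_w)   # total deposition

    # Placeholder parameters: 4 cm/month of rain, linear wet removal (b = 1)
    fallout = sr90_fallout(C=2.0, P=4.0, v_d=0.3, a=0.5, b=1.0)
    ```

    Because v_d, a and b vary with latitude, a real application would look them up per site before evaluating the formula against monthly precipitation records.
    
    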

  14. Plume rise predictions

    International Nuclear Information System (INIS)

    Briggs, G.A.

    1976-01-01

    Anyone involved with diffusion calculations becomes well aware of the strong dependence of maximum ground concentrations on the effective stack height, h_e. For most conditions chi_max is approximately proportional to h_e^(-2), as has been recognized at least since 1936 (Bosanquet and Pearson). Making allowance for the gradual decrease in the ratio of vertical to lateral diffusion at increasing heights, the exponent is slightly larger, say chi_max approximately proportional to h_e^(-2.3). In inversion breakup fumigation, the exponent is somewhat smaller; very crudely, chi_max approximately proportional to h_e^(-1.5). In any case, for an elevated emission the dependence of chi_max on h_e is substantial. It is postulated that a really clever ignorant theoretician can disguise his ignorance with dimensionless constants. For most sources the effective stack height is considerably larger than the actual source height, h_s. For instance, for power plants with no downwash problems, h_e is more than twice h_s whenever the wind is less than 10 m/sec, which is most of the time. This is unfortunate for anyone who has to predict ground concentrations, for he is likely to have to calculate the plume rise, Δh. In particular, using h_e = h_s + Δh instead of h_s may reduce chi_max by a factor of anywhere from 4 to infinity. Factors to be considered in making plume rise predictions are discussed
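    The scaling quoted above makes the reduction factor from plume rise easy to sketch; with the neutral-condition exponent of 2 and a plume rise equal to the stack height (h_e = 2 h_s), it reproduces the factor of 4 mentioned in the text:

    ```python
    def chi_max_ratio(h_s, delta_h, exponent=2.0):
        """Factor by which the peak ground concentration drops when the
        effective height h_e = h_s + delta_h is used instead of h_s,
        given chi_max proportional to h_e**(-exponent)."""
        return ((h_s + delta_h) / h_s) ** exponent

    # Plume rise equal to the stack height (h_e = 2 * h_s), exponent 2:
    factor = chi_max_ratio(100.0, 100.0)  # -> 4.0
    ```

    Larger exponents (e.g. 2.3) or larger plume rises push the factor well beyond 4, which is why the text gives an open-ended upper bound.
    
    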

  15. Predictive coarse-graining

    Energy Technology Data Exchange (ETDEWEB)

    Schöberl, Markus, E-mail: m.schoeberl@tum.de [Continuum Mechanics Group, Technical University of Munich, Boltzmannstraße 15, 85748 Garching (Germany); Zabaras, Nicholas [Institute for Advanced Study, Technical University of Munich, Lichtenbergstraße 2a, 85748 Garching (Germany); Department of Aerospace and Mechanical Engineering, University of Notre Dame, 365 Fitzpatrick Hall, Notre Dame, IN 46556 (United States); Koutsourelakis, Phaedon-Stelios [Continuum Mechanics Group, Technical University of Munich, Boltzmannstraße 15, 85748 Garching (Germany)

    2017-03-15

    We propose a data-driven, coarse-graining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a fine-to-coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse-to-fine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale (all-atom) data. From an information-theoretic perspective, the framework proposed provides an improvement upon the relative entropy method and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the coarse-graining process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of fine-scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse-graining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse-grained model. A flexible and parallelizable Monte Carlo – Expectation–Maximization (MC-EM) scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPC/E water model.

  16. REVIEW ON THE CURRENT STATE AND FUTURE DEVELOPMENT OF THE MACRO-SCALE PLANT ECOLOGICAL MODELS%宏观植物生态模型的研究现状与展望

    Institute of Scientific and Technical Information of China (English)

    苏宏新; 桑卫国

    2002-01-01

    This paper reviews the state of development of three main types of plant ecological models. 1) Population dynamics models, which simulate the germination, growth and death of individual plants of a single species within an ecosystem, together with intraspecific competition and interspecific interactions; these are among the earliest ecological models developed and are mainly used to analyze interactions between plant populations. 2) Succession models, which simulate changes in plant species (and associated animals) over the course of ecosystem development, including transitions between vegetation types and the accompanying changes in biogeochemical cycling; they can be used to study the response of biotic communities to climate change. 3) Ecosystem models, which treat the ecosystem as a functional whole; these fall into three main classes: (1) SVAT models, which simulate land-surface ecosystem processes, represented by BATS, SiB, SiB2 and LEAF, and are mostly used in climate research; (2) BGC models, which simulate three key cycles: carbon, water and nutrients; commonly used BGC models include FOREST-BGC, BIOME-BGC, CENTURY, TEM, DOLY and the integrated model families derived from them; (3) BG models, which simulate the distribution of plants across communities and biomes; representative BGMs include BIOME2 and MAPSS, used mainly to study shifts in biotic distributions caused by climate change. Finally, drawing on our own work, several directions for the development of ecological models over the coming years are outlined: 1) integration with basic disciplines, for example introducing phenology into ecological modelling to provide new points of support; 2) integration with modern nonlinear theory, to re-evaluate the assumptions underlying the models; 3) integration with modern technology, using 3S and computing techniques to provide stronger technical support for model development; 4) methodologically, a shift from reductionism toward holism, simulating the ecosystem as a functional whole wherever possible.

  17. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.

  18. Earthquake prediction with electromagnetic phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Hayakawa, Masashi, E-mail: hayakawa@hi-seismo-em.jp [Hayakawa Institute of Seismo Electomagnetics, Co. Ltd., University of Electro-Communications (UEC) Incubation Center, 1-5-1 Chofugaoka, Chofu Tokyo, 182-8585 (Japan); Advanced Wireless & Communications Research Center, UEC, Chofu Tokyo (Japan); Earthquake Analysis Laboratory, Information Systems Inc., 4-8-15, Minami-aoyama, Minato-ku, Tokyo, 107-0062 (Japan); Fuji Security Systems. Co. Ltd., Iwato-cho 1, Shinjyuku-ku, Tokyo (Japan)

    2016-02-01

    Short-term earthquake (EQ) prediction is defined as prospective prediction on a time scale of about one week, and is considered one of the most important and urgent topics for humankind. If such short-term prediction were realized, casualties would be drastically reduced. Unlike conventional seismic measurement, we proposed the use of electromagnetic phenomena as precursors to EQs in the prediction, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews short-term EQ prediction, including the myth that EQ prediction by seismometers is impossible, the reason why we are interested in electromagnetics, the history of seismo-electromagnetics, the ionospheric perturbation as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.

  19. Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance

    International Nuclear Information System (INIS)

    Atamturktur, Sez; Unal, Cetin; Hemez, Francois; Williams, Brian; Tome, Carlos

    2015-01-01

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a core reactor cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this

  20. Predictive Maturity of Multi-Scale Simulation Models for Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Atamturktur, Sez [Clemson Univ., SC (United States); Unal, Cetin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tome, Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-16

    The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a core reactor cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this

  1. Performance Prediction Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-25

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their tasks through message exchanges and can remain active, sleep, wake up, begin, and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes a tasklist as input. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with calls to the hardware model's time_compute() function, passing tasklists that model the compute kernels. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudocode. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU-core-level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly included the L1, L2, and L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few
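
The tasklist-to-time mapping described above can be sketched as follows. This is a hypothetical simplification, not the real PPT API: the core parameters and the explicit hit rates (which stand in for the AMM) are illustrative placeholders.

```python
# Illustrative core parameters, loosely in the spirit of a Haswell-class
# core; real PPT reads these from spec sheets or fits them to PAPI data.
CORE_PARAMS = {
    "clock_hz": 2.3e9,
    "cycles_per_alu_op": 1.0,
    "cycles_per_l1_hit": 4,
    "cycles_per_l2_hit": 12,
    "cycles_per_mem_access": 200,
}

def time_compute(tasklist, l1_hit_rate=0.9, l2_hit_rate=0.08):
    """Estimate runtime (seconds) for a tasklist of the form
    {"alu_ops": int, "mem_loads": int}. Explicit hit rates stand in
    for the translation of virtual memory accesses that PPT's AMM
    performs internally."""
    cycles = tasklist.get("alu_ops", 0) * CORE_PARAMS["cycles_per_alu_op"]
    loads = tasklist.get("mem_loads", 0)
    cycles += loads * l1_hit_rate * CORE_PARAMS["cycles_per_l1_hit"]
    cycles += loads * l2_hit_rate * CORE_PARAMS["cycles_per_l2_hit"]
    cycles += loads * (1 - l1_hit_rate - l2_hit_rate) * CORE_PARAMS["cycles_per_mem_access"]
    return cycles / CORE_PARAMS["clock_hz"]

# A kernel "tasklist": one million ALU ops, one hundred thousand loads.
kernel = {"alu_ops": 10**6, "mem_loads": 10**5}
print(time_compute(kernel))
```

In an application model, a call like this would replace each computational kernel inside the (pseudocode-like) loop structure.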

  2. Introduction: Long term prediction

    International Nuclear Information System (INIS)

    Beranger, G.

    2003-01-01

    Making a decision upon the right choice of a material appropriate to a given application should be based on taking into account several parameters: cost, standards, regulations, safety, recycling, chemical properties, supplying, transformation, forming, assembly, mechanical and physical properties as well as the behaviour in practical conditions. Data taken from a private communication (J.H. Davidson) are reproduced, presenting the lifetime range of materials from a couple of minutes to half a million hours, corresponding to applications from missile technology up to high-temperature nuclear reactors or steam turbines. In the case of deep storage of nuclear waste the time required is completely different from these values since we have to ensure the integrity of the storage system for several thousand years. The vitrified nuclear wastes should be stored in metallic canisters made of iron and carbon steels, stainless steels, copper and copper alloys, nickel alloys or titanium alloys. Some of these materials are passivating metals, i.e. they develop a thin protective film, 2 or 3 nm thick - the so-called passive films. These films prevent general corrosion of the metal in a large range of chemical conditions of the environment. In some specific conditions, localized corrosion, such as pitting, occurs. Consequently, it is absolutely necessary to determine these chemical conditions and their stability over time to understand the behavior of a given material. In other words the corrosion system is constituted by the complex material/surface/medium. For high level nuclear wastes the main features for resolving the problem are concerned with: geological disposal; deep storage in clay; waste metallic canister; backfill mixture (clay-gypsum) or concrete; long term behavior; data needed for modelling and for predicting; choice of appropriate solution among several metallic candidates.
The analysis of the complex material/surface/medium is of great importance

  3. Predictability of blocking

    International Nuclear Information System (INIS)

    Tosi, E.; Ruti, P.; Tibaldi, S.; D'Andrea, F.

    1994-01-01

    Tibaldi and Molteni (1990, hereafter referred to as TM) had previously investigated operational blocking predictability by the ECMWF model and the possible relationships between model systematic error and blocking in the winter season of the Northern Hemisphere, using seven years of ECMWF operational archives of analyses and day 1 to 10 forecasts. They showed that fewer blocking episodes than in the real atmosphere were generally simulated by the model, and that this deficiency increased with increasing forecast time. As a consequence of this, a major contribution to the systematic error in the winter season was shown to derive from the inability of the model to properly forecast blocking. In this study, the analysis performed in TM for the first seven winter seasons of the ECMWF operational model is extended to the subsequent five winters, during which model development, reflecting both resolution increases and parametrisation modifications, continued unabated. In addition the objective blocking index developed by TM has been applied to the observed data to study the natural low frequency variability of blocking. The ability to simulate blocking of some climate models has also been tested

  4. GABA predicts visual intelligence.

    Science.gov (United States)

    Cook, Emily; Hammett, Stephen T; Larsson, Jonas

    2016-10-06

    Early psychological researchers proposed a link between intelligence and low-level perceptual performance. It was recently suggested that this link is driven by individual variations in the ability to suppress irrelevant information, evidenced by the observation of strong correlations between perceptual surround suppression and cognitive performance. However, the neural mechanisms underlying such a link remain unclear. A candidate mechanism is neural inhibition by gamma-aminobutyric acid (GABA), but direct experimental support for GABA-mediated inhibition underlying suppression is inconsistent. Here we report evidence consistent with a global suppressive mechanism involving GABA underlying the link between sensory performance and intelligence. We measured visual cortical GABA concentration, visuo-spatial intelligence and visual surround suppression in a group of healthy adults. Levels of GABA were strongly predictive of both intelligence and surround suppression, with higher levels of intelligence associated with higher levels of GABA and stronger surround suppression. These results indicate that GABA-mediated neural inhibition may be a key factor determining cognitive performance and suggest a physiological mechanism linking surround suppression and intelligence. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  5. Predictability in cellular automata.

    Science.gov (United States)

    Agapie, Alexandru; Andreica, Anca; Chira, Camelia; Giuclea, Marius

    2014-01-01

    Modelled as finite homogeneous Markov chains, probabilistic cellular automata with local transition probabilities in (0, 1) always possess a stationary distribution. This result alone is not very helpful when it comes to predicting the final configuration; one needs also a formula connecting the probabilities in the stationary distribution to some intrinsic feature of the lattice configuration. Previous results on the asynchronous cellular automata have shown that such a feature really exists. It is the number of zero-one borders within the automaton's binary configuration. An exponential formula in the number of zero-one borders has been proved for the 1-D, 2-D and 3-D asynchronous automata with neighborhood three, five and seven, respectively. We perform computer experiments on a synchronous cellular automaton to check whether the empirical distribution also obeys that theoretical formula. The numerical results indicate a perfect fit for neighborhood three and five, which opens the way for a rigorous proof of the formula in this new, synchronous case.
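
The configuration feature the formula depends on, the number of zero-one borders, can be computed directly. A minimal sketch, assuming cyclic boundary conditions (the papers' exact boundary convention may differ):

```python
def zero_one_borders(config):
    """Count adjacent unequal pairs (zero-one borders) in a cyclic
    binary configuration, i.e. positions where a 0 neighbours a 1."""
    n = len(config)
    return sum(config[i] != config[(i + 1) % n] for i in range(n))

# The exponential formula then makes the stationary probability of a
# configuration proportional to c**borders for some constant c
# (illustrative form only).
print(zero_one_borders([0, 1, 1, 0, 1]))  # configuration 01101 -> 4 borders
```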

  6. Predictive Manufacturing: A Classification Strategy to Predict Product Failures

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Kulahci, Murat

    2018-01-01

    manufacturing analytics model that employs a big data approach to predicting product failures; third, we illustrate the issue of high dimensionality, along with statistically redundant information; and, finally, our proposed method will be compared against the well-known classification methods (SVM, K......-nearest neighbor, artificial neural networks). The results from real data show that our predictive manufacturing analytics approach, using genetic algorithms and Voronoi tessellations, is capable of predicting product failure with reasonable accuracy. The potential application of this method contributes...... to accurately predicting product failures, which would enable manufacturers to reduce production costs without compromising product quality....

  7. House Price Prediction Using LSTM

    OpenAIRE

    Chen, Xiaochen; Wei, Lai; Xu, Jiaxin

    2017-01-01

    In this paper, we use the house price data ranging from January 2004 to October 2016 to predict the average house price of November and December in 2016 for each district in Beijing, Shanghai, Guangzhou and Shenzhen. We apply the Autoregressive Integrated Moving Average model to generate the baseline, while LSTM networks are used to build the prediction model. These algorithms are compared in terms of Mean Squared Error. The result shows that the LSTM model has excellent properties with respect to predicting time...

  8. Long Range Aircraft Trajectory Prediction

    OpenAIRE

    Magister, Tone

    2009-01-01

    The subject of the paper is the improvement of the aircraft future trajectory prediction accuracy for long-range airborne separation assurance. The strategic planning of safe aircraft flights and effective conflict avoidance tactics demand timely and accurate conflict detection based upon future four-dimensional airborne traffic situation prediction which is as accurate as each aircraft flight trajectory prediction. The improved kinematics model of aircraft relative flight considering flight ...

  9. Review of Nearshore Morphologic Prediction

    Science.gov (United States)

    Plant, N. G.; Dalyander, S.; Long, J.

    2014-12-01

    The evolution of the world's erodible coastlines will determine the balance between the benefits and costs associated with human and ecological utilization of shores, beaches, dunes, barrier islands, wetlands, and estuaries. So, we would like to predict coastal evolution to guide management and planning of human and ecological response to coastal changes. After decades of research investment in data collection, theoretical and statistical analysis, and model development, we have a number of empirical, statistical, and deterministic models that can predict the evolution of the shoreline, beaches, dunes, and wetlands over time scales of hours to decades, and even predict the evolution of geologic strata over the course of millennia. Comparisons of predictions to data have demonstrated that these models can have meaningful predictive skill. But these comparisons also highlight the deficiencies in fundamental understanding, formulations, or data that are responsible for prediction errors and uncertainty. Here, we review a subset of predictive models of the nearshore to illustrate tradeoffs in complexity, predictive skill, and sensitivity to input data and parameterization errors. We identify where future improvement in prediction skill will result from improved theoretical understanding, data collection, and model-data assimilation.

  10. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    their low power requirements, are relatively cheap and are environment friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.

  11. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  12. Predictability and Prediction for an Experimental Cultural Market

    Science.gov (United States)

    Colbaugh, Richard; Glass, Kristin; Ormerod, Paul

    Individuals are often influenced by the behavior of others, for instance because they wish to obtain the benefits of coordinated actions or infer otherwise inaccessible information. In such situations this social influence decreases the ex ante predictability of the ensuing social dynamics. We claim that, interestingly, these same social forces can increase the extent to which the outcome of a social process can be predicted very early in the process. This paper explores this claim through a theoretical and empirical analysis of the experimental music market described and analyzed in [1]. We propose a very simple model for this music market, assess the predictability of market outcomes through formal analysis of the model, and use insights derived through this analysis to develop algorithms for predicting market share winners, and their ultimate market shares, in the very early stages of the market. The utility of these predictive algorithms is illustrated through analysis of the experimental music market data sets [2].

  13. Predicting epileptic seizures in advance.

    Directory of Open Access Journals (Sweden)

    Negin Moghim

    Full Text Available Epilepsy is the second most common neurological disorder, affecting 0.6-0.8% of the world's population. In this neurological disorder, abnormal activity of the brain causes seizures, the nature of which tend to be sudden. Antiepileptic Drugs (AEDs) are used as long-term therapeutic solutions that control the condition. Of those treated with AEDs, 35% become resistant to medication. The unpredictable nature of seizures poses risks for the individual with epilepsy. It is clearly desirable to find more effective ways of preventing seizures for such patients. The automatic detection of oncoming seizures, before their actual onset, can facilitate timely intervention and hence minimize these risks. In addition, advance prediction of seizures can enrich our understanding of the epileptic brain. In this study, drawing on the body of work behind automatic seizure detection and prediction from digitised Invasive Electroencephalography (EEG) data, a prediction algorithm, ASPPR (Advance Seizure Prediction via Pre-ictal Relabeling), is described. ASPPR facilitates the learning of predictive models targeted at recognizing patterns in EEG activity that are in a specific time window in advance of a seizure. It then exploits advanced machine learning coupled with the design and selection of appropriate features from EEG signals. Results, from evaluating ASPPR independently on 21 different patients, suggest that seizures for many patients can be predicted up to 20 minutes in advance of their onset. Compared to benchmark performance represented by a mean S1-Score (harmonic mean of Sensitivity and Specificity) of 90.6% for predicting seizure onset between 0 and 5 minutes in advance, ASPPR achieves mean S1-Scores of: 96.30% for prediction between 1 and 6 minutes in advance, 96.13% for prediction between 8 and 13 minutes in advance, 94.5% for prediction between 14 and 19 minutes in advance, and 94.2% for prediction between 20 and 25 minutes in advance.
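
The S1-Score quoted above is simply the harmonic mean of sensitivity and specificity, which can be computed directly (the example inputs are illustrative, not values from the study):

```python
def s1_score(sensitivity, specificity):
    """Harmonic mean of sensitivity and specificity (the S1-Score used
    to evaluate ASPPR); inputs are fractions in [0, 1]."""
    if sensitivity + specificity == 0:
        return 0.0
    return 2 * sensitivity * specificity / (sensitivity + specificity)

# e.g. a hypothetical classifier with sensitivity 0.95, specificity 0.975
print(s1_score(0.95, 0.975))
```

Because it is a harmonic mean, the S1-Score penalizes an imbalance between the two rates more than a plain average would.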

  14. Quadratic prediction of factor scores

    NARCIS (Netherlands)

    Wansbeek, T

    1999-01-01

    Factor scores are naturally predicted by means of their conditional expectation given the indicators y. Under normality this expectation is linear in y but in general it is an unknown function of y. It is discussed that under nonnormality factor scores can be more precisely predicted by a quadratic

  15. Predictions for Excited Strange Baryons

    Energy Technology Data Exchange (ETDEWEB)

    Fernando, Ishara P.; Goity, Jose L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-04-01

    An assessment is made of predictions for excited hyperon masses which follow from flavor symmetry and consistency with a 1/N_c expansion of QCD. Such predictions are based on presently established baryonic resonances. Low-lying hyperon resonances which do not seem to fit into the proposed scheme are discussed.

  16. Climate Prediction Center - Seasonal Outlook

    Science.gov (United States)

    PROGNOSTIC DISCUSSION FOR MONTHLY OUTLOOK NWS CLIMATE PREDICTION CENTER COLLEGE PARK MD INFLUENCE ON THE MONTHLY-AVERAGED CLIMATE. OUR MID-MONTH ASSESSMENT OF LOW-FREQUENCY CLIMATE VARIABILITY IS

  17. Dividend Predictability Around the World

    DEFF Research Database (Denmark)

    Rangvid, Jesper; Schmeling, Maik; Schrimpf, Andreas

    2014-01-01

    We show that dividend-growth predictability by the dividend yield is the rule rather than the exception in global equity markets. Dividend predictability is weaker, however, in large and developed markets where dividends are smoothed more, the typical firm is large, and volatility is lower. Our f...


  19. Decadal climate prediction (project GCEP).

    Science.gov (United States)

    Haines, Keith; Hermanson, Leon; Liu, Chunlei; Putt, Debbie; Sutton, Rowan; Iwi, Alan; Smith, Doug

    2009-03-13

    Decadal prediction uses climate models forced by changing greenhouse gases, as in Intergovernmental Panel on Climate Change (IPCC) projections, but unlike longer-range predictions they also require initialization with observations of the current climate. In particular, the upper-ocean heat content and circulation have a critical influence. Decadal prediction is still in its infancy and there is an urgent need to understand the important processes that determine predictability on these timescales. We have taken the first Hadley Centre Decadal Prediction System (DePreSys) and implemented it on several NERC institute compute clusters in order to study a wider range of initial condition impacts on decadal forecasting, eventually including the state of the land and cryosphere. The eScience methods are used to manage submission and output from the many ensemble model runs required to assess predictive skill. Early results suggest initial condition skill may extend for several years, even over land areas, but this depends sensitively on the definition used to measure skill, and alternatives are presented. The Grid for Coupled Ensemble Prediction (GCEP) system will allow the UK academic community to contribute to international experiments being planned to explore decadal climate predictability.

  20. Prediction during natural language comprehension

    NARCIS (Netherlands)

    Willems, R.M.; Frank, S.L.; Nijhof, A.D.; Hagoort, P.; Bosch, A.P.J. van den

    2016-01-01

    The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investigated the neural basis of 2 distinct aspects of word prediction, derived from information theory, during story comprehension. We assessed the effect of entropy of next-word probability distributions as

  1. Reliability of windstorm predictions in the ECMWF ensemble prediction system

    Science.gov (United States)

    Becker, Nico; Ulbrich, Uwe

    2016-04-01

    Windstorms caused by extratropical cyclones are one of the most dangerous natural hazards in the European region. Therefore, reliable predictions of such storm events are needed. Case studies have shown that ensemble prediction systems (EPS) are able to provide useful information about windstorms between two and five days prior to the event. In this work, ensemble predictions with the European Centre for Medium-Range Weather Forecasts (ECMWF) EPS are evaluated in a four year period. Within the 50 ensemble members, which are initialized every 12 hours and are run for 10 days, windstorms are identified and tracked in time and space. By using a clustering approach, different predictions of the same storm are identified in the different ensemble members and compared to reanalysis data. The occurrence probability of the predicted storms is estimated by fitting a bivariate normal distribution to the storm track positions. Our results show, for example, that predicted storm clusters with occurrence probabilities of more than 50% have a matching observed storm in 80% of all cases at a lead time of two days. The predicted occurrence probabilities are reliable up to 3 days lead time. At longer lead times the occurrence probabilities are overestimated by the EPS.
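
The occurrence-probability step described above can be illustrated with a small sketch. This is not the authors' code, and the ensemble positions are invented: a bivariate normal is fitted to the storm-centre positions from the ensemble members, and the (squared) Mahalanobis distance then measures how central a given position is under that fit.

```python
def fit_bivariate_normal(positions):
    """Fit a bivariate normal (mean vector, 2x2 covariance) to a list
    of (x, y) storm-centre positions from different ensemble members."""
    n = len(positions)
    mx = sum(p[0] for p in positions) / n
    my = sum(p[1] for p in positions) / n
    sxx = sum((p[0] - mx) ** 2 for p in positions) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in positions) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in positions) / (n - 1)
    return (mx, my), ((sxx, sxy), (sxy, syy))

def mahalanobis_sq(point, mean, cov):
    """Squared Mahalanobis distance of a point from the fitted normal;
    small values mean the point lies near the distribution's centre."""
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    (sxx, sxy), (_, syy) = cov
    det = sxx * syy - sxy * sxy  # 2x2 inverse via the closed form
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

# Synthetic "ensemble" of five predicted storm centres (lon, lat).
members = [(9.2, 54.8), (10.1, 55.3), (10.9, 55.0), (9.8, 54.5), (10.4, 55.6)]
mean, cov = fit_bivariate_normal(members)
print(mahalanobis_sq((10.0, 55.0), mean, cov))
```

Thresholding such a distance (or the corresponding density) is one simple way to turn the spread of ensemble storm tracks into an occurrence probability for a region.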

  2. Psychometric prediction of penitentiary recidivism.

    Science.gov (United States)

    Medina García, Pedro M; Baños Rivera, Rosa M

    2016-05-01

    Attempts to predict prison recidivism based on the personality have not been very successful. This study aims to provide data on recidivism prediction based on the scores on a personality questionnaire. For this purpose, a predictive model combining the actuarial procedure with a posteriori probability was developed, consisting of the probabilistic calculation of the effective verification of the event once it has already occurred. The Cuestionario de Personalidad Situacional (CPS; Fernández, Seisdedos, & Mielgo, 1998) was applied to 978 male inmates classified as recidivists or non-recidivists. High predictive power was achieved, with an area under the curve (AUC) of 0.85 (p < .001; SE = 0.012; 95% CI [0.826, 0.873]). The answers to the CPS items made it possible to properly discriminate 77.3% of the participants. These data indicate the important role of the personality as a key factor in understanding delinquency and predicting recidivism.

  3. Predictive Biomarkers for Asthma Therapy.

    Science.gov (United States)

    Medrek, Sarah K; Parulekar, Amit D; Hanania, Nicola A

    2017-09-19

    Asthma is a heterogeneous disease characterized by multiple phenotypes. Treatment of patients with severe disease can be challenging. Predictive biomarkers are measurable characteristics that reflect the underlying pathophysiology of asthma and can identify patients that are likely to respond to a given therapy. This review discusses current knowledge regarding predictive biomarkers in asthma. Recent trials evaluating biologic therapies targeting IgE, IL-5, IL-13, and IL-4 have utilized predictive biomarkers to identify patients who might benefit from treatment. Other work has suggested that using composite biomarkers may offer enhanced predictive capabilities in tailoring asthma therapy. Multiple biomarkers including sputum eosinophil count, blood eosinophil count, fractional concentration of nitric oxide in exhaled breath (FeNO), and serum periostin have been used to identify which patients will respond to targeted asthma medications. Further work is needed to integrate predictive biomarkers into clinical practice.

  4. Are abrupt climate changes predictable?

    Science.gov (United States)

    Ditlevsen, Peter

    2013-04-01

    It is taken for granted that the limited predictability in the initial value problem, the weather prediction, and the predictability of the statistics are two distinct problems. Lorenz (1975) dubbed this predictability of the first and the second kind respectively. Predictability of the first kind in a chaotic dynamical system is limited due to the well-known critical dependence on initial conditions. Predictability of the second kind is possible in an ergodic system, where either the dynamics is known and the phase space attractor can be characterized by simulation or the system can be observed for such long times that the statistics can be obtained from temporal averaging, assuming that the attractor does not change in time. For the climate system the distinction between predictability of the first and the second kind is fuzzy. This difficulty in distinction between predictability of the first and of the second kind is related to the lack of scale separation between fast and slow components of the climate system. The non-linear nature of the problem furthermore opens the possibility of multiple attractors, or multiple quasi-steady states. As the ice-core records show, the climate has been jumping between different quasi-stationary climates (stadials and interstadials) through the Dansgaard-Oeschger events. Such a jump happens very fast when a critical tipping point has been reached. The question is: Can such a tipping point be predicted? This is a new kind of predictability: the third kind. If the tipping point is reached through a bifurcation, where the stability of the system is governed by some control parameter, changing in a predictable way to a critical value, the tipping is predictable. If the sudden jump occurs because internal chaotic fluctuations (noise) push the system across a barrier, the tipping is as unpredictable as the triggering noise.
In order to hint at an answer to this question, a careful analysis of the high temporal resolution NGRIP isotope

  5. Emerging approaches in predictive toxicology.

    Science.gov (United States)

    Zhang, Luoping; McHale, Cliona M; Greene, Nigel; Snyder, Ronald D; Rich, Ivan N; Aardema, Marilyn J; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan

    2014-12-01

    Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, needs for further refinement and obstacles to expand computational models to include additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. © 2014 Wiley Periodicals, Inc.

  6. Earthquake prediction by Kina Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of the earliest desires of man. Scientists have worked hard to predict earthquakes for a long time. The results of these efforts can generally be divided into two methods of prediction: 1) Statistical Method, and 2) Empirical Method. In the first method, earthquakes are predicted using statistics and probabilities, while the second method utilizes a variety of precursors for earthquake prediction. The latter method is time consuming and more costly. However, the result of neither method has fully satisfied man up to now. In this paper a new method entitled 'Kiana Method' is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then, the time and the magnitude of an earthquake in the future are calculated using electrical formulas, in particular those for capacitors. In this method, by daily measurement of electrical resistance in an area, we determine whether or not the area is prone to earthquake occurrence in the future. If the result shows a positive sign, then the occurrence time and the magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  7. Collective motion of predictive swarms.

    Directory of Open Access Journals (Sweden)

    Nathaniel Rupprecht

    Full Text Available Theoretical models of populations and swarms typically start with the assumption that the motion of agents is governed by the local stimuli. However, an intelligent agent, with some understanding of the laws that govern its habitat, can anticipate the future, and make predictions to gather resources more efficiently. Here we study a specific model of this kind, where agents aim to maximize their consumption of a diffusing resource, by attempting to predict the future of a resource field and the actions of other agents. Once the agents make a prediction, they are attracted to move towards regions that have, and will have, denser resources. We find that the further the agents attempt to see into the future, the more their attempts at prediction fail, and the less resources they consume. We also study the case where predictive agents compete against non-predictive agents and find the predictors perform better than the non-predictors only when their relative numbers are very small. We conclude that predictivity pays off either when the predictors do not see too far into the future or the number of predictors is small.

  8. Dividend Predictability Around the World

    DEFF Research Database (Denmark)

    Rangvid, Jesper; Schrimpf, Andreas

    The common perception in the literature, mainly based on U.S. data, is that current dividend yields are uninformative about future dividends. We show that this finding changes substantially when looking at a broad international panel of countries, as aggregate dividend growth rates are found...... that in countries where the quality of institutions is high, dividend predictability is weaker. These findings indicate that the apparent lack of dividend predictability in the U.S. does not, in general, extend to other countries. Rather, dividend predictability is driven by cross-country differences in firm...

  9. The Theory of Linear Prediction

    CERN Document Server

    Vaidyanathan, PP

    2007-01-01

    Linear prediction theory has had a profound impact in the field of digital signal processing. Although the theory dates back to the early 1940s, its influence can still be seen in applications today. The theory is based on very elegant mathematics and leads to many beautiful insights into statistical signal processing. Although prediction is only a part of the more general topics of linear estimation, filtering, and smoothing, this book focuses on linear prediction. This has enabled detailed discussion of a number of issues that are normally not found in texts. For example, the theory of vecto

  10. Practical aspects of geological prediction

    International Nuclear Information System (INIS)

    Mallio, W.J.; Peck, J.H.

    1981-01-01

    Nuclear waste disposal requires that geology be a predictive science. The prediction of future events rests on (1) recognizing the periodicity of geologic events; (2) defining a critical dimension of effect, such as the area of a drainage basin, the length of a fault trace, etc.; and (3) using our understanding of active processes to project the frequency and magnitude of future events in the light of geological principles. Of importance to nuclear waste disposal are longer-term processes such as continental denudation and removal of materials by glacial erosion. Constant testing of projections will allow the practical limits of predicting geological events to be defined. 11 refs

  11. Adaptive filtering prediction and control

    CERN Document Server

    Goodwin, Graham C

    2009-01-01

    Preface1. Introduction to Adaptive TechniquesPart 1. Deterministic Systems2. Models for Deterministic Dynamical Systems3. Parameter Estimation for Deterministic Systems4. Deterministic Adaptive Prediction5. Control of Linear Deterministic Systems6. Adaptive Control of Linear Deterministic SystemsPart 2. Stochastic Systems7. Optimal Filtering and Prediction8. Parameter Estimation for Stochastic Dynamic Systems9. Adaptive Filtering and Prediction10. Control of Stochastic Systems11. Adaptive Control of Stochastic SystemsAppendicesA. A Brief Review of Some Results from Systems TheoryB. A Summary o

  12. Predicting emergency diesel starting performance

    International Nuclear Information System (INIS)

    DeBey, T.M.

    1989-01-01

    The US Department of Energy effort to extend the operational lives of commercial nuclear power plants has examined methods for predicting the performance of specific equipment. This effort focuses on performance prediction as a means for reducing equipment surveillance, maintenance, and outages. Realizing these goals will result in nuclear plants that are more reliable, have lower maintenance costs, and have longer lives. This paper describes a monitoring system that has been developed to predict starting performance in emergency diesels. A prototype system has been built and tested on an engine at Sandia National Laboratories. 2 refs

  13. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  14. Fatigue life prediction in composites

    CSIR Research Space (South Africa)

    Huston, RJ

    1994-01-01

    Full Text Available Because of the relatively large number of possible failure mechanisms in fibre reinforced composite materials, the prediction of fatigue life in a component is not a simple process. Several mathematical and statistical models have been proposed...

  15. Trading network predicts stock price.

    Science.gov (United States)

    Sun, Xiao-Qian; Shen, Hua-Wei; Cheng, Xue-Qi

    2014-01-16

    Stock price prediction is an important and challenging problem for studying financial markets. Existing studies are mainly based on the time series of stock price or the operation performance of listed company. In this paper, we propose to predict stock price based on investors' trading behavior. For each stock, we characterize the daily trading relationship among its investors using a trading network. We then classify the nodes of trading network into three roles according to their connectivity pattern. Strong Granger causality is found between stock price and trading relationship indices, i.e., the fraction of trading relationship among nodes with different roles. We further predict stock price by incorporating these trading relationship indices into a neural network based on time series of stock price. Experimental results on 51 stocks in two Chinese Stock Exchanges demonstrate the accuracy of stock price prediction is significantly improved by the inclusion of trading relationship indices.
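The trading-relationship indices can be sketched as follows. The paper classifies nodes of the daily trading network into three roles by connectivity pattern; the exact rule is not given in the abstract, so the role assignment below (by in- and out-degree) is an assumption for illustration:

```python
from collections import defaultdict

def node_roles(edges):
    """Classify investors by connectivity pattern (illustrative rule, not
    necessarily the paper's): sources, sinks, and intermediaries."""
    out_d, in_d = defaultdict(int), defaultdict(int)
    for u, v in edges:
        out_d[u] += 1
        in_d[v] += 1
    roles = {}
    for n in set(out_d) | set(in_d):
        if out_d[n] and in_d[n]:
            roles[n] = "intermediary"
        elif out_d[n]:
            roles[n] = "source"
        else:
            roles[n] = "sink"
    return roles

def relationship_indices(edges):
    """Fraction of trading links between each ordered pair of roles; these
    are the kind of indices fed into the price-prediction model."""
    roles = node_roles(edges)
    counts = defaultdict(int)
    for u, v in edges:
        counts[(roles[u], roles[v])] += 1
    total = len(edges)
    return {pair: c / total for pair, c in counts.items()}
```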

  16. Prediction based on mean subset

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Brown, P. J.; Madsen, Henrik

    2002-01-01

    Shrinkage methods have traditionally been applied in prediction problems. In this article we develop a shrinkage method (mean subset) that forms an average of regression coefficients from individual subsets of the explanatory variables. A Bayesian approach is taken to derive an expression of how the coefficient vectors from each subset should be weighted. It is not computationally feasible to calculate the mean subset coefficient vector for larger problems, and thus we suggest an algorithm to find an approximation to the mean subset coefficient vector. In a comprehensive Monte Carlo simulation study, it is found that the proposed mean subset method has superior prediction performance compared to the best subset method, and in some settings it is also better than the ridge regression and lasso methods. The conclusions drawn from the Monte Carlo study are corroborated in an example in which prediction...
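A minimal, unweighted version of the mean-subset idea looks as follows (the paper derives Bayesian weights for the subsets and an approximation algorithm for large problems; this sketch simply averages ordinary least-squares fits over all non-empty subsets):

```python
import itertools
import numpy as np

def mean_subset_coefficients(X, y):
    """Average the OLS coefficient vectors fitted on every non-empty subset
    of predictors, each embedded back into a full-length vector.
    Unweighted for illustration; the paper weights subsets Bayesianly."""
    n, p = X.shape
    subsets = [s for r in range(1, p + 1)
               for s in itertools.combinations(range(p), r)]
    total = np.zeros(p)
    for s in subsets:
        idx = list(s)
        beta_s, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        full = np.zeros(p)
        full[idx] = beta_s       # coefficients of excluded predictors stay 0
        total += full
    return total / len(subsets)
```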

  17. EPRI MOV performance prediction program

    International Nuclear Information System (INIS)

    Hosler, J.F.; Damerell, P.S.; Eidson, M.G.; Estep, N.E.

    1994-01-01

    An overview of the EPRI Motor-Operated Valve (MOV) Performance Prediction Program is presented. The objectives of this Program are to better understand the factors affecting the performance of MOVs and to develop and validate methodologies to predict MOV performance. The Program involves valve analytical modeling, separate-effects testing to refine the models, and flow-loop and in-plant MOV testing to provide a basis for model validation. The ultimate product of the Program is an MOV Performance Prediction Methodology applicable to common gate, globe, and butterfly valves. The methodology predicts thrust and torque requirements at design-basis flow and differential pressure conditions, assesses the potential for gate valve internal damage, and provides test methods to quantify potential variations in actuator output thrust with loading condition. Key findings and their potential impact on MOV design and engineering application are summarized

  18. In silico prediction of genotoxicity.

    Science.gov (United States)

    Wichard, Jörg D

    2017-08-01

    The in silico prediction of genotoxicity has made considerable progress during the last years. The main driver for the pharmaceutical industry is the ICH M7 guideline about the assessment of DNA reactive impurities. An important component of this guideline is the use of in silico models as an alternative approach to experimental testing. The in silico prediction of genotoxicity provides an established and accepted method that defines the first step in the assessment of DNA reactive impurities. This was made possible by the growing amount of reliable Ames screening data, the attempts to understand the activity pathways and the subsequent development of computer-based prediction systems. This paper gives an overview of how the in silico prediction of genotoxicity is performed under the ICH M7 guideline. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. New Tool to Predict Glaucoma

    Science.gov (United States)

    Glaucoma can be difficult to detect and diagnose. Measurement ...

  20. Dynamical Predictability of Monthly Means.

    Science.gov (United States)

    Shukla, J.

    1981-12-01

    We have attempted to determine the theoretical upper limit of dynamical predictability of monthly means for prescribed nonfluctuating external forcings. We have extended the concept of 'classical' predictability, which primarily refers to the lack of predictability due mainly to the instabilities of synoptic-scale disturbances, to the predictability of time averages, which are determined by the predictability of low-frequency planetary waves. We have carried out 60-day integrations of a global general circulation model with nine different initial conditions but identical boundary conditions of sea surface temperature, snow, sea ice and soil moisture. Three of these initial conditions are the observed atmospheric conditions on 1 January of 1975, 1976 and 1977. The other six initial conditions are obtained by superimposing over the observed initial conditions a random perturbation comparable to the errors of observation. The root-mean-square (rms) error of random perturbations at all the grid points and all the model levels is 3 m s⁻¹ in the u and v components of wind. The rms vector wind error between the observed initial conditions is >15 m s⁻¹. It is hypothesized that for a given averaging period, if the rms error among the time averages predicted from largely different initial conditions becomes comparable to the rms error among the time averages predicted from randomly perturbed initial conditions, the time averages are dynamically unpredictable. We have carried out the analysis of variance to compare the variability, among the three groups, due to largely different initial conditions, and within each group due to random perturbations. It is found that the variances among the first 30-day means, predicted from largely different initial conditions, are significantly different from the variances due to random perturbations in the initial conditions, whereas the variances among 30-day means for days 31-60 are not distinguishable from the variances due to random initial
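The experiment's central comparison, the spread among time averages started from largely different initial conditions versus the spread due to random perturbations, can be sketched in scalar form (a toy reduction of the paper's analysis of variance; the tolerance is an assumption):

```python
def rms_spread(members):
    """RMS deviation of ensemble members about the ensemble mean."""
    mean = sum(members) / len(members)
    return (sum((x - mean) ** 2 for x in members) / len(members)) ** 0.5

def is_unpredictable(signal_members, noise_members, tol=1.0):
    """Deem a time average dynamically unpredictable once the spread across
    different initial conditions is comparable to (here: no larger than
    `tol` times) the spread across randomly perturbed initial conditions."""
    return rms_spread(signal_members) <= tol * rms_spread(noise_members)
```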

  1. Predictive coding in Agency Detection

    DEFF Research Database (Denmark)

    Andersen, Marc Malmdorf

    2017-01-01

    Agency detection is a central concept in the cognitive science of religion (CSR). Experimental studies, however, have so far failed to lend support to some of the most common predictions that follow from current theories on agency detection. In this article, I argue that predictive coding, a highly...... promising new framework for understanding perception and action, may solve pending theoretical inconsistencies in agency detection research, account for the puzzling experimental findings mentioned above, and provide hypotheses for future experimental testing. Predictive coding explains how the brain......, unbeknownst to consciousness, engages in sophisticated Bayesian statistics in an effort to constantly predict the hidden causes of sensory input. My fundamental argument is that most false positives in agency detection can be seen as the result of top-down interference in a Bayesian system generating high...

  2. Time-predictable Stack Caching

    DEFF Research Database (Denmark)

    Abbaspourseyedi, Sahar

    completely. Thus, in systems with hard deadlines the worst-case execution time (WCET) of the real-time software running on them needs to be bounded. Modern architectures use features such as pipelining and caches for improving the average performance. These features, however, make the WCET analysis more...... addresses, provides an opportunity to predict and tighten the WCET of accesses to data in caches. In this thesis, we introduce the time-predictable stack cache design and implementation within a time-predictable processor. We introduce several optimizations to our design for tightening the WCET while...... keeping the time-predictability of the design intact. Moreover, we provide a solution for reducing the cost of context switching in a system using the stack cache. In design of these caches, we use custom hardware and compiler support for delivering time-predictable stack data accesses. Furthermore...

  3. NASA/MSFC prediction techniques

    International Nuclear Information System (INIS)

    Smith, R.E.

    1987-01-01

    The NASA/MSFC method of forecasting is more formal than NOAA's. The data are smoothed by the Lagrangian method and linear regression prediction techniques are used. The solar activity period is fixed at 11 years--the mean period of all previous cycles. Interestingly, the present prediction for the time of the next solar minimum is February or March of 1987, which, within the uncertainties of the two methods, can be taken to be the same as the NOAA result

  4. Prediction of molecular crystal structures

    International Nuclear Information System (INIS)

    Beyer, Theresa

    2001-01-01

    The ab initio prediction of molecular crystal structures is a scientific challenge. Reliability of first-principle prediction calculations would show a fundamental understanding of crystallisation. Crystal structure prediction is also of considerable practical importance, as different crystalline arrangements of the same molecule in the solid state (polymorphs) are likely to have different physical properties. A method of crystal structure prediction based on lattice energy minimisation has been developed in this work. The choice of the intermolecular potential and of the molecular model is crucial for the results of such studies, and both of these criteria have been investigated. An empirical atom-atom repulsion-dispersion potential for carboxylic acids has been derived and applied in a crystal structure prediction study of formic, benzoic and the polymorphic system of tetrolic acid. As many experimental crystal structure determinations at different temperatures are available for the polymorphic system of paracetamol (acetaminophen), the influence of the variations of the molecular model on the crystal structure lattice energy minima has also been studied. The general problem of prediction methods based on the assumption that the experimental thermodynamically stable polymorph corresponds to the global lattice energy minimum is that more hypothetical low lattice energy structures are found within a few kJ mol⁻¹ of the global minimum than are likely to be experimentally observed polymorphs. This is illustrated by the results for molecule I, 3-oxabicyclo(3.2.0)hepta-1,4-diene, studied for the first international blind test for small organic crystal structures organised by the Cambridge Crystallographic Data Centre (CCDC) in May 1999. To reduce the number of predicted polymorphs, additional factors to thermodynamic criteria have to be considered. Therefore the elastic constants and vapour growth morphologies have been calculated for the lowest lattice energy
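Empirical atom-atom repulsion-dispersion potentials of the kind derived here are commonly of exp-6 (Buckingham) form, U(r) = A exp(-Br) - C/r^6. A minimal sketch of evaluating such an energy over a finite cluster (the functional form and parameters are illustrative assumptions; a real lattice-energy code sums over periodic images and adds electrostatics):

```python
import math

def buckingham(r, a, b, c):
    """Atom-atom repulsion-dispersion (exp-6) pair potential."""
    return a * math.exp(-b * r) - c / r ** 6

def cluster_energy(positions, a, b, c):
    """Pairwise energy sum over a finite set of atoms (no periodic images
    in this toy sketch, and a single atom type for simplicity)."""
    e = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = math.dist(positions[i], positions[j])
            e += buckingham(r, a, b, c)
    return e
```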

  5. Does Carbon Dioxide Predict Temperature?

    OpenAIRE

    Mytty, Tuukka

    2013-01-01

    Does carbon dioxide predict temperature? No it does not, in the time period of 1880-2004 with the carbon dioxide and temperature data used in this thesis. According to the Intergovernmental Panel on Climate Change (IPCC), carbon dioxide is the most important factor in raising the global temperature. Therefore, it is reasonable to assume that carbon dioxide truly predicts temperature. Because this paper uses observational data, it has to be kept in mind that no causality interpretation can be ma...

  6. Prediction of molecular crystal structures

    Energy Technology Data Exchange (ETDEWEB)

    Beyer, Theresa

    2001-07-01

    The ab initio prediction of molecular crystal structures is a scientific challenge. Reliability of first-principle prediction calculations would show a fundamental understanding of crystallisation. Crystal structure prediction is also of considerable practical importance, as different crystalline arrangements of the same molecule in the solid state (polymorphs) are likely to have different physical properties. A method of crystal structure prediction based on lattice energy minimisation has been developed in this work. The choice of the intermolecular potential and of the molecular model is crucial for the results of such studies, and both of these criteria have been investigated. An empirical atom-atom repulsion-dispersion potential for carboxylic acids has been derived and applied in a crystal structure prediction study of formic, benzoic and the polymorphic system of tetrolic acid. As many experimental crystal structure determinations at different temperatures are available for the polymorphic system of paracetamol (acetaminophen), the influence of the variations of the molecular model on the crystal structure lattice energy minima has also been studied. The general problem of prediction methods based on the assumption that the experimental thermodynamically stable polymorph corresponds to the global lattice energy minimum is that more hypothetical low lattice energy structures are found within a few kJ mol⁻¹ of the global minimum than are likely to be experimentally observed polymorphs. This is illustrated by the results for molecule I, 3-oxabicyclo(3.2.0)hepta-1,4-diene, studied for the first international blind test for small organic crystal structures organised by the Cambridge Crystallographic Data Centre (CCDC) in May 1999. To reduce the number of predicted polymorphs, additional factors to thermodynamic criteria have to be considered. Therefore the elastic constants and vapour growth morphologies have been calculated for the lowest lattice energy

  7. Prediction of interannual climate variations

    International Nuclear Information System (INIS)

    Shukla, J.

    1993-01-01

    It has been known for some time that the behavior of the short-term fluctuations of the earth's atmosphere resembles that of a chaotic non-linear dynamical system, and that the day-to-day weather cannot be predicted beyond a few weeks. However, it has also been found that the interactions of the atmosphere with the underlying oceans and the land surfaces can produce fluctuations whose time scales are much longer than the limits of deterministic prediction of weather. It is, therefore, natural to ask whether it is possible that the seasonal and longer time averages of climate fluctuations can be predicted with sufficient skill to be beneficial for social and economic applications, even though the details of day-to-day weather cannot be predicted beyond a few weeks. The main objective of the workshop was to address this question by assessing the current state of knowledge on predictability of seasonal and interannual climate variability and to investigate various possibilities for its prediction. (orig./KW)

  8. Postprocessing for Air Quality Predictions

    Science.gov (United States)

    Delle Monache, L.

    2017-12-01

    In recent years, air quality (AQ) forecasting has made significant progress towards better predictions, with the goal of protecting the public from harmful pollutants. This progress is the result of improvements in weather and chemical transport models, their coupling, and more accurate emission inventories (e.g., with the development of new algorithms to account in near real-time for fires). Nevertheless, AQ predictions are still affected at times by significant biases, which stem from limitations in both weather and chemistry transport models. Those are the result of numerical approximations and the poor representation (and understanding) of important physical and chemical processes. Moreover, although the quality of emission inventories has been significantly improved, they are still one of the main sources of uncertainty in AQ predictions. For operational real-time AQ forecasting, a significant portion of these biases can be reduced with the implementation of postprocessing methods. We will review some of the techniques that have been proposed to reduce both systematic and random errors of AQ predictions, and to improve the correlation between predictions and observations of ground-level ozone and surface particulate matter less than 2.5 µm in diameter (PM2.5). These methods, which can be applied to both deterministic and probabilistic predictions, include simple bias-correction techniques, corrections inspired by the Kalman filter, regression methods, and the more recently developed analog-based algorithms. These approaches will be compared and contrasted, and the strengths and weaknesses of each will be discussed.
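The simplest of the postprocessing techniques listed, a sequential bias correction, can be sketched as follows (the smoothing weight is an assumption; a Kalman-filter variant would adapt it from the estimated error variances):

```python
def bias_corrected(forecasts, observations, beta=0.2):
    """Sequential bias correction for an operational forecast stream:
    maintain an exponentially smoothed estimate of the forecast bias and
    subtract it from each new raw forecast. `beta` controls how fast the
    bias estimate adapts (illustrative value, not an operational setting)."""
    bias = 0.0
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)                  # correct with bias known so far
        bias = (1 - beta) * bias + beta * (f - o)   # update once obs verifies
    return corrected
```

For a forecast with a constant offset, the corrected series converges to the observations as the bias estimate spins up.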

  9. Predictive value of diminutive colonic adenoma trial: the PREDICT trial.

    Science.gov (United States)

    Schoenfeld, Philip; Shad, Javaid; Ormseth, Eric; Coyle, Walter; Cash, Brooks; Butler, James; Schindler, William; Kikendall, Walter J; Furlong, Christopher; Sobin, Leslie H; Hobbs, Christine M; Cruess, David; Rex, Douglas

    2003-05-01

    Diminutive adenomas (1-9 mm in diameter) are frequently found during colon cancer screening with flexible sigmoidoscopy (FS). This trial assessed the predictive value of these diminutive adenomas for advanced adenomas in the proximal colon. In a multicenter, prospective cohort trial, we matched 200 patients with normal FS and 200 patients with diminutive adenomas on FS for age and gender. All patients underwent colonoscopy. The presence of advanced adenomas (adenoma ≥ 10 mm in diameter, villous adenoma, adenoma with high grade dysplasia, and colon cancer) and adenomas (any size) was recorded. Before colonoscopy, patients completed questionnaires about risk factors for adenomas. The prevalence of advanced adenomas in the proximal colon was similar in patients with diminutive adenomas and patients with normal FS (6% vs. 5.5%, respectively) (relative risk, 1.1; 95% confidence interval [CI], 0.5-2.6). Diminutive adenomas on FS did not accurately predict advanced adenomas in the proximal colon: sensitivity, 52% (95% CI, 32%-72%); specificity, 50% (95% CI, 49%-51%); positive predictive value, 6% (95% CI, 4%-8%); and negative predictive value, 95% (95% CI, 92%-97%). Male gender (odds ratio, 1.63; 95% CI, 1.01-2.61) was associated with an increased risk of proximal colon adenomas. Diminutive adenomas on sigmoidoscopy may not accurately predict advanced adenomas in the proximal colon.
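The reported metrics follow from a standard 2x2 screening table. The counts below are reconstructed from the reported percentages (12 of 200 diminutive-adenoma patients and 11 of 200 normal-FS patients with advanced proximal adenomas), so treat them as an illustration rather than the trial's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from true/false positives/negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }
```

With tp=12, fp=188, fn=11, tn=189 these reproduce the abstract's 52% sensitivity, 50% specificity, 6% PPV and 95% NPV.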

  10. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  11. Climate Prediction - NOAA's National Weather Service

    Science.gov (United States)

    Climate Prediction: long-range forecasts across the U.S. (NOAA's National Weather Service Climate Prediction web sites).

  12. Weighted-Average Least Squares Prediction

    NARCIS (Netherlands)

    Magnus, Jan R.; Wang, Wendun; Zhang, Xinyu

    2016-01-01

    Prediction under model uncertainty is an important and difficult issue. Traditional prediction methods (such as pretesting) are based on model selection followed by prediction in the selected model, but the reported prediction and the reported prediction variance ignore the uncertainty from the

  13. Potential Predictability and Prediction Skill for Southern Peru Summertime Rainfall

    Science.gov (United States)

    WU, S.; Notaro, M.; Vavrus, S. J.; Mortensen, E.; Block, P. J.; Montgomery, R. J.; De Pierola, J. N.; Sanchez, C.

    2016-12-01

    The central Andes receive over 50% of annual climatological rainfall during the short period of January-March. This summertime rainfall exhibits strong interannual and decadal variability, including severe drought events that incur devastating societal impacts and cause agricultural communities and mining facilities to compete for limited water resources. An improved seasonal prediction skill of summertime rainfall would aid in water resource planning and allocation across the water-limited southern Peru. While various underlying mechanisms have been proposed by past studies for the drivers of interannual variability in summertime rainfall across southern Peru, such as the El Niño-Southern Oscillation (ENSO), Madden Julian Oscillation (MJO), and extratropical forcings, operational forecasts continue to be largely based on rudimentary ENSO-based indices, such as NINO3.4, justifying further exploration of predictive skill. In order to bridge this gap between the understanding of driving mechanisms and the operational forecast, we performed systematic studies on the predictability and prediction skill of southern Peru summertime rainfall by constructing statistical forecast models using best available weather station and reanalysis datasets. At first, by assuming the first two empirical orthogonal functions (EOFs) of summertime rainfall are predictable, the potential predictability skill was evaluated for southern Peru. Then, we constructed a simple regression model, based on the time series of tropical Pacific sea-surface temperatures (SSTs), and a more advanced Linear Inverse Model (LIM), based on the EOFs of tropical ocean SSTs and large-scale atmosphere variables from reanalysis. Our results show that the LIM model consistently outperforms the more rudimentary regression models on the forecast skill of domain averaged precipitation index and individual station indices. The improvement of forecast correlation skill ranges from 10% to over 200% for different
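The Linear Inverse Model used above forecasts an anomaly state vector with a propagator estimated from lagged covariances, x(t+τ) ≈ G(τ) x(t) with G(τ) = C(τ) C(0)⁻¹. A minimal sketch (the plain covariance estimator and full state space are simplifying assumptions; operational LIMs typically work in a truncated EOF space):

```python
import numpy as np

def lim_propagator(states, lag):
    """Estimate the LIM propagator G(lag) = C(lag) @ C(0)^-1 from a
    (time x variables) array of anomaly states."""
    x0 = states[:-lag]           # states at time t
    xl = states[lag:]            # states at time t + lag
    c0 = x0.T @ x0 / len(x0)     # lag-0 covariance
    cl = xl.T @ x0 / len(x0)     # lagged covariance
    return cl @ np.linalg.inv(c0)

def lim_forecast(state, g):
    """One-lag forecast of the anomaly state."""
    return g @ state
```

On a noiseless linear system the estimator recovers the true dynamics exactly, which makes it easy to sanity-check.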

  14. Prediction of GNSS satellite clocks

    International Nuclear Information System (INIS)

    Broederbauer, V.

    2010-01-01

    This thesis deals with the characterisation and prediction of GNSS satellite clocks. A prerequisite for developing powerful algorithms for the prediction of clock corrections is a thorough study of the behaviour of the different clock types on the satellites. In this context the predicted part of the IGU clock corrections provided by the Analysis Centers (ACs) of the IGS was compared to the IGS Rapid clock solutions to determine reasonable estimates of the quality of already existing, well performing predictions. For the shortest investigated interval (three hours) all ACs obtain almost the same accuracy of 0.1 to 0.4 ns. For longer intervals the individual prediction results start to diverge. Thus, for a 12-hour interval the differences range from nearly 10 ns (GFZ, CODE) up to some tens of ns. Based on the estimated clock corrections provided via the IGS Rapid products, a simple quadratic polynomial turns out to be sufficient to describe the time series of Rubidium clocks. On the other hand, Cesium clocks show a periodical behaviour (revolution period) with an amplitude of up to 6 ns. A clear correlation between these amplitudes and the Sun elevation angle above the orbital planes can be demonstrated. The variability of the amplitudes is supposed to be caused by temperature variations affecting the oscillator. To account for this periodical behaviour, a quadratic polynomial with an additional sine term was finally chosen as the prediction model for both the Cesium and the Rubidium clocks. The three polynomial parameters as well as the amplitude and phase shift of the periodic term are estimated within a least-squares adjustment by means of program GNSS-VC/static. Input data are time series of the observed part of the IGU clock corrections. With the estimated parameters, clock corrections are predicted for various durations. The mean error of the prediction of Rubidium clock corrections for an interval of six hours reaches up to 1.5 ns. For the 12-hour
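The clock model described, a quadratic polynomial plus a sinusoid of fixed (revolution) period, is linear in its parameters once the sine term is split into sine and cosine components, so the least-squares adjustment reduces to one `lstsq` call (a sketch with illustrative variable names, not the GNSS-VC/static implementation):

```python
import numpy as np

def fit_clock_model(t, x, period):
    """Least-squares fit of x(t) = a0 + a1*t + a2*t^2 + A*sin(wt) + B*cos(wt),
    with the period fixed (e.g. the satellite revolution period), so the
    model is linear in all five parameters."""
    w = 2 * np.pi / period
    A = np.column_stack([np.ones_like(t), t, t ** 2,
                         np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coef

def predict_clock(t, coef, period):
    """Evaluate the fitted model to extrapolate clock corrections."""
    w = 2 * np.pi / period
    return (coef[0] + coef[1] * t + coef[2] * t ** 2
            + coef[3] * np.sin(w * t) + coef[4] * np.cos(w * t))
```

The amplitude and phase of the periodic term follow from the two harmonic coefficients as sqrt(A² + B²) and atan2(B, A).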

  15. Geophysical Anomalies and Earthquake Prediction

    Science.gov (United States)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  16. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Full Text Available Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  17. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. 
Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural

  18. Quantifying prognosis with risk predictions.

    Science.gov (United States)

    Pace, Nathan L; Eberhart, Leopold H J; Kranke, Peter R

    2012-01-01

    Prognosis is a forecast, based on present observations in a patient, of their probable outcome from disease, surgery and so on. Research methods for the development of risk probabilities may not be familiar to some anaesthesiologists. We briefly describe methods for identifying risk factors and risk scores. A probability prediction rule assigns a risk probability to a patient for the occurrence of a specific event. Probability reflects the continuum between absolute certainty (Pi = 1) and certain impossibility (Pi = 0). Biomarkers and clinical covariates that modify risk are known as risk factors. The Pi as modified by risk factors can be estimated by identifying the risk factors and their weighting; these are usually obtained by stepwise logistic regression. The accuracy of probabilistic predictors can be separated into the concepts of 'overall performance', 'discrimination' and 'calibration'. Overall performance is the mathematical distance between predictions and outcomes. Discrimination is the ability of the predictor to rank order observations with different outcomes. Calibration is the correctness of prediction probabilities on an absolute scale. Statistical methods include the Brier score, coefficient of determination (Nagelkerke R2), C-statistic and regression calibration. External validation is the comparison of the actual outcomes to the predicted outcomes in a new and independent patient sample. External validation uses the statistical methods of overall performance, discrimination and calibration and is uniformly recommended before acceptance of the prediction model. Evidence from randomised controlled clinical trials should be obtained to show the effectiveness of risk scores for altering patient management and patient outcomes.
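Two of the accuracy concepts defined in this abstract, overall performance (the Brier score) and discrimination (the C-statistic), can be computed directly. The sketch below uses only the standard library; the probabilities and outcomes are hypothetical, not from the paper.

```python
# Minimal sketch: Brier score (overall performance) and C-statistic
# (discrimination) for a set of predicted risk probabilities.

def brier_score(probs, outcomes):
    """Mean squared distance between predicted probability and 0/1 outcome."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def c_statistic(probs, outcomes):
    """Probability that a randomly chosen event case is ranked above a
    randomly chosen non-event case (ties count as 0.5)."""
    events = [p for p, y in zip(probs, outcomes) if y == 1]
    non_events = [p for p, y in zip(probs, outcomes) if y == 0]
    pairs = [(e, n) for e in events for n in non_events]
    score = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return score / len(pairs)

probs    = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]   # hypothetical predicted risks
outcomes = [1,   1,   0,   1,   0,   0]     # observed events

print(round(brier_score(probs, outcomes), 3))   # lower is better
print(round(c_statistic(probs, outcomes), 3))   # 0.5 = chance, 1.0 = perfect
```

Calibration, the third concept, would additionally compare predicted probabilities against observed event rates within risk bands.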

  19. PREDICTING DEMAND FOR COTTON YARNS

    Directory of Open Access Journals (Sweden)

    SALAS-MOLINA Francisco

    2017-05-01

    Full Text Available Predicting demand for fashion products is crucial for textile manufacturers. In an attempt to both avoid out-of-stocks and minimize holding costs, different forecasting techniques are used by production managers. Both linear and non-linear time-series analysis techniques are suitable options for forecasting purposes. However, demand for fashion products presents a number of particular characteristics such as short life-cycles, short selling seasons, high impulse purchasing, high volatility, low predictability, tremendous product variety and a high number of stock-keeping-units. In this paper, we focus on predicting demand for cotton yarns using a non-linear forecasting technique that has been fruitfully used in many areas, namely, random forests. To this end, we first identify a number of explanatory variables to be used as a key input to forecasting using random forests. We consider explanatory variables usually labeled either as causal variables, when some correlation is expected between them and the forecasted variable, or as time-series features, when extracted from time-related attributes such as seasonality. Next, we evaluate the predictive power of each variable by means of out-of-sample accuracy measurement. We experiment on a real data set from a textile company in Spain. The numerical results show that simple time-series features present more predictive ability than other more sophisticated explanatory variables.

  20. Lightning prediction using radiosonde data

    Energy Technology Data Exchange (ETDEWEB)

    Weng, L.Y.; Bin Omar, J.; Siah, Y.K.; Bin Zainal Abidin, I.; Ahmad, S.K. [Univ. Tenaga, Darul Ehsan (Malaysia). College of Engineering

    2008-07-01

    Lightning is a natural phenomenon in tropical regions. Malaysia experiences very high cloud-to-ground lightning density, posing both health and economic concerns to individuals and industries. In the commercial sector, power lines, telecommunication towers and buildings are most frequently hit by lightning. In the event that a power line is hit and the protection system fails, industries which rely on that power line would cease operations temporarily, resulting in significant monetary loss. Current technology is unable to prevent lightning occurrences. However, the ability to predict lightning would significantly reduce damages from direct and indirect lightning strikes. For that reason, this study focused on developing a method to predict lightning with radiosonde data using only a simple back propagation neural network model written in C code. The study was performed at the Kuala Lumpur International Airport (KLIA). In this model, the parameters related to wind were disregarded. Preliminary results indicate that this method shows some positive results in predicting lightning. However, a larger dataset is needed in order to obtain more accurate predictions. It was concluded that future work should include wind parameters to fully capture all properties of lightning formation, and subsequently its prediction. 8 refs., 5 figs.

  1. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social...... phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly...... seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  2. Intelligent Prediction of Ship Maneuvering

    Directory of Open Access Journals (Sweden)

    Miroslaw Lacki

    2016-09-01

    Full Text Available In this paper the author presents an idea of an intelligent ship maneuvering prediction system using neuroevolution. This may also be seen as a ship handling system that simulates the learning process of an autonomous control unit, created with an artificial neural network. The control unit observes input signals and calculates the values of the required parameters for vessel maneuvering in confined waters. In neuroevolution such units are treated as individuals in a population of artificial neural networks, which through environmental sensing and evolutionary algorithms learn to perform a given task efficiently. The main task of the system is to learn continuously and predict the values of the navigational parameters of the vessel after a certain amount of time, taking into account the influence of its environment. The result of a prediction may be issued as a warning to the navigator about an incoming threat.

  3. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications for linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
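The model-selection idea in this abstract, augmenting a linear model with quadratic terms and comparing candidates by leave-one-out cross-validation, can be sketched as follows. The data are synthetic, and ordinary least squares stands in for LASSO to keep the example dependency-light; it is an illustration of the selection procedure, not the paper's model.

```python
import numpy as np

# Synthetic "training loads" with a genuinely quadratic effect on the result.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 2))
y = 2.0 * X[:, 0] + 3.0 * X[:, 1] ** 2 + rng.normal(0, 0.05, 30)

def with_quadratic(X):
    """Append squared terms: the 'nonlinear part' of the candidate model."""
    return np.hstack([X, X ** 2])

def loocv_rmse(X, y):
    """Leave-one-out cross-validated RMSE of a least-squares fit."""
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        A = np.hstack([np.ones((mask.sum(), 1)), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ coef
        errors.append((pred - y[i]) ** 2)
    return float(np.sqrt(np.mean(errors)))

print(loocv_rmse(X, y))                   # purely linear model
print(loocv_rmse(with_quadratic(X), y))   # linear + quadratic terms
```

The candidate with the smaller cross-validated error is selected, exactly as the paper does when comparing its linear and modified models.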

  4. Sentence-Level Attachment Prediction

    Science.gov (United States)

    Albakour, M.-Dyaa; Kruschwitz, Udo; Lucas, Simon

    Attachment prediction is the task of automatically identifying email messages that should contain an attachment. This can be useful to tackle the problem of sending out emails but forgetting to include the relevant attachment (something that happens all too often). A common Information Retrieval (IR) approach in analyzing documents such as emails is to treat the entire document as a bag of words. Here we propose a finer-grained analysis to address the problem. We aim at identifying individual sentences within an email that refer to an attachment. If we detect any such sentence, we predict that the email should have an attachment. Using part of the Enron corpus for evaluation we find that our finer-grained approach outperforms previously reported document-level attachment prediction in similar evaluation settings.

  5. BBN predictions for 4He

    International Nuclear Information System (INIS)

    Walker, T.P.

    1993-01-01

    The standard model of the hot big bang assumes a homogeneous and isotropic Universe with gravity described by General Relativity and strong and electroweak interactions described by the Standard Model of particle physics. The hot big bang model makes the unavoidable prediction that the production of primordial elements occurred about one minute after the big bang (referred to as big bang or primordial nucleosynthesis, BBN). This review concerns the range of the primordial abundance of 4He as predicted by standard BBN (i.e., primordial nucleosynthesis assuming a homogeneous distribution of baryons). In it the author discusses: (1) uncertainties in the calculation of Y_p (the mass fraction of primordial 4He), (2) the expected range of Y_p, (3) how the predictions stack up against the latest observations, and (4) the latest BBN bounds on Ω_B h² and N_ν. 13 refs., 2 figs

  6. Human motion simulation predictive dynamics

    CERN Document Server

    Abdel-Malek, Karim

    2013-01-01

    Simulate realistic human motion in a virtual world with an optimization-based approach to motion prediction. With this approach, motion is governed by human performance measures, such as speed and energy, which act as objective functions to be optimized. Constraints on joint torques and angles are imposed quite easily. Predicting motion in this way allows one to use avatars to study how and why humans move the way they do, given specific scenarios. It also enables avatars to react to infinitely many scenarios with substantial autonomy. With this approach it is possible to predict dynamic motion without having to integrate equations of motion -- rather than solving equations of motion, this approach solves for a continuous time-dependent curve characterizing joint variables (also called joint profiles) for every degree of freedom. Introduces rigorous mathematical methods for digital human modelling and simulation Focuses on understanding and representing spatial relationships (3D) of biomechanics Develops an i...

  7. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  8. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  9. Ensemble method for dengue prediction.

    Science.gov (United States)

    Buczak, Anna L; Baugher, Benjamin; Moniz, Linda J; Bagley, Thomas; Babin, Steven M; Guven, Erhan

    2018-01-01

    In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.
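One of the component-model families named above, the additive seasonal Holt-Winters method, can be written in a few lines with the standard library. The smoothing constants and the weekly-count series below are illustrative, not the Challenge configuration.

```python
# Sketch of an additive seasonal Holt-Winters forecaster: level, trend,
# and seasonal components are smoothed recursively, then extrapolated.

def holt_winters_additive(x, period, alpha=0.3, beta=0.1, gamma=0.2, horizon=4):
    """Fit level/trend/seasonal components and forecast `horizon` steps ahead."""
    m = period
    level = sum(x[:m]) / m
    trend = (sum(x[m:2 * m]) - sum(x[:m])) / (m * m)
    season = [x[i] - level for i in range(m)]
    for t in range(m, len(x)):
        prev_level = level
        level = alpha * (x[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (x[t] - level) + (1 - gamma) * season[t % m]
    n = len(x)
    return [level + h * trend + season[(n + h - 1) % m] for h in range(1, horizon + 1)]

# A purely seasonal case-count series is recovered exactly.
season = [30, 10, -15, -25]
x = [100 + season[t % 4] for t in range(24)]
print(holt_winters_additive(x, period=4))   # ≈ [130.0, 110.0, 85.0, 75.0]
```

In an ensemble such as the one described, forecasts from several such component models (with and without pre-smoothing) would then be combined or selected based on held-out performance.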

  10. Evoked emotions predict food choice.

    Science.gov (United States)

    Dalenberg, Jelle R; Gutjar, Swetlana; Ter Horst, Gert J; de Graaf, Kees; Renken, Remco J; Jager, Gerry

    2014-01-01

    In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over liking scores alone. Previous research has shown that liking measures correlate with choice. However, liking is not a strong predictor of food choice in real life environments. Therefore, the focus within recent studies has shifted towards using emotion-profiling methods that can successfully discriminate between products that are equally liked. However, it is unclear how well scores from emotion-profiling methods predict actual food choice and/or consumption. To test this, we proposed to decompose emotion scores into valence and arousal scores using Principal Component Analysis (PCA) and apply Multinomial Logit Models (MLM) to estimate food choice using liking, valence, and arousal as possible predictors. For this analysis, we used an existing data set comprised of liking and food-evoked emotion scores from 123 participants, who rated 7 unlabeled breakfast drinks. Liking scores were measured using a 100-mm visual analogue scale, while food-evoked emotions were measured using 2 existing emotion-profiling methods: a verbal and a non-verbal method (EsSense Profile and PrEmo, respectively). After 7 days, participants were asked to choose 1 breakfast drink from the experiment to consume during breakfast in a simulated restaurant environment. Cross validation showed that we were able to correctly predict individualized food choice (1 out of 7 products) for over 50% of the participants. This number increased to nearly 80% when looking at the top 2 candidates. Model comparisons showed that evoked emotions better predict food choice than perceived liking alone. However, the strongest predictive strength was achieved by the combination of evoked emotions and liking. Furthermore, we showed that non-verbal food-evoked emotion scores more accurately predict food choice than verbal food-evoked emotion scores.
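The decomposition step described here, reducing a products-by-emotions score matrix to two principal components that serve as valence- and arousal-like predictors, can be sketched via SVD. All numbers below are synthetic stand-ins, not the study's ratings.

```python
import numpy as np

# Illustrative PCA step: 7 hypothetical drinks rated on 10 emotion terms.
rng = np.random.default_rng(1)
scores = rng.normal(0, 1, size=(7, 10))

centered = scores - scores.mean(axis=0)    # center each emotion scale
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ Vt[:2].T           # per-product scores on PC1 and PC2

explained = (S ** 2) / (S ** 2).sum()      # variance share of each component
print(components.shape)                    # (7, 2)
print(float(explained[:2].sum()))          # fraction captured by the first two PCs
```

The two columns of `components` would then enter a multinomial logit choice model alongside liking, as the abstract describes.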

  11. Ensemble method for dengue prediction.

    Directory of Open Access Journals (Sweden)

    Anna L Buczak

    Full Text Available In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.

  12. Dinosaur fossils predict body temperatures.

    Directory of Open Access Journals (Sweden)

    James F Gillooly

    2006-07-01

    Full Text Available Perhaps the greatest mystery surrounding dinosaurs concerns whether they were endotherms, ectotherms, or some unique intermediate form. Here we present a model that yields estimates of dinosaur body temperature based on ontogenetic growth trajectories obtained from fossil bones. The model predicts that dinosaur body temperatures increased with body mass from approximately 25 degrees C at 12 kg to approximately 41 degrees C at 13,000 kg. The model also successfully predicts observed increases in body temperature with body mass for extant crocodiles. These results provide direct evidence that dinosaurs were reptiles that exhibited inertial homeothermy.
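As a back-of-envelope check on the quoted numbers (not the paper's ontogenetic growth model), a temperature that rises linearly with the logarithm of body mass can be passed through the two stated endpoints, (12 kg, 25 °C) and (13,000 kg, 41 °C). The functional form is an assumption made purely for illustration.

```python
import math

# Illustrative log-linear interpolation through the abstract's two endpoints.
def body_temperature(mass_kg):
    m0, t0 = 12.0, 25.0        # small dinosaur endpoint
    m1, t1 = 13000.0, 41.0     # large dinosaur endpoint
    slope = (t1 - t0) / math.log10(m1 / m0)   # °C per tenfold mass increase
    return t0 + slope * math.log10(mass_kg / m0)

for mass in (12, 1000, 13000):
    print(f"{mass:>6} kg -> {body_temperature(mass):.1f} °C")
```

Under this assumed form, body temperature climbs roughly 5 °C per tenfold increase in mass, which conveys the scale of the inertial-homeothermy effect the abstract describes.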

  13. Calorimetry end-point predictions

    International Nuclear Information System (INIS)

    Fox, M.A.

    1981-01-01

    This paper describes a portion of the work presently in progress at Rocky Flats in the field of calorimetry. In particular, calorimetry end-point predictions are outlined. The problems associated with end-point predictions and the progress made in overcoming these obstacles are discussed. The two major problems, noise and an accurate description of the heat function, are dealt with to obtain the most accurate results. Data are taken from an actual calorimeter and are processed by means of three different noise reduction techniques. The processed data are then utilized by one to four algorithms, depending on the accuracy desired, to determine the end-point

  14. Prediction of eyespot infection risks

    Directory of Open Access Journals (Sweden)

    M. Váňová

    2012-12-01

    Full Text Available The objective of the study was to design a prediction model for eyespot (Tapesia yallundae infection based on climatic factors (temperature, precipitation, air humidity. Data from experiment years 1994-2002 were used to study correlations between the eyespot infection index and individual weather characteristics. The model of prediction was constructed using multiple regression when a separate parameter is assigned to each factor, i.e. the frequency of days with optimum temperatures, humidity, and precipitation. The correlation between relative air humidity and precipitation and the infection index is significant.

  15. Can we predict nuclear proliferation

    International Nuclear Information System (INIS)

    Tertrais, Bruno

    2011-01-01

    The author aims at improving nuclear proliferation prediction capacities, i.e. the capacities to identify countries susceptible to acquire nuclear weapons, to interpret sensitive activities, and to assess nuclear program modalities. He first proposes a retrospective assessment of counter-proliferation actions since 1945. Then, based on academic studies, he analyzes what causes and motivates proliferation, with notably the possibility of existence of a chain phenomenon (mechanisms driving from one program to another). He makes recommendations for a global approach to proliferation prediction, and proposes proliferation indices and indicators

  16. CERAPP: Collaborative Estrogen Receptor Activity Prediction Project

    Data.gov (United States)

    U.S. Environmental Protection Agency — Data from a large-scale modeling project called CERAPP (Collaborative Estrogen Receptor Activity Prediction Project) demonstrating using predictive computational...

  17. The Challenge of Weather Prediction

    Indian Academy of Sciences (India)

    The Challenge of Weather Prediction – Old and Modern Ways of Weather Forecasting. B N Goswami. Series Article. Resonance – Journal of Science Education, Volume 2, Issue 3, March 1997, pp. 8-15.

  18. Predictability of weather and climate

    National Research Council Canada - National Science Library

    Palmer, Tim; Hagedorn, Renate

    2006-01-01

    ... and anthropogenic climate change are among those included. Ensemble systems for forecasting predictability are discussed extensively. Ed Lorenz, father of chaos theory, makes a contribution to theoretical analysis with a previously unpublished paper. This well-balanced volume will be a valuable resource for many years. High-quality chapter autho...

  19. Evaluation of environmental impact predictions

    International Nuclear Information System (INIS)

    Cunningham, P.A.; Adams, S.M.; Kumar, K.D.

    1977-01-01

    An analysis and evaluation of the ecological monitoring program at the Surry Nuclear Power Plant showed that predictions of potential environmental impact made in the Final Environmental Statement (FES), which were based on generally accepted ecological principles, were not completely substantiated by environmental monitoring data. The Surry Nuclear Power Plant (Units 1 and 2) was chosen for study because of the facility's relatively continuous operating history and the availability of environmental data adequate for analysis. Preoperational and operational fish monitoring data were used to assess the validity of the FES prediction that fish would congregate in the thermal plume during winter months and would avoid the plume during summer months. Analysis of monitoring data showed that fish catch per unit effort (CPE) was generally high in the thermal plume during winter months; however, the highest fish catches occurred in the plume during the summer. Possible explanations for differences between the FES prediction and results observed in analysis of monitoring data are discussed, and general recommendations are outlined for improving impact assessment predictions

  20. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
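The predictability cue itself, positing a word boundary wherever the transition probability between adjacent sub-lexical units drops, can be sketched as follows. The mini-lexicon and threshold are invented for illustration, and this batch version omits the paper's incremental, cognitively plausible learning setting.

```python
from collections import Counter

def transition_probs(stream):
    """Estimate P(next unit | current unit) from a continuous stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(stream, probs, threshold=0.9):
    """Insert a word boundary wherever predictability falls below threshold."""
    words, word = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if probs.get((a, b), 0.0) < threshold:   # low predictability -> boundary
            words.append("".join(word))
            word = []
        word.append(b)
    words.append("".join(word))
    return words

words = ["badu", "kigo", "temy"]       # hypothetical mini-lexicon
order = [0, 1, 2, 0, 2, 1, 0, 1, 2]    # word tokens in the speech stream
stream = list("".join(words[i] for i in order))
probs = transition_probs(stream)
print(segment(stream, probs))          # the word tokens are recovered
```

Within-word transitions are fully predictable in this toy language, while word-final units can be followed by several different onsets, so predictability dips exactly at the true boundaries.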

  1. Solution Patterns Predicting Pythagorean Triples

    Science.gov (United States)

    Ezenweani, Ugwunna Louis

    2013-01-01

    Pythagoras Theorem is an old mathematical treatise that has traversed the school curricula from secondary to tertiary levels. The patterns it produced are quite interesting that many researchers have tried to generate a kind of predictive approach to identifying triples. Two attempts, namely Diophantine equation and Brahmagupta trapezium presented…
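One classical predictive pattern for triples, Euclid's formula (m² − n², 2mn, m² + n²) for coprime m > n of opposite parity, can serve as a concrete illustration; it is a standard result and not necessarily the particular solution pattern derived in the paper.

```python
from math import gcd

# Generate all primitive Pythagorean triples with hypotenuse <= limit
# using Euclid's formula.
def primitive_triples(limit):
    triples = []
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:   # opposite parity, coprime
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    triples.append(tuple(sorted((a, b, c))))
        m += 1
    return sorted(triples)

print(primitive_triples(30))
# → [(3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (20, 21, 29)]
```

Every primitive triple arises exactly once from such an (m, n) pair, which is what makes the formula a complete predictive pattern rather than a heuristic.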

  2. Predicting response to epigenetic therapy

    DEFF Research Database (Denmark)

    Treppendahl, Marianne B; Sommer Kristensen, Lasse; Grønbæk, Kirsten

    2014-01-01

    of good pretreatment predictors of response is of great value. Many clinical parameters and molecular targets have been tested in preclinical and clinical studies with varying results, leaving room for optimization. Here we provide an overview of markers that may predict the efficacy of FDA- and EMA...

  3. Predicting Volleyball Serve-Reception

    NARCIS (Netherlands)

    Paulo, Ana; Zaal, Frank T J M; Fonseca, Sofia; Araujo, Duarte

    2016-01-01

    Serve and serve-reception performance have predicted success in volleyball. Given the impact of serve-reception on the game, we aimed at understanding what it is in the serve and receiver's actions that determines the selection of the type of pass used in serve-reception and its efficacy. Four

  4. Prediction of electric vehicle penetration.

    Science.gov (United States)

    2017-05-01

    The object of this report is to present the current market status of plug-in electric vehicles (PEVs) and to predict their future penetration within the world and U.S. markets. The sales values for 2016 show a strong year of PEV sales both in the...

  5. Evoked Emotions Predict Food Choice

    NARCIS (Netherlands)

    Dalenberg, Jelle R.; Gutjar, Swetlana; ter Horst, Gert J.; de Graaf, Kees; Renken, Remco J.; Jager, Gerry

    2014-01-01

    In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over liking scores alone. Previous research has shown that liking measures correlate with choice. However, liking is not a strong predictor of food choice in real life environments.

  6. Framework for Traffic Congestion Prediction

    NARCIS (Netherlands)

    Zaki, J.F.W.; Ali-Eldin, A.M.T.; Hussein, S.E.; Saraya, S.F.; Areed, F.F.

    2016-01-01

    Traffic Congestion is a complex dilemma facing most major cities. It has undergone a lot of research since the early 80s in an attempt to predict traffic in the short-term. Recently, Intelligent Transportation Systems (ITS) became an integral part of traffic research which helped in modeling and

  7. Predicting Character Traits Through Reddit

    Science.gov (United States)

    2015-01-01

    and even employers (Res). Companies like Netflix also use personality classification algorithms in order to provide users with predictions of movies...

  8. Prediction of natural gas consumption

    International Nuclear Information System (INIS)

    Zhang, R.L.; Walton, D.J.; Hoskins, W.D.

    1993-01-01

    Distributors of natural gas need to predict future consumption in order to purchase a sufficient supply on contract. Distributors that offer their customers equal payment plans need to predict the consumption of each customer 12 months in advance. Estimates of previous consumption are often used for months when meters are inaccessible, or for bimonthly-read meters. Existing methods of predicting natural gas consumption, and a proposed new method for each local region, are discussed. The proposed model distinguishes the consumption load factors of summer from those of the other seasons by adjusting them with two parameters. The problem is then reduced to a quadratic programming problem. However, since it is not necessary to use both parameters simultaneously, the problem can be solved with a simple iterative procedure. Results show that the new model improves on the two-equation model to a certain extent. The adjustment to the heat load factor reduces the prediction error markedly, while that to the base load factor influences the error only marginally. 3 refs., 11 figs., 2 tabs
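
The base-load / heat-load split that the model above adjusts can be illustrated with the standard degree-day regression. This is a generic sketch, not the paper's method: `fit_base_heat_load` is a hypothetical helper, and it omits the paper's two seasonal adjustment parameters and the quadratic-programming step.

```python
import numpy as np

def fit_base_heat_load(hdd, consumption):
    """Least-squares fit of consumption = base + heat_factor * HDD.

    This is the classic base-load / heat-load decomposition that
    degree-day gas forecasting builds on (HDD = heating degree-days).
    """
    hdd = np.asarray(hdd, dtype=float)
    # Design matrix with an intercept column (base load) and an HDD column.
    A = np.column_stack([np.ones_like(hdd), hdd])
    coef, *_ = np.linalg.lstsq(A, np.asarray(consumption, dtype=float), rcond=None)
    base, heat = coef
    return float(base), float(heat)

# Perfectly linear toy data: 5 units of base load plus 2 units per degree-day.
base, heat = fit_base_heat_load([0, 10, 20, 30], [5.0, 25.0, 45.0, 65.0])
```

With the fitted factors in hand, a month's consumption forecast is simply `base + heat * forecast_hdd` for that month.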

  9. Prediction of Subsidence Depression Development

    Czech Academy of Sciences Publication Activity Database

    Doležalová, Hana; Kajzar, Vlastimil

    2017-01-01

    Roč. 6, č. 4 (2017), s. 208-214 E-ISSN 2391-9361. [Cross-border Exchange of Experience in Production Engineering Using Principles of Mathematics. Rybnik, 07.06.2017-09.06.2017] Institutional support: RVO:68145535 Keywords : undermining * prediction * regression analysis Subject RIV: DH - Mining, incl. Coal Mining OBOR OECD: Mining and mineral processing

  10. Bankruptcy Prediction with Rough Sets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); V. Popova (Viara)

    2001-01-01

    The bankruptcy prediction problem can be considered an ordinal classification problem. The classical theory of Rough Sets describes objects by discrete attributes, and does not take into account the ordering of the attribute values. This paper proposes a modification of the Rough Set

  11. Climate Prediction Center - monthly Outlook

    Science.gov (United States)

    Climate Prediction Center monthly climate outlooks: official monthly forecasts (June 2018, updated monthly), produced with tools including Canonical Correlation Analysis, Ensemble Canonical Correlation Analysis (ECCA) and Optimal Climate Normals.

  12. Climate Prediction Center - Site Index

    Science.gov (United States)

    Climate Prediction Center site index, with links to resources such as the Climate Diagnostics Bulletin, the annual Winter Stratospheric Ozone bulletin, Hazards Outlooks, and seasonal climate assessments (Dec. 1999-Feb. 2000; Mar-May 2000).

  13. Predictive medical information and underwriting.

    Science.gov (United States)

    Dodge, John H

    2007-01-01

    Medical underwriting involves the application of actuarial science by analyzing medical information to predict the future risk of a claim. The objective is that individuals with like risk are treated in a like manner so that the premium paid is proportional to the risk of future claim.

  14. Can Creativity Predict Cognitive Reserve?

    Science.gov (United States)

    Palmiero, Massimiliano; Di Giacomo, Dina; Passafiume, Domenico

    2016-01-01

    Cognitive reserve relies on the ability to effectively cope with aging and brain damage by using alternate processes to approach tasks when standard approaches are no longer available. In this study, the issue if creativity can predict cognitive reserve has been explored. Forty participants (mean age: 61 years) filled out: the Cognitive Reserve…

  15. A prediction for bubbling geometries

    OpenAIRE

    Okuda, Takuya

    2007-01-01

    We study the supersymmetric circular Wilson loops in N=4 Yang-Mills theory. Their vacuum expectation values are computed in the parameter region that admits smooth bubbling geometry duals. The results are a prediction for the supergravity action evaluated on the bubbling geometries for Wilson loops.

  16. Detecting failure of climate predictions

    Science.gov (United States)

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1, 2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
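
The single-model test described above (goodness of fit via the empirical distribution function) can be sketched as a Kolmogorov-Smirnov-style distance between the modelled and observed ECDFs. This is an illustrative reading of the idea, not the paper's published procedure; `failure_statistic` and the sample sizes are assumptions.

```python
import numpy as np

def ecdf(samples):
    """Build a step-function ECDF from a sample."""
    xs = np.sort(np.asarray(samples, dtype=float))
    def F(x):
        return np.searchsorted(xs, x, side="right") / len(xs)
    return F

def failure_statistic(model_samples, observations):
    """Maximum divergence between the modelled ECDF and the observed ECDF;
    large values indicate the predictions fail to capture the dynamics."""
    F_model = ecdf(model_samples)
    F_obs = ecdf(observations)
    grid = np.union1d(model_samples, observations)
    return float(np.max(np.abs(F_model(grid) - F_obs(grid))))

rng = np.random.default_rng(0)
model = rng.normal(0.0, 1.0, 5000)        # samples from the modelled distribution
consistent = rng.normal(0.0, 1.0, 200)    # observations matching the model
shifted = rng.normal(2.0, 1.0, 200)       # a system the model fails to capture
assert failure_statistic(model, shifted) > failure_statistic(model, consistent)
```

In practice one would declare failure when the statistic exceeds a critical value calibrated for the sample sizes, exactly as in a KS test.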

  17. Predicting severity of paranoid schizophrenia

    OpenAIRE

    Kolesnichenko Elena Vladimirovna

    2015-01-01

    Clinical symptoms, course and outcomes of paranoid schizophrenia are polymorphic. 206 cases of paranoid schizophrenia were investigated. Clinical predictors were collected from hospital records and interviews. Quantitative assessment of the severity of schizophrenia, in the form of special indexes, was used. Schizoid, epileptoid, psychasthenic and conformal accentuations of personality in the premorbid period, early onset of psychosis, and paranoid and hallucinatory-paranoid variants of onset predicted more expressed ...

  18. Predictability of Mobile Phone Associations

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Larsen, Jan; Hansen, Lars Kai

    2010-01-01

    Prediction and understanding of human behavior is of high importance in many modern applications and research areas, ranging from context-aware services and wireless resource allocation to the social sciences. In this study we collect a novel dataset using standard mobile phones and analyze how the predictability of mobile sensors, acting as proxies for humans, changes with time scale, sensor type such as GSM and WLAN, representation, and general behavior. Applying recent information theoretic methods, it is demonstrated that an upper bound on predictability is relatively high for all sensors given the complete history (typically above 90…). This is of vital interest in the development of context-aware services which rely on forecasting based on mobile phone sensors.

  19. Numerical prediction of slamming loads

    DEFF Research Database (Denmark)

    Seng, Sopheak; Jensen, Jørgen J; Pedersen, Preben T

    2012-01-01

    It is important to include the contribution of the slamming-induced response in the structural design of large vessels with a significant bow flare. At the same time it is a challenge to develop rational tools to determine the slamming-induced loads and the prediction of their occurrence. Today i...

  20. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
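
As a sketch of how such a degree-day model is used, the snippet below accumulates simple-average degree-days until a life-stage threshold is reached. The base temperature, threshold value, and function names are hypothetical placeholders, not parameters fitted for the cranberry fruitworm.

```python
def degree_days(tmin, tmax, base=10.0):
    """One day's degree-day accumulation (simple averaging method):
    max(mean daily temperature - base temperature, 0)."""
    return max((tmin + tmax) / 2.0 - base, 0.0)

def predict_emergence_day(daily_temps, threshold, base=10.0):
    """Return the index of the first day on which accumulated degree-days
    reach the life-stage threshold, or None if it is never reached."""
    total = 0.0
    for day, (tmin, tmax) in enumerate(daily_temps):
        total += degree_days(tmin, tmax, base)
        if total >= threshold:
            return day
    return None

# Each day below contributes (10 + 20)/2 - 10 = 5 degree-days,
# so a 20-degree-day threshold is crossed on day index 3.
assert predict_emergence_day([(10.0, 20.0)] * 10, threshold=20.0) == 3
```

A pest manager would feed in forecast daily minima and maxima and time scouting or spraying around the predicted emergence day.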

  1. Prediction of Malaysian monthly GDP

    Science.gov (United States)

    Hin, Pooi Ah; Ching, Soo Huei; Yeing, Pan Wei

    2015-12-01

    The paper attempts to use a method based on the multivariate power-normal distribution to predict the Malaysian Gross Domestic Product next month. Letting r(t) be the vector consisting of the month-t values on m selected macroeconomic variables and GDP, we model the month-(t+1) GDP to be dependent on the present and l-1 past values r(t), r(t-1),…,r(t-l+1) via a conditional distribution which is derived from a [(m+1)l+1]-dimensional power-normal distribution. The 100(α/2)% and 100(1-α/2)% points of the conditional distribution may be used to form an out-of-sample prediction interval. This interval together with the mean of the conditional distribution may be used to predict the month-(t+1) GDP. The mean absolute percentage error (MAPE), estimated coverage probability and average length of the prediction interval are used as the criteria for selecting the suitable lag value l-1 and the subset from a pool of 17 macroeconomic variables. It is found that the relatively better models would be those with 2 ≤ l ≤ 3, involving one or two of the macroeconomic variables given by Market Indicative Yield, Oil Prices, Exchange Rate and Import Trade.
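
The three selection criteria named above (MAPE, estimated coverage probability, and average interval length) are standard and easy to compute. The sketch below is a generic illustration of those criteria, not the paper's power-normal machinery; the function names are hypothetical.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

def interval_criteria(lower, upper, actual):
    """Estimated coverage probability and average length of a set of
    out-of-sample prediction intervals."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    actual = np.asarray(actual, dtype=float)
    coverage = float(np.mean((actual >= lower) & (actual <= upper)))
    avg_length = float(np.mean(upper - lower))
    return coverage, avg_length
```

A model whose intervals achieve coverage close to the nominal 100(1-α)% with short average length, together with a low MAPE for the point predictions, would be preferred under these criteria.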

  2. Cast iron - a predictable material

    Directory of Open Access Journals (Sweden)

    Jorg C. Sturm

    2011-02-01

    Full Text Available High strength compacted graphite iron (CGI) or alloyed cast iron components are substituting previously used non-ferrous castings in automotive power train applications. The mechanical engineering industry has recognized the value in substituting forged or welded structures with stiff and light-weight cast iron castings. New products such as wind turbines have opened new markets for an entire suite of highly reliable ductile iron cast components. During the last 20 years, casting process simulation has developed from predicting hot spots and solidification to an integral assessment tool for foundries for the entire manufacturing route of castings. The support of the feeding-related layout of the casting is still one of the most important duties for casting process simulation. Depending on the alloy poured, different feeding behaviors and self-feeding capabilities need to be considered to provide a defect-free casting. Therefore, it is not enough to base the prediction of shrinkage defects solely on hot spots derived from temperature fields. To be able to quantitatively predict these defects, solidification simulation had to be combined with density and mass transport calculations, in order to evaluate the impact of the solidification morphology on the feeding behavior as well as to consider alloy-dependent feeding ranges. For cast iron foundries, the use of casting process simulation has become an important instrument to predict the robustness and reliability of their processes, especially since the influence of alloying elements, melting practice and metallurgy need to be considered to quantify the special shrinkage and solidification behavior of cast iron. This allows the prediction of local structures, phases and ultimately the local mechanical properties of cast irons, to assess casting quality in the foundry but also to make use of this quantitative information during design of the casting.
Casting quality issues related to thermally driven

  3. HUMAN DECISIONS AND MACHINE PREDICTIONS.

    Science.gov (United States)

    Kleinberg, Jon; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil

    2018-02-01

    Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those the judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; and these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals. JEL Codes: C10 (Econometric and statistical methods and methodology), C55 (Large datasets: Modeling and analysis), K40 (Legal procedure, the legal system, and illegal behavior).

  4. Ocean eddies and climate predictability.

    Science.gov (United States)

    Kirtman, Ben P; Perlin, Natalie; Siqueira, Leo

    2017-12-01

    A suite of coupled climate model simulations and experiments are used to examine how resolved mesoscale ocean features affect aspects of climate variability, air-sea interactions, and predictability. In combination with control simulations, experiments with the interactive ensemble coupling strategy are used to further amplify the role of the oceanic mesoscale field and the associated air-sea feedbacks and predictability. The basic intent of the interactive ensemble coupling strategy is to reduce the atmospheric noise at the air-sea interface, allowing an assessment of how noise affects the variability, and in this case, it is also used to diagnose predictability from the perspective of signal-to-noise ratios. The climate variability is assessed from the perspective of sea surface temperature (SST) variance ratios, and it is shown that, unsurprisingly, mesoscale variability significantly increases SST variance. Perhaps surprising is the fact that the presence of mesoscale ocean features even further enhances the SST variance in the interactive ensemble simulation beyond what would be expected from simple linear arguments. Changes in the air-sea coupling between simulations are assessed using pointwise convective rainfall-SST and convective rainfall-SST tendency correlations and again emphasize how the oceanic mesoscale alters the local association between convective rainfall and SST. Understanding the possible relationships between the SST-forced signal and the weather noise is critically important in climate predictability. We use the interactive ensemble simulations to diagnose this relationship, and we find that the presence of mesoscale ocean features significantly enhances this link particularly in ocean eddy rich regions. Finally, we use signal-to-noise ratios to show that the ocean mesoscale activity increases model estimated predictability in terms of convective precipitation and atmospheric upper tropospheric circulation.

  5. Predicting steam generator crevice chemistry

    International Nuclear Information System (INIS)

    Burton, G.; Strati, G.

    2006-01-01

    'Full text:' Corrosion of steam cycle components produces insoluble material, mostly iron oxides, that are transported to the steam generator (SG) via the feedwater and deposited on internal surfaces such as the tubes, tube support plates and the tubesheet. The build up of these corrosion products over time can lead to regions of restricted flow with water chemistry that may be significantly different, and potentially more corrosive to SG tube material, than the bulk steam generator water chemistry. The aim of the present work is to predict SG crevice chemistry using experimentation and modelling as part of AECL's overall strategy for steam generator life management. Hideout-return experiments are performed under CANDU steam generator conditions to assess the accumulation of impurities in hideout, and return from, model crevices. The results are used to validate the ChemSolv model that predicts steam generator crevice impurity concentrations, and high temperature pH, based on process parameters (e.g., heat flux, primary side temperature) and blowdown water chemistry. The model has been incorporated into ChemAND, AECL's system health monitoring software for chemistry monitoring, analysis and diagnostics that has been installed at two domestic and one international CANDU station. ChemAND provides the station chemists with the only method to predict SG crevice chemistry. In one recent application, the software has been used to evaluate the crevice chemistry based on the elevated, but balanced, SG bulk water impurity concentrations present during reactor startup, in order to reduce hold times. The present paper will describe recent hideout-return experiments that are used for the validation of the ChemSolv model, station experience using the software, and improvements to predict the crevice electrochemical potential that will permit station staff to ensure that the SG tubes are in the 'safe operating zone' predicted by Lu (AECL). (author)

  6. Predicting outcome of status epilepticus.

    Science.gov (United States)

    Leitinger, M; Kalss, G; Rohracher, A; Pilz, G; Novak, H; Höfler, J; Deak, I; Kuchukhidze, G; Dobesberger, J; Wakonig, A; Trinka, E

    2015-08-01

    Status epilepticus (SE) is a frequent neurological emergency complicated by high mortality and often poor functional outcome in survivors. The aim of this study was to review available clinical scores to predict outcome. Literature review. PubMed Search terms were "score", "outcome", and "status epilepticus" (April 9th 2015). Publications with abstracts available in English, no other language restrictions, or any restrictions concerning investigated patients were included. Two scores were identified: "Status Epilepticus Severity Score--STESS" and "Epidemiology based Mortality score in SE--EMSE". A comprehensive comparison of test parameters concerning performance, options, and limitations was performed. Epidemiology based Mortality score in SE allows detailed individualization of risk factors and is significantly superior to STESS in a retrospective explorative study. In particular, EMSE is very good at detection of good and bad outcome, whereas STESS detecting bad outcome is limited by a ceiling effect and uncertainty of correct cutoff value. Epidemiology based Mortality score in SE can be adapted to different regions in the world and to advances in medicine, as new data emerge. In addition, we designed a reporting standard for status epilepticus to enhance acquisition and communication of outcome relevant data. A data acquisition sheet used from patient admission in emergency room, from the EEG lab to intensive care unit, is provided for optimized data collection. Status Epilepticus Severity Score is easy to perform and predicts bad outcome, but has a low predictive value for good outcomes. Epidemiology based Mortality score in SE is superior to STESS in predicting good or bad outcome but needs marginally more time to perform. Epidemiology based Mortality score in SE may prove very useful for risk stratification in interventional studies and is recommended for individual outcome prediction. 
Prospective validation in different cohorts is needed for EMSE, whereas

  7. Multiphase, multicomponent phase behavior prediction

    Science.gov (United States)

    Dadmohammadi, Younas

    Accurate prediction of phase behavior of fluid mixtures in the chemical industry is essential for designing and operating a multitude of processes. Reliable generalized predictions of phase equilibrium properties, such as pressure, temperature, and phase compositions, offer an attractive alternative to costly and time-consuming experimental measurements. The main purpose of this work was to assess the efficacy of recently generalized activity coefficient models based on binary experimental data to (a) predict binary and ternary vapor-liquid equilibrium systems, and (b) characterize liquid-liquid equilibrium systems. These studies were completed using a diverse binary VLE database consisting of 916 binary and 86 ternary systems involving 140 compounds belonging to 31 chemical classes. Specifically the following tasks were undertaken: First, a comprehensive assessment of the two common approaches (gamma-phi (gamma-ϕ) and phi-phi (ϕ-ϕ)) used for determining the phase behavior of vapor-liquid equilibrium systems is presented. Both the representation and predictive capabilities of these two approaches were examined, as delineated from internal and external consistency tests of 916 binary systems. For this purpose, the universal quasi-chemical (UNIQUAC) model and the Peng-Robinson (PR) equation of state (EOS) were used in this assessment. Second, the efficacy of the recently developed generalized UNIQUAC and nonrandom two-liquid (NRTL) models for predicting multicomponent VLE systems was investigated. Third, the abilities of the recently modified NRTL models (mNRTL2 and mNRTL1) to characterize liquid-liquid equilibria (LLE) phase conditions and attributes, including phase stability, miscibility, and consolute point coordinates, were assessed. The results of this work indicate that the ϕ-ϕ approach represents the binary VLE systems considered within three times the error of the gamma-ϕ approach. A similar trend was observed for the generalized model predictions using
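
For context, the gamma-phi approach mentioned above reduces, under the common simplification of an ideal vapour phase (all φ_i = 1), to modified Raoult's law. The bubble-point sketch below is a textbook illustration of that limiting case, not the thesis's UNIQUAC/Peng-Robinson workflow; `bubble_point` is a hypothetical helper.

```python
def bubble_point(x, gamma, psat):
    """Modified Raoult's law (gamma-phi with ideal vapour):
    y_i * P = x_i * gamma_i * Psat_i.

    Returns the bubble pressure P and vapour composition y for a liquid of
    mole fractions x, activity coefficients gamma, and pure-component
    vapour pressures psat (any consistent pressure unit)."""
    partial = [xi * gi * ps for xi, gi, ps in zip(x, gamma, psat)]
    P = sum(partial)            # bubble pressure is the sum of partial pressures
    y = [p / P for p in partial]
    return P, y

# Equimolar ideal mixture (gamma = 1) of components with 100 and 50 kPa
# vapour pressures: the vapour is enriched in the more volatile component.
P, y = bubble_point([0.5, 0.5], [1.0, 1.0], [100.0, 50.0])
```

In the full gamma-phi approach an activity coefficient model such as UNIQUAC or NRTL supplies the gamma values; the phi-phi approach instead computes fugacity coefficients for both phases from an equation of state.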

  8. Branch prediction in the pentium family

    DEFF Research Database (Denmark)

    Fog, Agner

    1998-01-01

    How the branch prediction mechanism in the Pentium has been uncovered with all its quirks, and the incredibly more effective branch prediction in the later versions.
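
For readers unfamiliar with the mechanism being reverse-engineered, a textbook two-bit saturating-counter predictor captures the basic idea. This is a simplified relative of the schemes used in the Pentium family, not Fog's reconstruction of any actual Pentium predictor.

```python
class TwoBitPredictor:
    """Two-bit saturating counter: states 0-1 predict not-taken,
    states 2-3 predict taken. A single mispredicted branch cannot
    flip a strongly established prediction."""

    def __init__(self):
        self.state = 1  # start weakly not-taken

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        # Saturate at 0 and 3 rather than wrapping around.
        if taken:
            self.state = min(self.state + 1, 3)
        else:
            self.state = max(self.state - 1, 0)

p = TwoBitPredictor()
for _ in range(3):
    p.update(True)      # branch repeatedly taken; counter saturates at 3
p.update(False)         # a single not-taken outcome...
assert p.predict()      # ...does not flip the taken prediction
```

Real predictors index a table of such counters by branch address (and, in later Pentiums, by branch history), which is exactly the structure whose quirks the paper uncovers.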

  9. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  10. Semen analysis and prediction of natural conception

    NARCIS (Netherlands)

    Leushuis, Esther; van der Steeg, Jan Willem; Steures, Pieternel; Repping, Sjoerd; Bossuyt, Patrick M. M.; Mol, Ben Willem J.; Hompes, Peter G. A.; van der Veen, Fulco

    2014-01-01

    Do two semen analyses predict natural conception better than a single semen analysis and will adding the results of repeated semen analyses to a prediction model for natural pregnancy improve predictions? A second semen analysis does not add helpful information for predicting natural conception

  11. Time-Predictable Virtual Memory

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2016-01-01

    Virtual memory is an important feature of modern computer architectures. For hard real-time systems, memory protection is a particularly interesting feature of virtual memory. However, current memory management units are not designed for time-predictability and therefore cannot be used in such systems. This paper investigates the requirements on virtual memory from the perspective of hard real-time systems and presents the design of a time-predictable memory management unit. Our evaluation shows that the proposed design can be implemented efficiently. The design allows address translation and address range checking in constant time of two clock cycles on a cache miss. This constant time is in strong contrast to the possible cost of a miss in a translation look-aside buffer in traditional virtual memory organizations. Compared to a platform without a memory management unit, these two additional...

  12. Predicting responses from Rasch measures.

    Science.gov (United States)

    Linacre, John M

    2010-01-01

    There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.
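
As background for the polytomous Rasch family discussed above, the partial credit model's category probabilities can be computed directly from a person ability and the item's step difficulties. This is a generic textbook sketch (the `deltas` values and function name are hypothetical), not Linacre's SVD or Boltzmann-machine extensions.

```python
import math

def partial_credit_probabilities(theta, deltas):
    """Polytomous (partial credit) Rasch model: probability of scoring in
    each category 0..m for a person of ability theta on an item with step
    difficulties deltas (length m)."""
    # The log-numerator for category x is the cumulative sum of (theta - delta_k)
    # over the first x steps; category 0 has log-numerator 0.
    logits = [0.0]
    for d in deltas:
        logits.append(logits[-1] + (theta - d))
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    return [w / total for w in weights]

# Ability equal to both step difficulties: all three categories equally likely.
probs = partial_credit_probabilities(0.0, [0.0, 0.0])
```

Predicting a future response then amounts to picking the most probable category (or sampling from these probabilities), which is where overfit of the estimated theta and deltas to the current data shows up as degraded future fit.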

  13. Prediction of dislocation boundary characteristics

    DEFF Research Database (Denmark)

    Winther, Grethe

    Plastic deformation of both fcc and bcc metals of medium to high stacking fault energy is known to result in dislocation patterning in the form of cells and extended planar dislocation boundaries. The latter align with specific crystallographic planes, which depend on the crystallographic… and it is found that to a large extent the dislocations screen each other's elastic stress fields [3]. The present contribution aims at advancing the previous theoretical analysis of a boundary on a known crystallographic plane to actual prediction of this plane as well as other boundary characteristics. Crystal plasticity calculations combined with the hypothesis that these boundaries separate domains with local differences in the slip system activity are introduced to address precise prediction of the experimentally observed boundaries. The presentation will focus on two cases from fcc metals...

  14. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  15. [Predictive factors of anxiety disorders].

    Science.gov (United States)

    Domschke, K

    2014-10-01

    Anxiety disorders are among the most frequent mental disorders in Europe (12-month prevalence 14%) and impose a high socioeconomic burden. The pathogenesis of anxiety disorders is complex with an interaction of biological, environmental and psychosocial factors contributing to the overall disease risk (diathesis-stress model). In this article, risk factors for anxiety disorders will be presented on several levels, e.g. genetic factors, environmental factors, gene-environment interactions, epigenetic mechanisms, neuronal networks ("brain fear circuit"), psychophysiological factors (e.g. startle response and CO2 sensitivity) and dimensional/subclinical phenotypes of anxiety (e.g. anxiety sensitivity and behavioral inhibition), and critically discussed regarding their potential predictive value. The identification of factors predictive of anxiety disorders will possibly allow for effective preventive measures or early treatment interventions, respectively, and reduce the individual patient's suffering as well as the overall socioeconomic burden of anxiety disorders.

  16. Algorithms for Protein Structure Prediction

    DEFF Research Database (Denmark)

    Paluszewski, Martin

    -trace. Here we present three different approaches for reconstruction of C-traces from predictable measures. In our first approach [63, 62], the C-trace is positioned on a lattice and a tabu-search algorithm is applied to find minimum energy structures. The energy function is based on half-sphere-exposure (HSE)… and is more robust than standard Monte Carlo search. In the second approach for reconstruction of C-traces, an exact branch and bound algorithm has been developed [67, 65]. The model is discrete and makes use of secondary structure predictions, HSE, CN and radius of gyration. We show how to compute good lower bounds for partial structures very fast. Using these lower bounds, we are able to find global minimum structures in a huge conformational space in reasonable time. We show that many of these global minimum structures are of good quality compared to the native structure. Our branch and bound algorithm...

  17. Antipredator defenses predict diversification rates

    Science.gov (United States)

    Arbuckle, Kevin; Speed, Michael P.

    2015-01-01

    The “escape-and-radiate” hypothesis predicts that antipredator defenses facilitate adaptive radiations by enabling escape from constraints of predation, diversified habitat use, and subsequently speciation. Animals have evolved diverse strategies to reduce the direct costs of predation, including cryptic coloration and behavior, chemical defenses, mimicry, and advertisement of unprofitability (conspicuous warning coloration). Whereas the survival consequences of these alternative defenses for individuals are well-studied, little attention has been given to the macroevolutionary consequences of alternative forms of defense. Here we show, using amphibians as the first, to our knowledge, large-scale empirical test in animals, that there are important macroevolutionary consequences of alternative defenses. However, the escape-and-radiate hypothesis does not adequately describe them, due to its exclusive focus on speciation. We examined how rates of speciation and extinction vary across defensive traits throughout amphibians. Lineages that use chemical defenses show higher rates of speciation as predicted by escape-and-radiate but also show higher rates of extinction compared with those without chemical defense. The effect of chemical defense is a net reduction in diversification compared with lineages without chemical defense. In contrast, acquisition of conspicuous coloration (often used as warning signals or in mimicry) is associated with heightened speciation rates but unchanged extinction rates. We conclude that predictions based on the escape-and-radiate hypothesis must incorporate the effect of traits on both speciation and extinction, which is rarely considered in such studies. Our results also suggest that knowledge of defensive traits could have a bearing on the predictability of extinction, perhaps especially important in globally threatened taxa such as amphibians. PMID:26483488

  18. Nonparametric predictive inference in reliability

    International Nuclear Information System (INIS)

    Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.

    2002-01-01

    We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on the introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere.

  19. Shoulder Dystocia: Prediction and Management

    OpenAIRE

    Hill, Meghan G; Cohen, Wayne R

    2016-01-01

    Shoulder dystocia is a complication of vaginal delivery and the primary factor associated with brachial plexus injury. In this review, we discuss the risk factors for shoulder dystocia and propose a framework for the prediction and prevention of the complication. A recommended approach to management when shoulder dystocia occurs is outlined, with review of the maneuvers used to relieve the obstruction with minimal risk of fetal and maternal injury.

  20. Shoulder dystocia: prediction and management.

    Science.gov (United States)

    Hill, Meghan G; Cohen, Wayne R

    2016-01-01

    Shoulder dystocia is a complication of vaginal delivery and the primary factor associated with brachial plexus injury. In this review, we discuss the risk factors for shoulder dystocia and propose a framework for the prediction and prevention of the complication. A recommended approach to management when shoulder dystocia occurs is outlined, with review of the maneuvers used to relieve the obstruction with minimal risk of fetal and maternal injury.

  1. Black holes, singularities and predictability

    International Nuclear Information System (INIS)

    Wald, R.M.

    1984-01-01

    The paper favours the view that singularities may play a central role in quantum gravity. The author reviews the arguments leading to the conclusion that, in the process of black hole formation and evaporation, an initial pure state evolves to a final density matrix, thus signaling a breakdown in ordinary quantum dynamical evolution. Some related issues dealing with predictability in dynamical evolution are also discussed. (U.K.)

  2. Rainfall prediction with backpropagation method

    Science.gov (United States)

    Wahyuni, E. G.; Fauzan, L. M. F.; Abriyani, F.; Muchlis, N. F.; Ulfa, M.

    2018-03-01

    Rainfall is an important factor in many fields, such as aviation and agriculture. Although prediction is assisted by technology, its accuracy cannot reach 100% and errors remain possible, even though up-to-date rainfall prediction information is needed in these fields. In agriculture, farmers depend heavily on weather conditions, especially rainfall, to obtain abundant, high-quality yields; rainfall is likewise one of the factors affecting aircraft safety. Addressing these needs requires a system that can predict rainfall accurately. In this research, artificial neural network (ANN) modeling is applied to rainfall prediction, using the backpropagation method. Backpropagation can achieve better performance through repeated training, meaning that the weights of the ANN interconnections can approach the weights they should have. Further advantages of the method are its adaptive learning process and its multilayer structure, in which weight changes act to minimize error (fault tolerance); the method can therefore provide good system resilience and consistently good performance. The network is designed with 4 input variables, namely air temperature, air humidity, wind speed, and sunshine duration, and 3 output variables, i.e., low, medium, and high rainfall. Based on the research conducted, the network works properly, as evidenced by the system's rainfall predictions matching the results of manual calculations.
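The kind of network the record describes can be sketched with backpropagation written out by hand. This is a minimal illustration, not the authors' code: the layer sizes, learning rate, and training data below are assumptions, with random inputs standing in for the four weather variables and one-hot targets for the three rainfall classes.

```python
import numpy as np

# Minimal backpropagation sketch: 4 inputs (temperature, humidity, wind
# speed, sunshine duration stand-ins) -> hidden layer -> 3 outputs
# (low/medium/high rainfall). Data and hyperparameters are illustrative.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = rng.random((40, 4))                      # toy weather observations
y = np.eye(3)[rng.integers(0, 3, size=40)]   # one-hot rainfall class

W1 = rng.normal(0.0, 0.5, (4, 8))            # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (8, 3))            # hidden -> output weights
lr = 0.5

def forward(X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

_, out = forward(X)
loss_before = np.mean((out - y) ** 2)

for _ in range(500):
    h, out = forward(X)
    # Backpropagation: push the output error back through each layer and
    # adjust the interconnection weights to reduce it.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

_, out = forward(X)
loss_after = np.mean((out - y) ** 2)
```

The repeated weight updates are what the abstract means by better performance through repetitive training: the mean squared error after the loop is lower than before it.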

  3. The Clinical Prediction of Dangerousness.

    Science.gov (United States)

    1985-05-01

    Szasz (1963) has argued persuasively that clinical predictions of future dangerous behavior are unfairly focused on the mentally ill... Persons labeled paranoid, Szasz states, are readily commitable, while highly dangerous drunken drivers are not. Indeed, dangerousness such as that... Psychology, 31, 492-494. Szasz, T. (1963). Law, liberty and psychiatry. New York: Macmillan. Taft, R. (1955). The ability to judge people. Psychological

  4. Dim prospects for earthquake prediction

    Science.gov (United States)

    Geller, Robert J.

    I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: “I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a ‘proven nonscience’ [Geller, 1997a] is a paradigm for others to copy.” Readers are invited to verify for themselves that neither “proven nonscience” nor any similar phrase was used by Geller [1997a].

  5. Are Some Semantic Changes Predictable?

    DEFF Research Database (Denmark)

    Schousboe, Steen

    2010-01-01

    Historical linguistics is traditionally concerned with phonology and syntax. With the exception of grammaticalization - the development of auxiliary verbs, the syntactic rather than localistic use of prepositions, etc. - semantic change has usually not been described as a result of regular developments, but only as specific meaning changes in individual words. This paper will suggest some regularities in semantic change, regularities which, like sound laws, have predictive power and can be tested against recorded languages.

  6. Butterfly valve torque prediction methodology

    International Nuclear Information System (INIS)

    Eldiwany, B.H.; Sharma, V.; Kalsi, M.S.; Wolfe, K.

    1994-01-01

    As part of the Motor-Operated Valve (MOV) Performance Prediction Program, the Electric Power Research Institute has sponsored the development of methodologies for predicting thrust and torque requirements of gate, globe, and butterfly MOVs. This paper presents the methodology that will be used by utilities to calculate the dynamic torque requirements for butterfly valves. The total dynamic torque at any disc position is the sum of the hydrodynamic torque, the bearing torque (which is induced by the hydrodynamic force), and other small torque components (such as packing torque). The hydrodynamic torque on the valve disc, caused by the fluid flow through the valve, depends on the disc angle, flow velocity, upstream flow disturbances, disc shape, and the disc aspect ratio. The butterfly valve model provides sets of nondimensional flow and torque coefficients that can be used to predict flow rate and hydrodynamic torque throughout the disc stroke and to calculate the required actuation torque and the maximum transmitted torque throughout the opening and closing strokes. The scope of the model includes symmetric and nonsymmetric discs of different shapes and aspect ratios in compressible and incompressible fluid applications under both choked and nonchoked flow conditions. The model features were validated against test data from a comprehensive flow-loop and in situ test program. These tests were designed to systematically address the effect of the following parameters on the required torque: valve size, disc shape and aspect ratio, upstream elbow orientation and its proximity, and flow conditions. The applicability of the nondimensional coefficients to valves of different sizes was validated by performing tests on a 42-in. valve and a precisely scaled 6-in. model. The butterfly valve model torque predictions were found to bound test data from the flow-loop and in situ testing, as shown in the examples provided in this paper.

  7. Prediction of future asset prices

    Science.gov (United States)

    Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei

    2014-12-01

    This paper attempts to incorporate trading volume as an additional predictor of asset prices. Denoting by r(t) the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price as dependent on the present and l-1 past values r(t), r(t-1), ..., r(t-l+1) via a conditional distribution derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found by using the composite indices of the Malaysia stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume from the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, showing that it might be desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When this probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) change in price tends to be negative (or positive). Thus the above probability has good potential to be used as a market indicator in technical analysis.
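The interval construction can be illustrated in simplified form. The sketch below is not the paper's method: it substitutes an ordinary Gaussian linear model for the power-normal conditional distribution, keeps the l = 2 lag structure over (volume, price) pairs, and runs on synthetic data, purely to show how the lower and upper distribution points bound a prediction interval.

```python
import numpy as np

# Illustrative sketch only: a Gaussian linear model stands in for the paper's
# (2l+1)-dimensional power-normal conditional distribution. All data are
# synthetic; l = 2, so the predictors are r(t) and r(t-1).
rng = np.random.default_rng(1)
n = 300
price = np.cumsum(rng.normal(0.0, 1.0, n)) + 100.0   # toy price series
volume = rng.gamma(2.0, 50.0, n)                      # toy trading volume

# Regressors r(t), r(t-1) and target price(t+1), per the choice l = 2.
Z = np.column_stack([np.ones(n - 2),
                     volume[1:-1], price[1:-1],       # r(t)
                     volume[:-2],  price[:-2]])       # r(t-1)
target = price[2:]

beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
resid = target - Z @ beta
sigma = resid.std(ddof=Z.shape[1])

z = 1.96                      # approximate 100(1 - 0.05/2)% normal point
pred = Z[-1] @ beta           # point prediction for the final observation
lo, hi = pred - z * sigma, pred + z * sigma
length = hi - lo              # averages of such lengths guided the choice of l
```

Comparing average interval lengths with and without the volume columns in Z reproduces, in miniature, the paper's test of whether trading volume is worth keeping as a predictor.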

  8. Evoked Emotions Predict Food Choice

    OpenAIRE

    Dalenberg, Jelle R.; Gutjar, Swetlana; ter Horst, Gert J.; de Graaf, Kees; Renken, Remco J.; Jager, Gerry

    2014-01-01

    In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over merely liking scores. Previous research has shown that liking measures correlate with choice. However, liking is no strong predictor for food choice in real life environments. Therefore, the focus within recent studies shifted towards using emotion-profiling methods that successfully can discriminate between products that are equally liked. However, it is unclear how well ...

  9. Predictive Analytics in Information Systems Research

    OpenAIRE

    Shmueli, Galit; Koppius, Otto

    2011-01-01

    This research essay highlights the need to integrate predictive analytics into information systems research and shows several concrete ways in which this goal can be accomplished. Predictive analytics include empirical methods (statistical and other) that generate data predictions as well as methods for assessing predictive power. Predictive analytics not only assist in creating practically useful models, they also play an important role alongside explanatory modeling in theory bu...

  10. Seizure Prediction and its Applications

    Science.gov (United States)

    Iasemidis, Leon D.

    2011-01-01

    Epilepsy is characterized by intermittent, paroxysmal, hypersynchronous electrical activity, that may remain localized and/or spread and severely disrupt the brain’s normal multi-task and multi-processing function. Epileptic seizures are the hallmarks of such activity and had been considered unpredictable. It is only recently that research on the dynamics of seizure generation by analysis of the brain’s electrographic activity (EEG) has shed ample light on the predictability of seizures, and illuminated the way to automatic, prospective, long-term prediction of seizures. The ability to issue warnings in real time of impending seizures (e.g., tens of minutes prior to seizure occurrence in the case of focal epilepsy), may lead to novel diagnostic tools and treatments for epilepsy. Applications may range from a simple warning to the patient, in order to avert seizure-associated injuries, to intervention by automatic timely administration of an appropriate stimulus, for example of a chemical nature like an anti-epileptic drug (AED), electromagnetic nature like vagus nerve stimulation (VNS), deep brain stimulation (DBS), transcranial direct current (TDC) or transcranial magnetic stimulation (TMS), and/or of another nature (e.g., ultrasonic, cryogenic, biofeedback operant conditioning). It is thus expected that seizure prediction could readily become an integral part of the treatment of epilepsy through neuromodulation, especially in the new generation of closed-loop seizure control systems. PMID:21939848

  11. Prediction During Natural Language Comprehension.

    Science.gov (United States)

    Willems, Roel M; Frank, Stefan L; Nijhof, Annabel D; Hagoort, Peter; van den Bosch, Antal

    2016-06-01

    The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investigated the neural basis of 2 distinct aspects of word prediction, derived from information theory, during story comprehension. We assessed the effect of the entropy of next-word probability distributions as well as of surprisal. A computational model determined entropy and surprisal for each word in 3 literary stories. Twenty-four healthy participants listened to the same 3 stories while their brain activation was measured using fMRI. Reversed speech fragments were presented as a control condition. Brain areas sensitive to entropy were left ventral premotor cortex, left middle frontal gyrus, right inferior frontal gyrus, left inferior parietal lobule, and left supplementary motor area. Areas sensitive to surprisal were left inferior temporal sulcus ("visual word form area"), bilateral superior temporal gyrus, right amygdala, bilateral anterior temporal poles, and right inferior frontal sulcus. We conclude that prediction during language comprehension can occur at several levels of processing, including at the level of word form. Our study exemplifies the power of combining computational linguistics with cognitive neuroscience, and additionally underlines the feasibility of studying continuous spoken language materials with fMRI. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
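The two information-theoretic quantities are straightforward to compute once a model supplies a next-word probability distribution; entropy measures uncertainty before the word arrives, while surprisal scores the word that actually occurred. The toy distribution below is invented for illustration.

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a next-word probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(dist, word):
    """Surprisal in bits of the word that actually occurred: -log2 p(word)."""
    return -math.log2(dist[word])

# Made-up next-word distribution after some context.
next_word = {"dog": 0.5, "cat": 0.25, "fish": 0.125, "newt": 0.125}

H = entropy(next_word)           # 1.75 bits of uncertainty before the word
s = surprisal(next_word, "cat")  # 2.0 bits: unlikelier words surprise more
```

In the study these per-word values, computed by a language model over the stories, serve as regressors against the fMRI signal.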

  12. Prediction of Chevrel superconducting phases

    International Nuclear Information System (INIS)

    Savitskij, E.M.; Kiseleva, N.N.

    1978-01-01

    An attempt is made to predict the possibility of forming compounds with the Mo3Se4-type structure having critical temperatures of transition into the superconducting state above 4.2 K. A cybernetic method of teaching an electronic computer to form notions is used for the prediction. The prediction system constructs a logical dependence of the formation of Chevrel superconducting phases of composition AxB6S8 (A being an element of the periodic system; B = Cr, Mo, W, Re), and of AxB6S8 compounds having a critical temperature above 4.2 K, on the properties of the A and B elements. It is concluded that W, Re, and Cr do not form Chevrel phases of composition AxB6S8 as the B component. Be, Hg, Ra, B, and Ac are the reserve for obtaining AxMo6S8 phases. The AgxMo6S8 compound may have a high critical temperature. The ways to increase the critical temperature of Chevrel phases involve the search for optimal technological conditions for already known superconducting compounds and also the introduction of impurities fixing the distance between sulfur cubes.

  13. Childhood asthma-predictive phenotype.

    Science.gov (United States)

    Guilbert, Theresa W; Mauger, David T; Lemanske, Robert F

    2014-01-01

    Wheezing is a fairly common symptom in early childhood, but only some of these toddlers will experience continued wheezing symptoms in later childhood. The asthma-predictive phenotype is defined as frequent, recurrent wheezing in early life in children who have risk factors associated with the continuation of asthma symptoms in later life. Several asthma-predictive phenotypes were developed retrospectively based on large, longitudinal cohort studies; however, it can be difficult to differentiate these phenotypes clinically, as the expression of symptoms and risk factors can change with time. Genetic, environmental, developmental, and host factors and their interactions may contribute to the development, severity, and persistence of the asthma phenotype over time. Key characteristics that distinguish the childhood asthma-predictive phenotype include the following: male sex; a history of wheezing, with lower respiratory tract infections; history of parental asthma; history of atopic dermatitis; eosinophilia; early sensitization to food or aeroallergens; or lower lung function in early life. Copyright © 2014 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  14. Global predictability of temperature extremes

    Science.gov (United States)

    Coughlan de Perez, Erin; van Aalst, Maarten; Bischiniotis, Konstantinos; Mason, Simon; Nissan, Hannah; Pappenberger, Florian; Stephens, Elisabeth; Zsoter, Ervin; van den Hurk, Bart

    2018-05-01

    Extreme temperatures are one of the leading causes of death and disease in both developed and developing countries, and heat extremes are projected to rise in many regions. To reduce risk, heatwave plans and cold weather plans have been effectively implemented around the world. However, much of the world’s population is not yet protected by such systems, including many data-scarce but also highly vulnerable regions. In this study, we assess at a global level where such systems have the potential to be effective at reducing risk from temperature extremes, characterizing (1) long-term average occurrence of heatwaves and coldwaves, (2) seasonality of these extremes, and (3) short-term predictability of these extreme events three to ten days in advance. Using both the NOAA and ECMWF weather forecast models, we develop global maps indicating a first approximation of the locations that are likely to benefit from the development of seasonal preparedness plans and/or short-term early warning systems for extreme temperature. The extratropics generally show both short-term skill as well as strong seasonality; in the tropics, most locations do also demonstrate one or both. In fact, almost 5 billion people live in regions that have seasonality and predictability of heatwaves and/or coldwaves. Climate adaptation investments in these regions can take advantage of seasonality and predictability to reduce risks to vulnerable populations.

  15. Prediction of the wear and evolution of cutting tools in a carbide / titanium-aluminum-vanadium machining tribosystem by volumetric tool wear characterization and modeling

    Science.gov (United States)

    Kuttolamadom, Mathew Abraham

    being carried away by the rubbing action of the chips -- this left behind a smooth crater surface predominantly of tungsten and cobalt as observed from EDS analysis. Also, at high surface speeds, carbon from the tool was found diffused into the adhered titanium layer to form a titanium carbide (TiC) boundary layer -- this was observed as instances of TiC build-up on the tool edge from EDS analysis. A complex wear mechanism interaction was thus observed, i.e., titanium adhered on top of an earlier worn out crater trough, additional carbon diffused into this adhered titanium layer to create a more stable boundary layer (which could limit diffusion-rates on saturation), and then all were further worn away by dissolution wear as temperatures increased. At low and medium feeds, notch discoloration was observed -- this was detected to be carbon from EDS analysis, suggesting that it was deposited from the edges of the passing chips. Mapping the dominant wear mechanisms showed the increasing dominance of dissolution wear relative to adhesion, with increasing grain size -- this is because a 13% larger sub-micron grain results in a larger surface area of cobalt exposed to chemical action. On the macro-scale, wear quantification through topology characterization elevated wear from a 1D to 3D concept. From investigation, a second order dependence of volumetric tool wear (VTW) and VTW rate with the material removal rate (MRR) emerged, suggesting that MRR is a more consistent wear-controlling factor instead of the traditionally used cutting speed. A predictive model for VTW was developed which showed its exponential dependence with workpiece stock volume removed. Also, both VTW and VTW rate were found to be dependent on the accumulated cumulative wear on the tool. 
Further, a ratio metric of stock material removed to tool volume lost is now possible as a tool-efficiency quantifier and energy-based productivity parameter, which was found to depend inversely on MRR -- this led to a more

  16. Wine Expertise Predicts Taste Phenotype.

    Science.gov (United States)

    Hayes, John E; Pickering, Gary J

    2012-03-01

    Taste phenotypes have long been studied in relation to alcohol intake, dependence, and family history, with contradictory findings. However, on balance - with appropriate caveats about populations tested, outcomes measured, and psychophysical methods used - an association between variation in taste responsiveness and some alcohol behaviors is supported. Recent work suggests super-tasting (operationalized via propylthiouracil (PROP) bitterness) associates not only with heightened response but also with more acute discrimination between stimuli. Here, we explore relationships between food and beverage adventurousness and taste phenotype. A convenience sample of wine drinkers (n=330) was recruited in Ontario and phenotyped for PROP bitterness via filter paper disk. They also filled out a short questionnaire regarding willingness to try new foods, alcoholic beverages, and wines, as well as level of wine involvement, which was used to classify them as a wine expert (n=110) or wine consumer (n=220). In univariate logistic models, food adventurousness predicted trying new wines and beverages but not expertise. Likewise, wine expertise predicted willingness to try new wines and beverages but not foods. In separate multivariate logistic models, willingness to try new wines and beverages was predicted by expertise and food adventurousness but not PROP. However, mean PROP bitterness was higher among wine experts than wine consumers, and the conditional distribution functions differed between experts and consumers. In contrast, PROP means and distributions did not differ with food adventurousness. These data suggest individuals may self-select for specific professions based on sensory ability (i.e., an active gene-environment correlation) but phenotype does not explain willingness to try new stimuli.

  17. Predicting mortality from human faces.

    Science.gov (United States)

    Dykiert, Dominika; Bates, Timothy C; Gow, Alan J; Penke, Lars; Starr, John M; Deary, Ian J

    2012-01-01

    To investigate whether and to what extent mortality is predictable from facial photographs of older people. High-quality facial photographs of 292 members of the Lothian Birth Cohort 1921, taken at the age of about 83 years, were rated in terms of apparent age, health, attractiveness, facial symmetry, intelligence, and well-being by 12 young-adult raters. Cox proportional hazards regression was used to study associations between these ratings and mortality during a 7-year follow-up period. All ratings had adequate reliability. Concurrent validity was found for facial symmetry and intelligence (as determined by correlations with actual measures of fluctuating asymmetry in the faces and Raven Standard Progressive Matrices score, respectively), but not for the other traits. Age as rated from facial photographs, adjusted for sex and chronological age, was a significant predictor of mortality (hazard ratio = 1.36, 95% confidence interval = 1.12-1.65) and remained significant even after controlling for concurrent, objectively measured health and cognitive ability, and the other ratings. Health as rated from facial photographs, adjusted for sex and chronological age, significantly predicted mortality (hazard ratio = 0.81, 95% confidence interval = 0.67-0.99) but not after adjusting for rated age or objectively measured health and cognition. Rated attractiveness, symmetry, intelligence, and well-being were not significantly associated with mortality risk. Rated age of the face is a significant predictor of mortality risk among older people, with predictive value over and above that of objective or rated health status and cognitive ability.

  18. Developmental dyslexia: predicting individual risk.

    Science.gov (United States)

    Thompson, Paul A; Hulme, Charles; Nash, Hannah M; Gooch, Debbie; Hayiou-Thomas, Emma; Snowling, Margaret J

    2015-09-01

    Causal theories of dyslexia suggest that it is a heritable disorder, which is the outcome of multiple risk factors. However, whether early screening for dyslexia is viable is not yet known. The study followed children at high risk of dyslexia from preschool through the early primary years, assessing them from age 3 years and 6 months (T1) at approximately annual intervals on tasks tapping cognitive, language, and executive-motor skills. The children were recruited to three groups: children at family risk of dyslexia, children with concerns regarding speech and language development at 3;06 years, and controls considered to be typically developing. At 8 years, children were classified as 'dyslexic' or not. Logistic regression models were used to predict the individual risk of dyslexia and to investigate how risk factors accumulate to predict poor literacy outcomes. Family-risk status was a stronger predictor of dyslexia at 8 years than low language in preschool. Additional predictors in the preschool years include letter knowledge, phonological awareness, rapid automatized naming, and executive skills. At the time of school entry, language skills become significant predictors, and motor skills add a small but significant increase to the prediction probability. We present classification accuracy using different probability cutoffs for logistic regression models and ROC curves to highlight the accumulation of risk factors at the individual level. Dyslexia is the outcome of multiple risk factors, and children with language difficulties at school entry are at high risk. Family history of dyslexia is a predictor of literacy outcome from the preschool years. However, screening does not reach an acceptable clinical level until close to school entry, when letter knowledge, phonological awareness, and RAN, rather than family risk, together provide good sensitivity and specificity as a screening battery. © 2015 The Authors. 
Journal of Child Psychology and Psychiatry published by
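The analysis pattern in this record (a logistic model that accumulates risk factors into an individual probability, then sensitivity and specificity read off at a probability cutoff) can be sketched as follows. Everything below is a synthetic stand-in, not the study's data: three invented binary risk factors, made-up coefficients, and an arbitrary cutoff.

```python
import numpy as np

# Hedged sketch of risk accumulation in a logistic model. The three binary
# "risk factors" and their weights are invented for illustration.
rng = np.random.default_rng(2)
n = 1000
X = rng.integers(0, 2, (n, 3)).astype(float)   # three binary risk factors
true_w = np.array([1.5, 1.0, 0.8])             # risks add on the log-odds scale
logits = X @ true_w - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Fit by plain gradient ascent on the mean log-likelihood.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += 0.1 * X.T @ (y - p) / n
    b += 0.1 * np.mean(y - p)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
pred = p >= 0.5                                # one possible probability cutoff
sensitivity = np.mean(pred[y == 1])            # true-positive rate at cutoff
specificity = np.mean(~pred[y == 0])           # true-negative rate at cutoff
```

Sweeping the cutoff and re-reading sensitivity and specificity at each value is exactly what tracing an ROC curve amounts to.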

  19. Prediction and imitation in speech

    Directory of Open Access Journals (Sweden)

    Chiara Gambi

    2013-06-01

    It has been suggested that intra- and inter-speaker variability in speech are correlated. Interlocutors have been shown to converge on various phonetic dimensions. In addition, speakers imitate the phonetic properties of voices they are exposed to in shadowing, repetition, and even passive listening tasks. We review three theoretical accounts of speech imitation and convergence phenomena: (i) the Episodic Theory (ET) of speech perception and production (Goldinger, 1998); (ii) the Motor Theory (MT) of speech perception (Liberman and Whalen, 2000; Galantucci et al., 2006); (iii) Communication Accommodation Theory (CAT; Giles et al., 1991; Giles and Coupland, 1991). We argue that no account is able to explain all the available evidence. In particular, there is a need to integrate low-level, mechanistic accounts (like ET and MT) and higher-level accounts (like CAT). We propose that this is possible within the framework of an integrated theory of production and comprehension (Pickering & Garrod, in press). Similarly to both ET and MT, this theory assumes parity between production and perception. Uniquely, however, it posits that listeners simulate speakers’ utterances by computing forward-model predictions at many different levels, which are then compared to the incoming phonetic input. In our account phonetic imitation can be achieved via the same mechanism that is responsible for sensorimotor adaptation, i.e. the correction of prediction errors. In addition, the model assumes that the degree to which sensory prediction errors lead to motor adjustments is context-dependent. The notion of context subsumes both the preceding linguistic input and non-linguistic attributes of the situation (e.g., the speaker’s and listener’s social identities, their conversational roles, the listener’s intention to imitate).

  20. Comprehensive update of the atomic mass predictions

    International Nuclear Information System (INIS)

    Haustein, P.E.

    1987-01-01

    A project for a comprehensive update of the atomic mass predictions has recently been completed; the last such update occurred in 1976. Over the last 10 years, the reliability of the earlier predictions (and of others published later) has been analyzed by comparing the predictions with new masses from isotopes that were not in the experimental data base when the predictions were prepared. This analysis has highlighted distinct systematic features in various models, which frequently result in poor predictions for nuclei that lie far from stability. An overview of the new predictions from models with different theoretical approaches will be presented.

  1. Learning to Predict Chemical Reactions

    Science.gov (United States)

    Kayala, Matthew A.; Azencott, Chloé-Agathe; Chen, Jonathan H.

    2011-01-01

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problem can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high-throughput, are not generalizable or scalable, or lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry dataset consisting of 1630 full multi-step reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of non-productive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. 
Furthermore, the system

  2. Learning to predict chemical reactions.

    Science.gov (United States)

    Kayala, Matthew A; Azencott, Chloé-Agathe; Chen, Jonathan H; Baldi, Pierre

    2011-09-26

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high throughput, are not generalizable or scalable, and lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry data set consisting of 1630 full multistep reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of nonproductive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. 
Furthermore, the system
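The second stage described in the abstract, ranking candidate MO interactions so the productive step comes out on top, can be illustrated with a toy pairwise ranker. This is not the authors' code; the 3-dimensional step descriptors and the training data are invented for illustration only.

```python
# Toy pairwise ranker (illustration only): learn weights w so that the
# productive mechanistic step scores above every unproductive one.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_ranker(systems, dim, epochs=50, lr=0.1):
    """systems: list of (productive_features, [unproductive_features, ...])."""
    w = [0.0] * dim
    for _ in range(epochs):
        for pos, negs in systems:
            for neg in negs:
                # perceptron-style update whenever the productive step is
                # not ahead of the unproductive one by a margin of 1
                if score(w, pos) - score(w, neg) < 1.0:
                    w = [wi + lr * (p - n) for wi, p, n in zip(w, pos, neg)]
    return w

# Hypothetical 3-dimensional step descriptors (invented numbers).
systems = [([1.0, 0.2, 0.0], [[0.1, 0.9, 0.5], [0.0, 0.8, 0.7]]),
           ([0.9, 0.1, 0.1], [[0.2, 0.7, 0.6]])]
w = train_ranker(systems, dim=3)
top_ranked = all(score(w, pos) > max(score(w, n) for n in negs)
                 for pos, negs in systems)
print(top_ranked)
```

On this separable toy data the ranker places the productive step above all unproductive ones in both systems; the paper's ensemble plays the same role over real MO-interaction descriptors.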

  3. Is quantum theory predictably complete?

    Energy Technology Data Exchange (ETDEWEB)

    Kupczynski, M [Department of Mathematics and Statistics, University of Ottawa, 585 King-Edward Avenue, Ottawa, Ontario K1N 6N5 (Canada); Departement de l' Informatique, UQO, Case postale 1250, succursale Hull, Gatineau, Quebec J8X 3X 7 (Canada)], E-mail: mkupczyn@uottawa.ca

    2009-07-15

Quantum theory (QT) provides statistical predictions for various physical phenomena. To verify these predictions, a considerable amount of data has been accumulated in the 'measurements' performed on the ensembles of identically prepared physical systems or in the repeated 'measurements' on some trapped 'individual physical systems'. The outcomes of these measurements are, in general, some numerical time series registered by some macroscopic instruments. The various empirical probability distributions extracted from these time series were shown to be consistent with the probabilistic predictions of QT. More than 70 years ago the claim was made that QT provided the most complete description of 'individual' physical systems and outcomes of the measurements performed on 'individual' physical systems were obtained in an intrinsically random way. Spin polarization correlation experiments (SPCEs), performed to test the validity of Bell inequalities, clearly demonstrated the existence of strong long-range correlations and confirmed that the beams hitting far away detectors somehow preserve the memory of their common source which would be destroyed if the individual counts of far away detectors were purely random. Since the probabilities describe the random experiments and are not the attributes of the 'individual' physical systems, the claim that QT provides a complete description of 'individual' physical systems seems not only unjustified but also misleading and counterproductive. In this paper, we point out that we do not even know whether QT is predictably complete because it has not been tested carefully enough. Namely, it was not proven that the time series of existing experimental data did not contain some stochastic fine structures that could have been averaged out by describing them in terms of the empirical probability distributions. In this paper, we advocate various statistical tests that

  4. Focus on astronomical predictable events

    DEFF Research Database (Denmark)

    Jacobsen, Aase Roland

    2006-01-01

    At the Steno Museum Planetarium we have on many occasions used a countdown clock to focus on astronomical events. A countdown clock can provide actuality to predictable events, for example The Venus Transit, Opportunity landing on Mars and The Solar Eclipse. The movement of the clock attracts...... the public and makes a point of interest in a small exhibit area. A countdown clock can be simple, but it is possible to expand the concept to an eye-catching part of a museum....

  5. Making predictions in the multiverse

    International Nuclear Information System (INIS)

    Freivogel, Ben

    2011-01-01

    I describe reasons to think we are living in an eternally inflating multiverse where the observable 'constants' of nature vary from place to place. The major obstacle to making predictions in this context is that we must regulate the infinities of eternal inflation. I review a number of proposed regulators, or measures. Recent work has ruled out a number of measures by showing that they conflict with observation, and focused attention on a few proposals. Further, several different measures have been shown to be equivalent. I describe some of the many nontrivial tests these measures will face as we learn more from theory, experiment and observation.

  6. Making predictions in the multiverse

    Energy Technology Data Exchange (ETDEWEB)

    Freivogel, Ben, E-mail: benfreivogel@gmail.com [Center for Theoretical Physics and Laboratory for Nuclear Science, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2011-10-21

    I describe reasons to think we are living in an eternally inflating multiverse where the observable 'constants' of nature vary from place to place. The major obstacle to making predictions in this context is that we must regulate the infinities of eternal inflation. I review a number of proposed regulators, or measures. Recent work has ruled out a number of measures by showing that they conflict with observation, and focused attention on a few proposals. Further, several different measures have been shown to be equivalent. I describe some of the many nontrivial tests these measures will face as we learn more from theory, experiment and observation.

  7. Flooding Fragility Experiments and Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tahhan, Antonio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Muchmore, Cody [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nichols, Larinda [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bhandari, Bishwo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pope, Chad [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    This report describes the work that has been performed on flooding fragility, both the experimental tests being carried out and the probabilistic fragility predictive models being produced in order to use the test results. Flooding experiments involving full-scale doors have commenced in the Portal Evaluation Tank. The goal of these experiments is to develop a full-scale component flooding experiment protocol and to acquire data that can be used to create Bayesian regression models representing the fragility of these components. This work is in support of the Risk-Informed Safety Margin Characterization (RISMC) Pathway external hazards evaluation research and development.
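A minimal sketch of the kind of fragility model such test data could feed (an assumption on our part; the report's Bayesian regression models are not reproduced here): a logistic fragility curve P(failure | flood depth), fitted by plain gradient ascent on the log-likelihood, with made-up depth/failure data.

```python
import math

# Logistic fragility curve (illustration only): logit P(failure) = a + b*depth,
# fitted by gradient ascent on the log-likelihood.

def fit_fragility(depths, failures, steps=20000, lr=0.1):
    a, b = 0.0, 0.0
    n = len(depths)
    for _ in range(steps):
        ga = gb = 0.0
        for d, y in zip(depths, failures):
            p = 1.0 / (1.0 + math.exp(-(a + b * d)))
            ga += y - p        # d(log-likelihood)/da
            gb += (y - p) * d  # d(log-likelihood)/db
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# Made-up outcomes: door failure (1) vs no failure (0) at depths in metres.
depths   = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6]
failures = [0, 0, 0, 1, 0, 1, 1, 1]
a, b = fit_fragility(depths, failures)
p_low  = 1.0 / (1.0 + math.exp(-(a + b * 0.3)))   # fragility at 0.3 m
p_high = 1.0 / (1.0 + math.exp(-(a + b * 1.5)))   # fragility at 1.5 m
print(p_low < p_high)
```

The fitted slope is positive, so the estimated failure probability rises with flood depth, which is the qualitative shape a fragility curve is expected to have.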

  8. Mach's predictions and relativistic cosmology

    International Nuclear Information System (INIS)

    Heller, M.

    1989-01-01

    Deep methodological insight of Ernst Mach into the structure of Newtonian mechanics allowed him to ask questions whose importance can be appreciated only today. Three such ''predictions'' of Mach are briefly presented, namely: the possibility of the existence of an all-pervading medium which could serve as a universal frame of reference and which has actually been discovered in the form of the microwave background radiation; a certain ''smoothness'' of the Universe which is now recognized as the Robertson-Walker symmetries; and the possibility of the experimental verification of the mass anisotropy. 11 refs. (author)

  9. Zephyr - the next generation prediction

    DEFF Research Database (Denmark)

    Giebel, G.; Landberg, L.; Nielsen, Torben Skov

    2001-01-01

    Technical University. This paper will describe a new project funded by the Danish Ministry of Energy where the largest Danish utilities (Elkraft, Elsam, Eltra and SEAS) are participating. Two advantages can be achieved by combining the effort: The software architecture will be state-of-the-art, using...... the Java2TM platform and Enterprise Java Beans technology, and it will ensure that the best forecasts are given on all prediction horizons from the short range (0-9 hours) to the long range (36-48 hours). This is because the IMM approach uses online data and advanced statistical methods, which...

  10. Aviation turbulence processes, detection, prediction

    CERN Document Server

    Lane, Todd

    2016-01-01

    Anyone who has experienced turbulence in flight knows that it is usually not pleasant, and may wonder why this is so difficult to avoid. The book includes papers by various aviation turbulence researchers and provides background into the nature and causes of atmospheric turbulence that affect aircraft motion, and contains surveys of the latest techniques for remote and in situ sensing and forecasting of the turbulence phenomenon. It provides updates on the state-of-the-art research since earlier studies in the 1960s on clear-air turbulence, explains recent new understanding into turbulence generation by thunderstorms, and summarizes future challenges in turbulence prediction and avoidance.

  11. Prediction of burnout. Chapter 14

    International Nuclear Information System (INIS)

    Lee, D.H.

    1977-01-01

    A broad survey is made of the effect on burnout heat flux of various system parameters to give the reader a better initial idea of the significance of changes in individual parameters. A detailed survey is then made of various correlation equations for predicting burnout for steam-water in uniformly heated tubes, annuli, rectangular channels and rod clusters, giving details of recommended equations. Finally, comments are made on the influence of heat-flux profile and swirl flow on burnout, and on the definition of dryout margin. (author)

  12. Predicting word sense annotation agreement

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector; Johannsen, Anders Trærup; Lopez de Lacalle, Oier

    2015-01-01

    High agreement is a common objective when annotating data for word senses. However, a number of factors make perfect agreement impossible, e.g. the limitations of the sense inventories, the difficulty of the examples or the interpretation preferences of the annotators. Estimating potential...... agreement is thus a relevant task to supplement the evaluation of sense annotations. In this article we propose two methods to predict agreement on word-annotation instances. We experiment with a continuous representation and a three-way discretization of observed agreement. In spite of the difficulty...

  13. Mechanism and prediction of burnout

    International Nuclear Information System (INIS)

    Hewitt, G.F.

    1977-01-01

    The lecture begins by discussing the definitions of burnout and the various parametric effects as seen from the results for burnout measurements in uniformly heated round tubes. The correlations which are developed from these measurements and their applications to the case of non-uniform axial distribution of heat flux is then discussed in general terms as an illustration of the importance of knowing more about the nature and mechanism of the burnout. The next section of the lecture is concerned with summarizing broadly the various possible mechanisms in both the sub-cooled region and the quality region. It transpires that, for tubes of reasonable length, the normal first occurrence of burnout is in the annular flow regime. A discussion of burnout mechanisms in this regime then follows, with descriptions of the various experimental techniques evolved to study the mechanism. The final section of the lecture is concerned with prediction methods for burnout in annular flow and the application of these methods to prediction of burnout in round tubes, annuli and rod bundles, with a variety of fluids

  14. On predicting monitoring system effectiveness

    Science.gov (United States)

    Cappello, Carlo; Sigurdardottir, Dorotea; Glisic, Branko; Zonta, Daniele; Pozzi, Matteo

    2015-03-01

    While the objective of structural design is to achieve stability with an appropriate level of reliability, the design of systems for structural health monitoring is performed to identify a configuration that enables acquisition of data with an appropriate level of accuracy in order to understand the performance of a structure or its condition state. However, a rational standardized approach for monitoring system design is not fully available. Hence, when engineers design a monitoring system, their approach is often heuristic with performance evaluation based on experience, rather than on quantitative analysis. In this contribution, we propose a probabilistic model for the estimation of monitoring system effectiveness based on information available in prior condition, i.e. before acquiring empirical data. The presented model is developed considering the analogy between structural design and monitoring system design. We assume that the effectiveness can be evaluated based on the prediction of the posterior variance or covariance matrix of the state parameters, which we assume to be defined in a continuous space. Since the empirical measurements are not available in prior condition, the estimation of the posterior variance or covariance matrix is performed considering the measurements as a stochastic variable. Moreover, the model takes into account the effects of nuisance parameters, which are stochastic parameters that affect the observations but cannot be estimated using monitoring data. Finally, we present an application of the proposed model to a real structure. The results show how the model enables engineers to predict whether a sensor configuration satisfies the required performance.
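The key point, that effectiveness can be predicted in prior condition, can be made concrete in the linear-Gaussian special case (a simplification of the paper's model; the sensor layouts below are invented): for y = H x + e, the posterior covariance of the state parameters involves the sensor configuration H and the noise covariance R, but not the measured values y.

```python
import numpy as np

# For y = H x + e with e ~ N(0, R) and a Gaussian prior on x, the posterior
# covariance of x is (P0^-1 + H^T R^-1 H)^-1: it depends on the layout H and
# noise R, but not on the data y, so it is computable before measuring.

def posterior_cov(H, R, prior_cov):
    return np.linalg.inv(np.linalg.inv(prior_cov)
                         + H.T @ np.linalg.inv(R) @ H)

prior = np.eye(2)                 # prior covariance of two state parameters
R = 0.1 * np.eye(3)               # noise covariance of three sensors

# Two hypothetical sensor configurations (invented layouts):
H_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # covers both parameters
H_b = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])  # redundant sensors

cov_a = posterior_cov(H_a, R, prior)
cov_b = posterior_cov(H_b, R, prior)
# Rank configurations by predicted total posterior variance (smaller is better).
print(np.trace(cov_a) < np.trace(cov_b))   # → True
```

Layout A is predicted to be more effective than layout B before any data exist, which is exactly the kind of comparison the proposed model is meant to support (the paper additionally handles nuisance parameters and treats the measurements as stochastic).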

  15. Prediction Reweighting for Domain Adaptation.

    Science.gov (United States)

    Shuang Li; Shiji Song; Gao Huang

    2017-07-01

    There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.
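The core reweighting step can be sketched as follows (a simplification, not the paper's full algorithm with manifold regularization; the linear models and data points are made up): predictions of a source-trained classifier on target instances are weighted through a sigmoid of their signed distance to the domain separator, so source-like instances carry larger weight.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical pre-trained linear models (weights invented for illustration):
def h(x):            # source-task decision function (sign gives the label)
    return 0.8 * x[0] - 0.5 * x[1]

def f(x):            # domain separator: f(x) > 0 means "source-like"
    return -1.0 * x[0] + 0.2

target_points = [[0.1, 0.3], [0.5, 0.1], [1.2, 0.4]]
preds   = [1 if h(x) > 0 else 0 for x in target_points]
weights = [sigmoid(f(x)) for x in target_points]  # larger = more trusted
print(preds, [round(w, 2) for w in weights])      # → [0, 1, 1] [0.52, 0.43, 0.27]
```

Instances deeper in the target region get smaller weights; in the paper, labels of high-weight target instances are then propagated to low-weight ones.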

  16. Insertion, Validation, and Application of Barotropic and Baroclinic Tides in 1/12 and 1/25 Degree Global HYCOM

    Science.gov (United States)

    2013-09-30

    Implications for the development of the proposed wide-swath satellite altimeter (NASA/CNES SWOT mission). Three-dimensional maps of internal-wave driven ... planned wide-swath satellite altimeter mission (SWOT). --Conrad Luecke, graduate student in the UM Department of Earth and Environmental Sciences ... harmonic analysis. If instead they are mostly non-stationary, then harmonic analysis will not suffice. In Figure 2 we display the non-stationarity as

  17. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  18. The Use of Linear Programming for Prediction.

    Science.gov (United States)

    Schnittjer, Carl J.

    The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)

  19. Profit Driven Decision Trees for Churn Prediction

    OpenAIRE

    Höppner, Sebastiaan; Stripling, Eugen; Baesens, Bart; Broucke, Seppe vanden; Verdonck, Tim

    2017-01-01

    Customer retention campaigns increasingly rely on predictive models to detect potential churners in a vast customer base. From the perspective of machine learning, the task of predicting customer churn can be presented as a binary classification problem. Using data on historic behavior, classification algorithms are built with the purpose of accurately predicting the probability of a customer defecting. The predictive churn models are then commonly selected based on accuracy related performan...

  20. Robust predictions of the interacting boson model

    International Nuclear Information System (INIS)

    Casten, R.F.; Koeln Univ.

    1994-01-01

    While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these ''robust'' predictions and compares them with the data

  1. Phenology prediction component of GypsES

    Science.gov (United States)

    Jesse A. Logan; Lukas P. Schaub; F. William Ravlin

    1991-01-01

    Prediction of phenology is an important component of most pest management programs, and considerable research effort has been expended toward development of predictive tools for gypsy moth phenology. Although phenological prediction is potentially valuable for timing of spray applications (e.g. Bt, or Gypcheck) and other management activities (e.g. placement and...

  2. Climate Prediction Center - The ENSO Cycle

    Science.gov (United States)


  3. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
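A one-step Kalman predictor of the kind such prediction-error criteria are built on can be sketched for a scalar model (an illustrative special case, not the paper's multi-step comparison; the noise levels and data are simulated).

```python
import numpy as np

# Scalar state-space model: x[k+1] = a x[k] + w[k],  y[k] = x[k] + v[k].
# The Kalman predictor supplies E[y[k] | y[0..k-1]]; its errors are what a
# prediction-error criterion (least squares or maximum likelihood) penalizes.

def one_step_predictions(ys, a, q, r):
    xhat, P = 0.0, 1.0                 # prior state mean and variance
    preds = []
    for yk in ys:
        preds.append(xhat)             # one-step prediction of y[k]
        K = P / (P + r)                # Kalman gain
        xhat = a * (xhat + K * (yk - xhat))   # measurement + time update
        P = a * a * (1.0 - K) * P + q
    return preds

rng = np.random.default_rng(0)
a, q, r = 0.9, 0.1, 0.2
x, ys = 0.0, []
for _ in range(200):                   # simulate the model
    x = a * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

preds = one_step_predictions(ys, a, q, r)
sse = sum((yk - pk) ** 2 for yk, pk in zip(ys, preds))
print(sse < sum(yk ** 2 for yk in ys))   # beats the trivial zero predictor
```

Minimizing this sum of squared one-step errors over the model parameters is the least-squares variant of the criteria compared in the paper.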

  4. Relationship between water temperature predictability and aquatic ...

    African Journals Online (AJOL)

    Macroinvertebrate taxonomic turnover across seasons was higher for sites having lower water temperature predictability values than for sites with higher predictability, while temporal partitioning was greater at sites with greater temperature variability. Macroinvertebrate taxa responded in a predictable manner to changes in ...

  5. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, so research on stock market analysis and prediction has received much attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up the stock market prediction model, and…
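The basic ingredient, a small feed-forward network trained by backpropagation (BP) to predict the next value from a short window of past values, can be sketched on a synthetic series (illustrative only; the article's actual network and data are not reproduced).

```python
import numpy as np

# One-hidden-layer BP network (illustration only): predict the next value of
# a synthetic "price" series from the previous three values.

rng = np.random.default_rng(1)
t = np.arange(120)
price = np.sin(t / 6.0) + rng.normal(0.0, 0.05, t.size)       # toy series

X = np.array([price[i:i + 3] for i in range(len(price) - 3)])  # 3-step windows
y = price[3:]                                                  # next value

W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)   # hidden layer (8 tanh units)
W2 = rng.normal(0.0, 0.5, 8);      b2 = 0.0           # linear output
lr = 0.01
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y                     # forward pass + residual
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h ** 2)   # backpropagate through tanh
    gW1 = X.T @ dh / len(y);  gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse < float(np.var(y)))   # beats predicting the mean
```

After training, the network's mean squared prediction error falls well below the series variance, i.e. it has learned structure beyond the mean; real stock series are of course far noisier than this toy sinusoid.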

  6. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    OpenAIRE

    Jerzy Balicki; Piotr Dryja; Waldemar Korłub; Piotr Przybyłek; Maciej Tyszka; Marcin Zadroga; Marcin Zakidalski

    2016-01-01

    Artificial neural networks can be used to predict share investment on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of cooperation of neural network algorithms with evolutionary methods and support vector machines. In addition, reference is made to other methods of artificial intelligence that are used in financial prediction.

  7. NEURAL METHODS FOR THE FINANCIAL PREDICTION

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2016-06-01

    Artificial neural networks can be used to predict share investment on the stock market, assess the reliability of credit clients, or predict banking crises. Moreover, this paper discusses the principles of cooperation of neural network algorithms with evolutionary methods and support vector machines. In addition, reference is made to other methods of artificial intelligence that are used in financial prediction.

  8. Applications for predictive microbiology to food packaging

    Science.gov (United States)

    Predictive microbiology has been used for several years in the food industry to predict microbial growth, inactivation and survival. Predictive models provide a useful tool in risk assessment, HACCP set-up and GMP for the food industry to enhance microbial food safety. This report introduces the c...

  9. Predictive Analytics in Information Systems Research

    NARCIS (Netherlands)

    G. Shmueli (Galit); O.R. Koppius (Otto)

    2011-01-01

    This research essay highlights the need to integrate predictive analytics into information systems research and shows several concrete ways in which this goal can be accomplished. Predictive analytics include empirical methods (statistical and other) that generate data predictions as

  10. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
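The c-statistic reported by most of these CPMs is the probability that a randomly chosen patient who had the outcome is assigned a higher predicted risk than one who did not (ties counted as half). A minimal computation with made-up risks:

```python
def c_statistic(risks, outcomes):
    """Concordance: P(risk of a random case > risk of a random non-case)."""
    pairs = [(r1, r0)
             for r1, o1 in zip(risks, outcomes) if o1
             for r0, o0 in zip(risks, outcomes) if not o0]
    wins = sum(1.0 if r1 > r0 else 0.5 if r1 == r0 else 0.0
               for r1, r0 in pairs)
    return wins / len(pairs)

# Made-up predicted risks and observed outcomes (1 = event occurred).
risks    = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]
outcomes = [1, 1, 1, 0, 0, 0]
print(round(c_statistic(risks, outcomes), 3))   # → 0.889 (8 of 9 pairs concordant)
```

This pairwise definition is equivalent to the area under the ROC curve, which is why the two are reported interchangeably in the CPM literature.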

  11. Pretest Predictions for Ventilation Tests

    International Nuclear Information System (INIS)

    Y. Sun; H. Yang; H.N. Kalia

    2007-01-01

    The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, concrete pipe walls, and insulation that will be developed during the ventilation tests involving various test conditions. The results will be used as input to the following three areas: (1) Decisions regarding testing set-up and performance. (2) Assessing how best to scale the test phenomena measured. (3) Validating the numerical approach for modeling continuous ventilation. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the ventilation tests, and develop and describe numerical methods that can be used to calculate the effects of continuous ventilation. Sensitivity studies to assess the impact of variation of linear power densities (linear heat loads) and ventilation air flow rates are included. The calculation is limited to thermal effects only.

  12. The Predictiveness of Achievement Goals

    Directory of Open Access Journals (Sweden)

    Huy P. Phan

    2013-11-01

    Using the Revised Achievement Goal Questionnaire (AGQ-R) (Elliot & Murayama, 2008), we explored first-year university students' achievement goal orientations on the premise of the 2 × 2 model. Similar to recent studies (Elliot & Murayama, 2008; Elliot & Thrash, 2010), we conceptualized a model that included both antecedents (i.e., enactive learning experience) and consequences (i.e., intrinsic motivation and academic achievement) of achievement goals. Two hundred seventy-seven university students (151 women, 126 men) participated in the study. Structural equation modeling procedures yielded evidence that showed the predictive effects of enactive learning experience and mastery goals on intrinsic motivation. Academic achievement was influenced by intrinsic motivation, performance-approach goals, and enactive learning experience. Enactive learning experience also served as an antecedent of the four achievement goal types. On the whole, the evidence obtained supports the AGQ-R and contributes, theoretically, to the 2 × 2 model.

  13. Academic Training: Predicting Natural Catastrophes

    CERN Multimedia

    Françoise Benz

    2005-01-01

    2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 12, 13, 14, 15, 16 December from 11:00 to 12:00 - Main Auditorium, bldg. 500 Predicting Natural Catastrophes E. OKAL / Northwestern University, Evanston, USA 1. Tsunamis -- Introduction Definition of phenomenon - basic properties of the waves Propagation and dispersion Interaction with coasts - Geological and societal effects Origin of tsunamis - natural sources Scientific activities in connection with tsunamis. Ideas about simulations 2. Tsunami generation The earthquake source - conventional theory The earthquake source - normal mode theory The landslide source Near-field observation - The Plafker index Far-field observation - Directivity 3. Tsunami warning General ideas - History of efforts Mantle magnitudes and TREMOR algorithms The challenge of 'tsunami earthquakes' Energy-moment ratios and slow earthquakes Implementation and the components of warning centers 4. Tsunami surveys Principles and methodologies Fifteen years of field surveys and re...

  14. The PredictAD project

    DEFF Research Database (Denmark)

    Antila, Kari; Lötjönen, Jyrki; Thurfjell, Lennart

    2013-01-01

    Alzheimer's disease (AD) is the most common cause of dementia affecting 36 million people worldwide. As the demographic transition in the developed countries progresses towards older population, the worsening ratio of workers per retirees and the growing number of patients with age-related illnes...... candidates and implement the framework in software. The results are currently used in several research projects, licensed to commercial use and being tested for clinical use in several trials....... objective of the PredictAD project was to find and integrate efficient biomarkers from heterogeneous patient data to make early diagnosis and to monitor the progress of AD in a more efficient, reliable and objective manner. The project focused on discovering biomarkers from biomolecular data...

  15. Prediction and probability in sciences

    International Nuclear Information System (INIS)

    Klein, E.; Sacquin, Y.

    1998-01-01

    This book reports the 7 presentations made at the third meeting 'physics and fundamental questions', whose theme was probability and prediction. The concept of probability, invented to apprehend random phenomena, has become an important branch of mathematics, and its applications range from radioactivity to species evolution, via cosmology and the management of very weak risks. The notion of probability is the basis of quantum mechanics and is thus bound to the very nature of matter. The 7 topics are: - radioactivity and probability, - statistical and quantum fluctuations, - quantum mechanics as a generalized probability theory, - probability and the irrational efficiency of mathematics, - can we foresee the future of the universe?, - chance, eventuality and necessity in biology, - how to manage weak risks? (A.C.)

  16. Meditation experience predicts introspective accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran C R Fox

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1-15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a 'body-scanning' meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices.

  17. Predicting degradability of organic chemicals

    Energy Technology Data Exchange (ETDEWEB)

    Finizio, A; Vighi, M [Milan Univ. (Italy). Ist. di Entomologia Agraria

    1992-05-01

Degradability, particularly biodegradability, is one of the most important factors governing the persistence of pollutants in the environment and consequently influencing their behavior and toxicity in aquatic and terrestrial ecosystems. The need for reliable persistence data in order to assess the environmental fate and hazard of chemicals by means of predictive approaches is evident. Biodegradability tests are required by the EEC directive on new chemicals. Nevertheless, degradation tests are not easy to carry out and data on existing chemicals are very scarce. Therefore, assessing the fate of chemicals in the environment from the simple study of their structure would be a useful tool. Rates of degradation are a function of the rates of a series of processes. Correlation between degradation rates and structural parameters will be facilitated if one of the processes is rate determining. This review is a survey of studies dealing with relationships between structure and biodegradation of organic chemicals, to identify the value and limitations of this approach.

  18. Unrenormalizable theories can be predictive

    CERN Document Server

    Kubo, J

    2003-01-01

    Unrenormalizable theories contain infinitely many free parameters. Considering these theories in terms of the Wilsonian renormalization group (RG), we suggest a method for removing this large ambiguity. Our basic assumption is the existence of a maximal ultraviolet cutoff in a cutoff theory, and we require that the theory be so fine tuned as to reach the maximal cutoff. The theory so obtained behaves as a local continuum theory to the shortest distance. In concrete examples of the scalar theory we find that at least in a certain approximation to the Wilsonian RG, this requirement enables us to make unique predictions in the infrared regime in terms of a finite number of independent parameters. Therefore, this method might provide a way for calculating quantum corrections in a low-energy effective theory of quantum gravity. (orig.)

  19. Lower-limb growth: how predictable are predictions?

    Science.gov (United States)

    Kelly, Paula M; Diméglio, Alain

    2008-12-01

The purpose of this review is to clarify the different methods of predictions for growth of the lower limb and to propose a simplified method to calculate the final limb deficit and the correct timing of epiphysiodesis. Lower-limb growth is characterized by four different periods: antenatal growth (exponential); birth to 5 years (rapid growth); 5 years to puberty (stable growth); and puberty, which is the final growth spurt characterized by a rapid acceleration phase lasting 1 year followed by a more gradual deceleration phase lasting 1.5 years. The younger the child, the less precise is the prediction. Repeating measurements can increase the accuracy of predictions, and those calculated at the beginning of puberty are the most accurate. The challenge is to reduce the margin of uncertainty. Cross-checking the different parameters (bone age, Tanner signs, annual growth velocity of the standing height, sub-ischial length and sitting height) is the most accurate method. Charts and diagrams are only models and templates. There are many mathematical equations in the literature; we must be able to step back from these rigid calculations because they are a false guarantee. The dynamic of growth needs a flexible approach. There are, however, some rules of thumb that may be helpful for different clinical scenarios. For congenital malformations, at birth the limb length discrepancy must be multiplied by 5 to give the final limb length discrepancy. Multiply by 3 at 1 year of age; by 2 at 3 years in girls and 4 years in boys; by 1.5 at 7 years in girls and boys; by 1.2 at 9 years in girls and 11 years in boys; and by 1.1 at the onset of puberty (11 years bone age for girls and 13 years bone age for boys). For the timing of epiphysiodesis, several simple principles must be observed to reduce the margin of error; strict and repeated measurements, rigorous analysis of the data obtained, perfect evaluation of bone age with elbow plus hand radiographs and confirmation with Tanner
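The rule-of-thumb multipliers in this abstract can be collected into a small lookup. The function name and the nearest-milestone-at-or-below behavior are our own choices for illustration, not part of the review (clinically, ages between the listed milestones need judgement, not interpolation):

```python
def lld_multiplier(age_years, sex, at_puberty_onset=False):
    """Rule-of-thumb factor: final limb-length discrepancy is roughly
    the current discrepancy times this multiplier, following the
    milestones quoted in the abstract. For ages between milestones
    this sketch simply returns the factor of the nearest listed
    milestone at or below the given age."""
    if at_puberty_onset:          # 11 yr bone age (girls) / 13 yr (boys)
        return 1.1
    milestones = {                # age -> factor, per sex
        "girl": [(0, 5.0), (1, 3.0), (3, 2.0), (7, 1.5), (9, 1.2)],
        "boy":  [(0, 5.0), (1, 3.0), (4, 2.0), (7, 1.5), (11, 1.2)],
    }
    factor = milestones[sex][0][1]
    for age, f in milestones[sex]:
        if age_years >= age:
            factor = f
    return factor

print(lld_multiplier(0, "girl"))   # at birth: multiply discrepancy by 5
print(lld_multiplier(3, "girl"))   # girls at 3 years: factor 2
print(lld_multiplier(3, "boy"))    # boys reach factor 2 only at 4 years
```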

  20. PEMS. Advanced predictive emission monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sandvig Nielsen, J.

    2010-07-15

In the project, PEMS models have been developed for boilers, internal combustion engines and gas turbines. The PEMS models have been developed using two principles: The one called ''first principles'' is based on thermo-kinetic modeling of the NO{sub x}-formation by modeling conditions (like temperature, pressure and residence time) in the reaction zones. The other one is data driven, using artificial neural networks (ANN), and includes no physical properties and no thermo-kinetic formulation. Models of first principles have been developed for gas turbines and gas engines. Data driven models have been developed for gas turbines, gas engines and boilers. The models have been tested on data from sites located in Denmark and the Middle East. Weel and Sandvig conducted the on-site emission measurements used for developing and testing the PEMS models. For gas turbines, both the ''first principles'' and the data driven models have performed excellently considering the ability to reproduce the emission levels of NO{sub x} according to the input variables used for calibration. Data driven models for boilers and gas engines have performed excellently as well. The rather comprehensive first principle model, developed for gas engines, did not perform as well in the prediction of NO{sub x}. Possibly a more complex model formulation is required for internal combustion engines. In general, both model types have been validated on data extracted from the data set used for calibration. The data for validation have been selected randomly as individual samplings, and are scattered over the entire measuring campaign. For one natural gas engine a secondary measuring campaign was conducted half a year later than the campaign used for training the data driven model. In the meantime, this engine had been through a refurbishment that included new pistons, piston rings and cylinder linings and cleaning of the cylinder heads. Despite the refurbishment, the

  1. Earthquake predictions using seismic velocity ratios

    Science.gov (United States)

    Sherburne, R. W.

    1979-01-01

Since the beginning of modern seismology, seismologists have contemplated predicting earthquakes. The usefulness of earthquake predictions to the reduction of human and economic losses and the value of long-range earthquake prediction to planning is obvious. Not as clear are the long-range economic and social impacts of earthquake prediction on a specific area. The general consensus among scientists and government officials, however, is that the quest for earthquake prediction is a worthwhile goal and should be pursued with a sense of urgency.

  2. Conditional prediction intervals of wind power generation

    DEFF Research Database (Denmark)

    Pinson, Pierre; Kariniotakis, Georges

    2010-01-01

A generic method for providing prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts, by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform...... on the characteristics of prediction errors for providing conditional interval forecasts. By simultaneously generating prediction intervals with various nominal coverage rates, one obtains full predictive distributions of wind generation. Adapted resampling is applied here to the case of an onshore Danish wind farm...... to the case of a large number of wind farms in Europe and Australia among others is finally discussed....

  3. Sports Tournament Predictions Using Direct Manipulation.

    Science.gov (United States)

    Vuillemot, Romain; Perin, Charles

    2016-01-01

    An advanced interface for sports tournament predictions uses direct manipulation to allow users to make nonlinear predictions. Unlike previous interface designs, the interface helps users focus on their prediction tasks by enabling them to first choose a winner and then fill out the rest of the bracket. In real-world tests of the proposed interface (for the 2014 FIFA World Cup tournament and 2015/2016 UEFA Champions League), the authors validated the use of direct manipulation as an alternative to widgets. Using visitor interaction logs, they were able to determine the strategies people use to perform predictions and identify potential areas of improvement for further prediction interfaces.

  4. The function and failure of sensory predictions.

    Science.gov (United States)

    Bansal, Sonia; Ford, Judith M; Spering, Miriam

    2018-04-23

    Humans and other primates are equipped with neural mechanisms that allow them to automatically make predictions about future events, facilitating processing of expected sensations and actions. Prediction-driven control and monitoring of perceptual and motor acts are vital to normal cognitive functioning. This review provides an overview of corollary discharge mechanisms involved in predictions across sensory modalities and discusses consequences of predictive coding for cognition and behavior. Converging evidence now links impairments in corollary discharge mechanisms to neuropsychiatric symptoms such as hallucinations and delusions. We review studies supporting a prediction-failure hypothesis of perceptual and cognitive disturbances. We also outline neural correlates underlying prediction function and failure, highlighting similarities across the visual, auditory, and somatosensory systems. In linking basic psychophysical and psychophysiological evidence of visual, auditory, and somatosensory prediction failures to neuropsychiatric symptoms, our review furthers our understanding of disease mechanisms. © 2018 New York Academy of Sciences.

  5. Evaluating predictions of critical oxygen desaturation events

    International Nuclear Information System (INIS)

    ElMoaqet, Hisham; Tilbury, Dawn M; Ramachandran, Satya Krishna

    2014-01-01

This paper presents a new approach for evaluating predictions of oxygen saturation levels in blood (SpO2). A performance metric based on a threshold is proposed to evaluate SpO2 predictions based on whether or not they are able to capture critical desaturations in the SpO2 time series of patients. We use linear auto-regressive models built using historical SpO2 data to predict critical desaturation events with the proposed metric. In 20 s prediction intervals, 88%–94% of the critical events were captured with positive predictive values (PPVs) between 90% and 99%. Increasing the prediction horizon to 60 s, 46%–71% of the critical events were detected with PPVs between 81% and 97%. In both prediction horizons, more than 97% of the non-critical events were correctly classified. The overall classification capabilities of the developed predictive models were also investigated. The areas under the ROC curves for 60 s predictions from the developed models are between 0.86 and 0.98. Furthermore, we investigate the effect of including pulse rate (PR) dynamics in the models and predictions. We show no improvement in the percentage of predicted critical desaturations when PR dynamics are incorporated into the SpO2 predictive models (p-value = 0.814). We also show that including the PR dynamics does not improve the earliest time at which critical SpO2 levels are predicted (p-value = 0.986). Our results indicate that oxygen in blood is an effective input to the PR rather than vice versa. We demonstrate that the combination of predictive models with frequent pulse oximetry measurements can be used as a warning of critical oxygen desaturations that may have adverse effects on the health of patients. (paper)
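The event-capture idea behind the metric can be sketched in miniature. This is a simplified per-time-step version (the paper works with prediction horizons on real patient SpO2 series; the 90% threshold and the numbers below are illustrative):

```python
def desaturation_events(series, threshold=90.0):
    """Indices where SpO2 drops below the critical threshold."""
    return {i for i, v in enumerate(series) if v < threshold}

def event_capture_scores(actual, predicted, threshold=90.0):
    """Sensitivity (fraction of actual critical events captured by the
    prediction) and positive predictive value (fraction of predicted
    critical events that really occurred). Simplified sketch: events
    are matched per time step, with no tolerance window."""
    a = desaturation_events(actual, threshold)
    p = desaturation_events(predicted, threshold)
    tp = len(a & p)
    sensitivity = tp / len(a) if a else 1.0
    ppv = tp / len(p) if p else 1.0
    return sensitivity, ppv

actual    = [97, 95, 89, 88, 93, 96, 87, 95]   # illustrative SpO2 values (%)
predicted = [96, 94, 88, 89, 94, 95, 91, 94]
print(event_capture_scores(actual, predicted))
```

Here the model captures two of the three actual desaturations (sensitivity 2/3) and every predicted desaturation was real (PPV 1.0).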

  6. Potential for western US seasonal snowpack prediction

    Science.gov (United States)

    Kapnick, Sarah B.; Yang, Xiaosong; Vecchi, Gabriel A.; Delworth, Thomas L.; Gudgel, Rich; Malyshev, Sergey; Milly, Paul C. D.; Shevliakova, Elena; Underwood, Seth; Margulis, Steven A.

    2018-01-01

Western US snowpack—snow that accumulates on the ground in the mountains—plays a critical role in regional hydroclimate and water supply, with 80% of snowmelt runoff being used for agriculture. While climate projections provide estimates of snowpack loss by the end of the century and weather forecasts provide predictions of weather conditions out to 2 weeks, less progress has been made for snow predictions at seasonal timescales (months to 2 years), crucial for regional agricultural decisions (e.g., plant choice and quantity). Seasonal predictions with climate models first took the form of El Niño predictions 3 decades ago, with hydroclimate predictions emerging more recently. While the field has been focused on single-season predictions (3 months or less), we are now poised to advance our predictions beyond this timeframe. Utilizing observations, climate indices, and a suite of global climate models, we demonstrate the feasibility of seasonal snowpack predictions and quantify the limits of predictive skill 8 months in advance. This physically based dynamic system outperforms observation-based statistical predictions made on July 1 for March snowpack everywhere except the southern Sierra Nevada, a region where prediction skill is nonexistent for every predictor presently tested. Additionally, in the absence of externally forced negative trends in snowpack, narrow maritime mountain ranges with high hydroclimate variability pose a challenge for seasonal prediction in our present system; natural snowpack variability may inherently be unpredictable at this timescale. This work highlights present prediction system successes and gives cause for optimism for developing seasonal predictions for societal needs.

  7. Similarities and Differences Between Warped Linear Prediction and Laguerre Linear Prediction

    NARCIS (Netherlands)

    Brinker, Albertus C. den; Krishnamoorthi, Harish; Verbitskiy, Evgeny A.

    2011-01-01

    Linear prediction has been successfully applied in many speech and audio processing systems. This paper presents the similarities and differences between two classes of linear prediction schemes, namely, Warped Linear Prediction (WLP) and Laguerre Linear Prediction (LLP). It is shown that both

  8. Radon observation for earthquake prediction

    Energy Technology Data Exchange (ETDEWEB)

    Wakita, Hiroshi [Tokyo Univ. (Japan)

    1998-12-31

    Systematic observation of groundwater radon for the purpose of earthquake prediction began in Japan in late 1973. Continuous observations are conducted at fixed stations using deep wells and springs. During the observation period, significant precursory changes including the 1978 Izu-Oshima-kinkai (M7.0) earthquake as well as numerous coseismic changes were observed. At the time of the 1995 Kobe (M7.2) earthquake, significant changes in chemical components, including radon dissolved in groundwater, were observed near the epicentral region. Precursory changes are presumably caused by permeability changes due to micro-fracturing in basement rock or migration of water from different sources during the preparation stage of earthquakes. Coseismic changes may be caused by seismic shaking and by changes in regional stress. Significant drops of radon concentration in groundwater have been observed after earthquakes at the KSM site. The occurrence of such drops appears to be time-dependent, and possibly reflects changes in the regional stress state of the observation area. The absence of radon drops seems to be correlated with periods of reduced regional seismic activity. Experience accumulated over the two past decades allows us to reach some conclusions: 1) changes in groundwater radon do occur prior to large earthquakes; 2) some sites are particularly sensitive to earthquake occurrence; and 3) the sensitivity changes over time. (author)

  9. Solar Flares and Their Prediction

    Science.gov (United States)

    Adams, Mitzi L.

    1999-01-01

Solar flares and coronal mass ejections (CMEs) can strongly affect the local environment at the Earth. A major challenge for solar physics is to understand the physical mechanisms responsible for the onset of solar flares. Flares, characterized by a sudden release of energy (approx. 10^32 ergs for the largest events) within the solar atmosphere, result in the acceleration of electrons, protons, and heavier ions as well as the production of electromagnetic radiation from hard X-rays to km radio waves (wavelengths approx. 10^-9 cm to 10^6 cm). Observations suggest that solar flares and sunspots are strongly linked. For example, a study of data from 1956-1969 reveals that approx. 93 percent of major flares originate in active regions with spots. Furthermore, the global structure of the sunspot magnetic field can be correlated with flare activity. This talk will review what we know about flare causes and effects and will discuss techniques for quantifying parameters, which may lead to a prediction of solar flares.

  10. Incorrect predictions reduce switch costs.

    Science.gov (United States)

    Kleinsorge, Thomas; Scheil, Juliane

    2015-07-01

    In three experiments, we combined two sources of conflict within a modified task-switching procedure. The first source of conflict was the one inherent in any task switching situation, namely the conflict between a task set activated by the recent performance of another task and the task set needed to perform the actually relevant task. The second source of conflict was induced by requiring participants to guess aspects of the upcoming task (Exps. 1 & 2: task identity; Exp. 3: position of task precue). In case of an incorrect guess, a conflict accrues between the representation of the guessed task and the actually relevant task. In Experiments 1 and 2, incorrect guesses led to an overall increase of reaction times and error rates, but they reduced task switch costs compared to conditions in which participants predicted the correct task. In Experiment 3, incorrect guesses resulted in faster performance overall and to a selective decrease of reaction times in task switch trials when the cue-target interval was long. We interpret these findings in terms of an enhanced level of controlled processing induced by a combination of two sources of conflict converging upon the same target of cognitive control. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Parallel Prediction of Stock Volatility

    Directory of Open Access Journals (Sweden)

    Priscilla Jenq

    2017-10-01

Full Text Available Volatility is a measurement of the risk of financial products. A stock will hit new highs and lows over time and if these highs and lows fluctuate wildly, then it is considered a high volatile stock. Such a stock is considered riskier than a stock whose volatility is low. Although highly volatile stocks are riskier, the returns that they generate for investors can be quite high. Of course, with a riskier stock also comes the chance of losing money and yielding negative returns. In this project, we will use historic stock data to help us forecast volatility. Since the financial industry usually uses the S&P 500 as the indicator of the market, we will use the S&P 500 as a benchmark to compute the risk. We will also use artificial neural networks as a tool to predict volatilities for a specific time frame that will be set when we configure this neural network. There have been reports that neural networks with different numbers of layers and different numbers of hidden nodes may generate varying results. In fact, we may be able to find the best configuration of a neural network to compute volatilities. We will implement this system using the parallel approach. The system can be used as a tool for investors to allocate and hedge assets.
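Before any neural network can be trained, the prediction target itself, historical volatility, has to be computed from price data. A standard sketch (annualized standard deviation of log returns; the closing prices below are illustrative, and the 252-day convention is an assumption, not from the paper):

```python
import math

def annualized_volatility(prices, periods_per_year=252):
    """Historical volatility: sample standard deviation of log returns,
    scaled by the square root of the number of trading periods per
    year (252 daily sessions by convention)."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

# Illustrative closes; a real pipeline would feed daily S&P 500 and
# per-stock prices and compare the two volatilities as a risk measure.
closes = [100.0, 101.5, 99.8, 102.2, 101.0, 103.1]
print(round(annualized_volatility(closes), 3))
```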

  12. Color prediction in textile application

    Science.gov (United States)

    De Lucia, Maurizio; Buonopane, Massimo

    2004-09-01

Nowadays, production systems of fancy yarns for knits allow the creation of extremely complex products in which many effects are obtained by means of color alteration. The current production technique consists of defining the type and quantity of fibers by making preliminary samples. These samples are then compared with a reference sample, a comparison based on operator experience. Many samples are required in order to achieve a sample similar to the reference one. This work requires time and thus additional costs for a textile manufacturer. In addition, the methodology is subjective. Nowadays, spectrophotometers are the only devices that seem able to provide objective indications. They are based on a spectral analysis of the light reflected by the knit material. In this paper the study of a new method for color evaluation of a mix of wool fibers with different colors is presented. First, the fibers were characterized through scattering and absorption coefficients using the Kubelka-Munk theory. Then the estimated color was compared with a reference item, in order to define conformity by means of objective parameters. Finally, the theoretical characterization was compared with the measured quantity. This allowed estimation of prediction quality.
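The Kubelka-Munk step described above — characterizing each fiber by absorption/scattering and predicting the color of a blend — can be sketched at a single wavelength. The additive K/S mixing assumption is the usual textile-blend simplification, and the reflectances and fractions below are made up for illustration:

```python
import math

def ks_from_reflectance(R):
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2R)."""
    return (1.0 - R) ** 2 / (2.0 * R)

def reflectance_from_ks(ks):
    """Invert the Kubelka-Munk function back to reflectance R."""
    return 1.0 + ks - math.sqrt(ks ** 2 + 2.0 * ks)

def blend_reflectance(reflectances, fractions):
    """Predict the reflectance of a fiber blend at one wavelength:
    K/S values are assumed to mix linearly with mass fraction (the
    additive Kubelka-Munk assumption used for textile blends)."""
    ks_mix = sum(f * ks_from_reflectance(r)
                 for r, f in zip(reflectances, fractions))
    return reflectance_from_ks(ks_mix)

# 70/30 blend of two fibers with reflectances 0.6 and 0.2 at one wavelength
print(round(blend_reflectance([0.6, 0.2], [0.7, 0.3]), 4))
```

Repeating this per wavelength across the visible spectrum yields a predicted reflectance curve that can be compared against the spectrophotometer reading of the reference item.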

  13. Predictable repair of provisional restorations.

    Science.gov (United States)

    Hammond, Barry D; Cooper, Jeril R; Lazarchik, David A

    2009-01-01

The importance of provisional restorations is often downplayed, as they are thought of by some as only "temporaries." As a result, a less-than-ideal provisional is sometimes fabricated, in part because of the additional chair time required to make provisional modifications when using traditional techniques. Additionally, in many dental practices, these provisional restorations are often fabricated by auxiliary personnel who may not be as well trained in the fabrication process. Because provisionals play an important role in achieving the desired final functional and esthetic result, a high-quality provisional restoration is essential to fabricating a successful definitive restoration. This article describes a method for efficiently and predictably repairing both methacrylate and bis-acryl provisional restorations using flowable composite resin. By use of this relatively simple technique, provisional restorations can now be modified or repaired in a timely and productive manner to yield an exceptional result. Successful execution of esthetic and restorative dentistry requires attention to detail in every aspect of the case. Fabrication of high-quality provisional restorations can, at times, be challenging and time consuming. The techniques for optimizing resin provisional restorations as described in this paper are pragmatic and will enhance the delivery of dental treatment.

  14. Entropy and the Predictability of Online Life

    Directory of Open Access Journals (Sweden)

    Roberta Sinatra

    2014-01-01

    Full Text Available Using mobile phone records and information theory measures, our daily lives have been recently shown to follow strict statistical regularities, and our movement patterns are, to a large extent, predictable. Here, we apply entropy and predictability measures to two datasets of the behavioral actions and the mobility of a large number of players in the virtual universe of a massive multiplayer online game. We find that movements in virtual human lives follow the same high levels of predictability as offline mobility, where future movements can, to some extent, be predicted well if the temporal correlations of visited places are accounted for. Time series of behavioral actions show similar high levels of predictability, even when temporal correlations are neglected. Entropy conditional on specific behavioral actions reveals that in terms of predictability, negative behavior has a wider variety than positive actions. The actions that contain the information to best predict an individual’s subsequent action are negative, such as attacks or enemy markings, while the positive actions of friendship marking, trade and communication contain the least amount of predictive information. These observations show that predicting behavioral actions requires less information than predicting the mobility patterns of humans for which the additional knowledge of past visited locations is crucial and that the type and sign of a social relation has an essential impact on the ability to determine future behavior.
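The entropy measures behind these predictability claims can be illustrated in miniature. This sketch uses plain Shannon entropy and a first-order conditional entropy as a simplified stand-in for the paper's estimators, on a made-up action sequence:

```python
import math
from collections import Counter

def entropy(seq):
    """Shannon entropy (bits) of the empirical symbol distribution."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def conditional_entropy(seq):
    """First-order conditional entropy H(X_t | X_{t-1}) in bits: how
    uncertain the next action is once the previous one is known.
    Lower values mean higher predictability."""
    pairs = Counter(zip(seq, seq[1:]))
    prev = Counter(seq[:-1])
    n = len(seq) - 1
    return -sum(c / n * math.log2(c / prev[a])
                for (a, b), c in pairs.items())

# Toy action log (A = attack, T = trade, C = communicate, F = friend):
actions = list("ATCATCATCATF")
print(round(entropy(actions), 3), round(conditional_entropy(actions), 3))
```

The conditional entropy is far below the unconditional one: once temporal correlations are accounted for, the next action is much more predictable, which is the paper's central point about both mobility and behavior.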

  15. Collaboratory for the Study of Earthquake Predictability

    Science.gov (United States)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure---the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  16. Predicting Well-Being in Europe?

    DEFF Research Database (Denmark)

    Hussain, M. Azhar

    2015-01-01

    Has the worst financial and economic crisis since the 1930s reduced the subjective wellbeing function's predictive power? Regression models for happiness are estimated for the three first rounds of the European Social Survey (ESS); 2002, 2004 and 2006. Several explanatory variables are significant...... happiness. Nevertheless, 73% of the predictions in 2008 and 57% of predictions in 2010 were within the margin of error. These correct prediction percentages are not unusually low - rather they are slightly higher than before the crisis. It is surprising that happiness predictions are not adversely affected...... by the crisis. On the other hand, results are consistent with the adaption hypothesis. The same exercise is conducted applying life satisfaction instead of happiness, but we reject, against expectation, that (more transient) happiness is harder to predict than life satisfaction. Fifteen ESS countries surveyed...

  17. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  18. Probabilistic approach to earthquake prediction.

    Directory of Open Access Journals (Sweden)

    G. D'Addezio

    2002-06-01

Full Text Available The evaluation of any earthquake forecast hypothesis requires the application of rigorous statistical methods. It implies a univocal definition of the model characterising the concerned anomaly or precursor, so that it can be objectively recognised in any circumstance and by any observer. A valid forecast hypothesis is expected to maximise successes and minimise false alarms. The probability gain associated with a precursor is also a popular way to estimate the quality of the predictions based on such precursor. Some scientists make use of a statistical approach based on the computation of the likelihood of an observed realisation of seismic events, and on the comparison of the likelihood obtained under different hypotheses. This method can be extended to algorithms that allow the computation of the density distribution of the conditional probability of earthquake occurrence in space, time and magnitude. Whatever method is chosen for building up a new hypothesis, the final assessment of its validity should be carried out by a test on a new and independent set of observations. The implementation of this test could, however, be problematic for seismicity characterised by long-term recurrence intervals. Even using the historical record, that may span time windows extremely variable between a few centuries to a few millennia, we have a low probability to catch more than one or two events on the same fault. Extending the record of earthquakes of the past back in time up to several millennia, paleoseismology represents a great opportunity to study how earthquakes recur through time and thus provide innovative contributions to time-dependent seismic hazard assessment. Sets of paleoseismologically dated earthquakes have been established for some faults in the Mediterranean area: the Irpinia fault in Southern Italy, the Fucino fault in Central Italy, the El Asnam fault in Algeria and the Skinos fault in Central Greece. By using the age of the

  19. Can we predict shoulder dystocia?

    Science.gov (United States)

    Revicky, Vladimir; Mukhopadhyay, Sambit; Morris, Edward P; Nieto, Jose J

    2012-02-01

To analyse the significance of risk factors and the possibility of prediction of shoulder dystocia. This was a retrospective cohort study. There were 9,767 vaginal deliveries at 37 or more weeks of gestation analysed during 2005-2007. The studied population included 234 deliveries complicated by shoulder dystocia. Shoulder dystocia was defined as a delivery that required additional obstetric manoeuvres to release the shoulders after gentle downward traction had failed. First, a univariate analysis was done to identify the factors that had a significant association with shoulder dystocia. Parity, age, gestation, induction of labour, epidural analgesia, birth weight, duration of second stage of labour and mode of delivery were the studied factors. All factors were then combined in a multivariate logistic regression analysis. Adjusted odds ratios (Adj. OR) with 95% confidence intervals (CI) were calculated. The incidence of shoulder dystocia was 2.4% (234/9,767). Only mode of delivery and birth weight were independent risk factors for shoulder dystocia. Parity, age, gestation, induction of labour, epidural analgesia and duration of second stage of labour were not independent risk factors. Ventouse delivery increases the risk of shoulder dystocia almost 3 times; forceps delivery, compared with ventouse delivery, increases the risk almost 3.4 times. Risk of shoulder dystocia is minimal with a birth weight of 3,000 g or less. It is difficult to foretell the exact birth weight and the mode of delivery; therefore, the occurrence of shoulder dystocia is highly unpredictable. Regular drills for shoulder dystocia and awareness of increased incidence with instrumental deliveries are important to reduce fetal and maternal morbidity and mortality.
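To see what the quoted adjusted odds ratios imply for absolute risk, one can push a baseline risk through the odds scale. Treating the 2.4% cohort incidence as the baseline for non-instrumental delivery is our simplification for illustration, not a figure from the study:

```python
def risk_with_or(baseline_risk, odds_ratio):
    """Convert a baseline risk and an odds ratio into the risk for the
    exposed group: odds are multiplied by the OR, then mapped back to
    a probability."""
    odds = baseline_risk / (1.0 - baseline_risk) * odds_ratio
    return odds / (1.0 + odds)

# Illustrative only: the ~3x (ventouse) and ~3.4x (forceps vs ventouse)
# odds ratios are quoted in the abstract; using the overall 2.4%
# incidence as the non-instrumental baseline is an assumption.
baseline = 0.024
ventouse = risk_with_or(baseline, 3.0)
forceps = risk_with_or(ventouse, 3.4)  # forceps OR is quoted vs ventouse
print(round(ventouse, 4), round(forceps, 4))
```

Even modest odds ratios move a rare event into a clinically meaningful risk range, which is why the abstract singles out instrumental deliveries for drills and awareness.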

  20. Lifestyle Markers Predict Cognitive Function.

    Science.gov (United States)

    Masley, Steven C; Roetzheim, Richard; Clayton, Gwendolyn; Presby, Angela; Sundberg, Kelley; Masley, Lucas V

    2017-01-01

    Rates of mild cognitive impairment and Alzheimer's disease are increasing rapidly. None of the current treatment regimens for Alzheimer's disease are effective in arresting progression. Lifestyle choices may prevent cognitive decline. This study aims to clarify which factors best predict cognitive function. This was a prospective cross-sectional analysis of 799 men and women undergoing health and cognitive testing every 1 to 3 years at an outpatient center. This study utilizes data collected from the first patient visit. Participant ages were 18 to 88 (mean = 50.7) years and the sample was 26.6% female and 73.4% male. Measurements were made of body composition, fasting laboratory and anthropometric measures, strength and aerobic fitness, nutrient and dietary intake, and carotid intimal media thickness (IMT). Each participant was tested with a computerized neurocognitive test battery. Cognitive outcomes were assessed in bivariate analyses using t-tests and correlation coefficients and in multivariable analysis (controlling for age) using multiple linear regression. The initial bivariate analyses showed better Neurocognitive Index (NCI) scores with lower age, greater fitness scores (push-up strength, VO2max, and exercise duration during treadmill testing), and lower fasting glucose levels. Better cognitive flexibility scores were also noted with younger age, lower systolic blood pressure, lower body fat, lower carotid IMT scores, greater fitness, and higher alcohol intake. After controlling for age, factors that remained associated with better NCI scores include no tobacco use, lower fasting glucose levels, and better fitness (aerobic and strength). Higher cognitive flexibility scores remained associated with greater aerobic and strength fitness, lower body fat, and higher intake of alcohol. Modifiable biomarkers that impact cognitive performance favorably include greater aerobic fitness and strength, lower blood sugar levels, greater alcohol intake, lower body

  1. SEIZURE PREDICTION: THE FOURTH INTERNATIONAL WORKSHOP

    Science.gov (United States)

    Zaveri, Hitten P.; Frei, Mark G.; Arthurs, Susan; Osorio, Ivan

    2010-01-01

    The recently convened Fourth International Workshop on Seizure Prediction (IWSP4) brought together a diverse international group of investigators, from academia and industry, including epileptologists, neurosurgeons, neuroscientists, computer scientists, engineers, physicists, and mathematicians who are conducting interdisciplinary research on the prediction and control of seizures. IWSP4 allowed the presentation and discussion of results, an exchange of ideas, an assessment of the status of seizure prediction, control and related fields and the fostering of collaborative projects. PMID:20674508

  2. Forecasting hotspots using predictive visual analytics approach

    Science.gov (United States)

    Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David

    2014-12-30

    A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.

  3. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  4. Protein secondary structure: category assignment and predictability

    DEFF Research Database (Denmark)

    Andersen, Claus A.; Bohr, Henrik; Brunak, Søren

    2001-01-01

    In the last decade, the prediction of protein secondary structure has been optimized using essentially one and the same assignment scheme known as DSSP. We present here a different scheme, which is more predictable. This scheme predicts directly the hydrogen bonds, which stabilize the secondary......-forward neural network with one hidden layer on a data set identical to the one used in earlier work....

  5. Applications of contact predictions to structural biology

    Directory of Open Access Journals (Sweden)

    Felix Simkovic

    2017-05-01

    Full Text Available Evolutionary pressure on residue interactions, intramolecular or intermolecular, that are important for protein structure or function can lead to covariance between the two positions. Recent methodological advances allow much more accurate contact predictions to be derived from this evolutionary covariance signal. The practical application of contact predictions has largely been confined to structural bioinformatics, yet, as this work seeks to demonstrate, the data can be of enormous value to the structural biologist working in X-ray crystallography, cryo-EM or NMR. Integrative structural bioinformatics packages such as Rosetta can already exploit contact predictions in a variety of ways. The contribution of contact predictions begins at construct design, where structural domains may need to be expressed separately and contact predictions can help to predict domain limits. Structure solution by molecular replacement (MR benefits from contact predictions in diverse ways: in difficult cases, more accurate search models can be constructed using ab initio modelling when predictions are available, while intermolecular contact predictions can allow the construction of larger, oligomeric search models. Furthermore, MR using supersecondary motifs or large-scale screens against the PDB can exploit information, such as the parallel or antiparallel nature of any β-strand pairing in the target, that can be inferred from contact predictions. Contact information will be particularly valuable in the determination of lower resolution structures by helping to assign sequence register. In large complexes, contact information may allow the identity of a protein responsible for a certain region of density to be determined and then assist in the orientation of an available model within that density. In NMR, predicted contacts can provide long-range information to extend the upper size limit of the technique in a manner analogous but complementary to experimental

  6. Predicting Process Behaviour using Deep Learning

    OpenAIRE

    Evermann, Joerg; Rehse, Jana-Rebecca; Fettke, Peter

    2016-01-01

    Predicting business process behaviour is an important aspect of business process management. Motivated by research in natural language processing, this paper describes an application of deep learning with recurrent neural networks to the problem of predicting the next event in a business process. This is both a novel method in process prediction, which has largely relied on explicit process models, and also a novel application of deep learning methods. The approach is evaluated on two real da...

  7. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

    The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR images was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then used in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE), calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26%. These results indicate that AV biofeedback improves prediction accuracy, which would result in increased efficiency of motion management techniques affected by system latencies used in radiotherapy.
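    The RMSE metric used above to quantify prediction error is straightforward; a minimal sketch (not the study's implementation) is:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between measured and predicted positions."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def percent_reduction(rmse_unguided, rmse_guided):
    """Relative RMSE improvement of guided over unguided breathing, in percent."""
    return 100.0 * (rmse_unguided - rmse_guided) / rmse_unguided
```

    The reported 26% average reduction corresponds to `percent_reduction` returning 26 when comparing the unguided and AV-guided errors.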

  8. Prediction methods and databases within chemoinformatics

    DEFF Research Database (Denmark)

    Jónsdóttir, Svava Osk; Jørgensen, Flemming Steen; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information...... about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability...

  9. Sports Tournament Predictions Using Direct Manipulation

    OpenAIRE

    Vuillemot, Romain; Perin, Charles

    2016-01-01

    An advanced interface for sports tournament predictions uses direct manipulation to allow users to make nonlinear predictions. Unlike previous interface designs, the interface helps users focus on their prediction tasks by enabling them to first choose a winner and then fill out the rest of the bracket. In real-world tests of the proposed interface (for the 2014 FIFA World Cup tournament and 2015/2016 UEFA Champions League), the authors validated the use of direct manipulation as an alternati...

  10. Understanding predictability and exploration in human mobility

    DEFF Research Database (Denmark)

    Cuttone, Andrea; Jørgensen, Sune Lehmann; González, Marta C.

    2018-01-01

    Predictive models for human mobility have important applications in many fields, including traffic control, ubiquitous computing, and contextual advertisement. The predictive performance of models in the literature varies quite broadly, from over 90% to under 40%. In this work we study which underlying...... strong influence on the accuracy of prediction. Finally we reveal that the exploration of new locations is an important factor in human mobility, and we measure that on average 20-25% of transitions are to new places, and approx. 70% of locations are visited only once. We discuss how these mechanisms...... are important factors limiting our ability to predict human mobility....
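    The exploration statistic described above (the share of transitions that go to never-before-visited places) and a simple next-place baseline can be sketched as follows; this is an illustrative reconstruction, not the authors' code:

```python
from collections import Counter, defaultdict

def exploration_fraction(locations):
    """Share of transitions whose destination was never visited before."""
    seen, new = set(), 0
    for i, loc in enumerate(locations):
        if i > 0 and loc not in seen:
            new += 1
        seen.add(loc)
    return new / (len(locations) - 1)

def markov_predict(history):
    """First-order Markov baseline: most frequent successor of the
    current location; falls back to the overall most visited place."""
    trans = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        trans[prev][nxt] += 1
    current = history[-1]
    if trans[current]:
        return trans[current].most_common(1)[0][0]
    return Counter(history).most_common(1)[0][0]
```

    No Markov-style predictor can score on the exploratory transitions, which is one way the measured 20-25% exploration rate caps achievable accuracy.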

  11. Stock market index prediction using neural networks

    Science.gov (United States)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index was used as a benchmark in our experiments, in which Radial Basis Function neural networks were designed to model these indices over the period from January 1988 to December 1992. The proposed model achieved over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions, and it also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for predicting stock market indices.
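    As a rough illustration of the Radial Basis Function idea (a toy 1-D Gaussian-kernel interpolation, not the authors' market model or parameters):

```python
import math

def gaussian(r, width=1.0):
    """Gaussian radial basis function of distance r."""
    return math.exp(-(r / width) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, width=1.0):
    """Exact RBF interpolation with one Gaussian centred at each sample."""
    A = [[gaussian(abs(xi - xj), width) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_predict(xs, weights, x, width=1.0):
    """Weighted sum of Gaussians centred at the training inputs."""
    return sum(w * gaussian(abs(x - xi), width) for w, xi in zip(weights, xs))
```

    A production RBF network would use far fewer centres than samples and a regularised least-squares fit; exact interpolation is used here only to keep the sketch short.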

  12. Decadal climate prediction: challenges and opportunities

    International Nuclear Information System (INIS)

    Hurrell, J W

    2008-01-01

    The scientific understanding of climate change is now sufficiently clear to show that climate change from global warming is already upon us, and the rate of change as projected exceeds anything seen in nature in the past 10,000 years. Uncertainties remain, however, especially regarding how climate will change at regional and local scales where the signal of natural variability is large. Addressing many of these uncertainties will require a movement toward high resolution climate system predictions, with a blurring of the distinction between shorter-term predictions and longer-term climate projections. The key is the realization that climate system predictions, regardless of timescale, will require initialization of coupled general circulation models with best estimates of the current observed state of the atmosphere, oceans, cryosphere, and land surface. Formidable challenges exist: for instance, what is the best method of initialization given imperfect observations and systematic errors in models? What effect does initialization have on climate predictions? What predictions should be attempted, and how would they be verified? Despite such challenges, the unrealized predictability that resides in slowly evolving phenomena, such as ocean current systems, is of paramount importance for society to plan and adapt for the next few decades. Moreover, initialized climate predictions will require stronger collaboration with shared knowledge, infrastructure and technical capabilities among those in the weather and climate prediction communities. The potential benefits include improved understanding and predictions on all time scales

  13. Deterministic prediction of surface wind speed variations

    Directory of Open Access Journals (Sweden)

    G. V. Drisya

    2014-11-01

    Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or the probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h, with a normalised RMSE (root mean square error) of less than 0.02, and reasonably accurate up to 3 h, with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations, for predicting wind speeds at 30-day intervals over 3 years, reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
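    One simple deterministic scheme consistent with the chaotic-dynamics framing above is the method of analogues: embed the series in delay coordinates, find the closest past state, and predict its successor. A minimal sketch (the paper's exact algorithm and parameters are not specified here):

```python
def analog_forecast(series, embed_dim=3, horizon=1):
    """Method of analogues: match the latest delay vector against all past
    delay vectors and return the value that followed the closest match."""
    m = embed_dim
    last = series[-m:]
    best_i, best_d = None, float("inf")
    # candidate windows must be followed by at least `horizon` known values
    for i in range(len(series) - m - horizon + 1):
        d = sum((a - b) ** 2 for a, b in zip(series[i:i + m], last))
        if d < best_d:
            best_i, best_d = i, d
    return series[best_i + m + horizon - 1]
```

    On a deterministic chaotic signal, nearby states have nearby futures over short horizons, which is why such methods work for 1-3 h forecasts but degrade as the horizon grows.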

  14. Implementation of short-term prediction

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L; Joensen, A; Giebel, G [and others]

    1999-03-01

    This paper gives a general overview of the results from an EU JOULE funded project ('Implementing short-term prediction at utilities', JOR3-CT95-0008). Reference is given to specialised papers where applicable. The goal of the project was to implement wind farm power output prediction systems in operational environments at a number of utilities in Europe. Two models were developed, one by Risoe and one by the Technical University of Denmark (DTU). Both prediction models used HIRLAM predictions from the Danish Meteorological Institute (DMI). (au) EFP-94; EU-JOULE. 11 refs.

  15. Stock price prediction using geometric Brownian motion

    Science.gov (United States)

    Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM

    2018-03-01

    Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before the prediction step, the expected-price formulation is derived and the confidence level is set at 95%. In stock price prediction using the geometric Brownian motion model, the algorithm starts by calculating the returns, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price and calculating the 95% confidence interval. Based on the research, the output analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, supported by a forecast MAPE value ≤ 20%.
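    The algorithm steps listed above (returns, volatility and drift estimation, forecast, MAPE) can be sketched as follows; the parameter conventions are one common choice, not necessarily the authors':

```python
import math

def estimate_gbm_params(prices, dt=1.0):
    """Estimate GBM drift mu and volatility sigma from a price history."""
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / (n - 1)
    sigma = math.sqrt(var / dt)
    mu = mean / dt + 0.5 * sigma ** 2  # drift of S itself, not of log S
    return mu, sigma

def expected_price(s0, mu, t):
    """E[S_t] = S_0 * exp(mu * t) under geometric Brownian motion."""
    return s0 * math.exp(mu * t)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

    A 95% interval for $S_t$ then follows from the lognormal distribution of prices, using $\ln S_t \sim N(\ln S_0 + (\mu - \sigma^2/2)t,\ \sigma^2 t)$.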

  16. Seasonal climate prediction for North Eurasia

    International Nuclear Information System (INIS)

    Kryjov, Vladimir N

    2012-01-01

    An overview of the current status of the operational seasonal climate prediction for North Eurasia is presented. It is shown that the performance of existing climate models is rather poor in seasonal prediction for North Eurasia. Multi-model ensemble forecasts are more reliable than single-model ones; however, for North Eurasia they tend to be close to climatological ones. Application of downscaling methods may improve predictions for some locations (or regions). However, general improvement of the reliability of seasonal forecasts for North Eurasia requires improvement of the climate prediction models. (letter)

  17. Predictions of High Energy Experimental Results

    Directory of Open Access Journals (Sweden)

    Comay E.

    2010-10-01

    Full Text Available Eight predictions of high energy experimental results are presented. The predictions include the $\Sigma^+$ charge radius and the results of two kinds of experiments using energetic pionic beams. In addition, predictions of the failure to find the following objects are presented: glueballs, pentaquarks, Strange Quark Matter, magnetic monopoles searched for by their direct interaction with charges, and the Higgs boson. The first seven predictions rely on the Regular Charge-Monopole Theory and the last one relies on mathematical inconsistencies of the Higgs Lagrangian density.

  18. Tail Risk Premia and Return Predictability

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Todorov, Viktor; Xu, Lai

    The variance risk premium, defined as the difference between actual and risk-neutralized expectations of the forward aggregate market variation, helps predict future market returns. Relying on a new essentially model-free estimation procedure, we show that much of this predictability may be attributed to time-varying economic uncertainty and changes in risk aversion, or market fears, respectively.

  19. Recent Advances in Predictive (Machine) Learning

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, J

    2004-01-24

    Prediction involves estimating the unknown value of an attribute of a system under study given the values of other measured attributes. In predictive (machine) learning, the prediction rule is derived from data consisting of previously solved cases. Most methods for predictive learning originated many years ago at the dawn of the computer age. Recently two new techniques have emerged that have revitalized the field: support vector machines and boosted decision trees. This paper provides an introduction to these two methods, tracing their respective ancestral roots to standard kernel methods and ordinary decision trees.
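    Boosted decision trees, one of the two techniques highlighted above, can be illustrated with AdaBoost over depth-1 trees (decision stumps); this is a textbook sketch, not the paper's code:

```python
import math

def stump_predict(stump, x):
    """Depth-1 tree: +/-sign label depending on one feature threshold."""
    feat, thresh, sign = stump
    return sign if x[feat] > thresh else -sign

def best_stump(X, y, w):
    """Exhaustively pick the stump minimising weighted classification error."""
    best, best_err = None, float("inf")
    for feat in range(len(X[0])):
        for thresh in sorted({row[feat] for row in X}):
            for sign in (1, -1):
                stump = (feat, thresh, sign)
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(stump, xi) != yi)
                if err < best_err:
                    best, best_err = stump, err
    return best, best_err

def adaboost(X, y, rounds=10):
    """AdaBoost: reweight samples toward the mistakes of earlier stumps."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        stump, err = best_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, stump))
        w = [wi * math.exp(-alpha * yi * stump_predict(stump, xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * stump_predict(stump, x) for alpha, stump in ensemble)
    return 1 if score >= 0 else -1
```

    Labels are assumed to be +1/-1; modern gradient-boosting variants replace the exponential reweighting with gradient steps on an arbitrary loss.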

  20. Final Technical Report: Increasing Prediction Accuracy.

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
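    A PV performance model of the kind described combines a system rating with irradiance and cell temperature; a highly simplified sketch (the 5 kW rating and the temperature coefficient below are illustrative assumptions, not Sandia's model):

```python
def pv_power(irradiance_w_m2, cell_temp_c, p_stc_w=5000.0, gamma=-0.004):
    """DC power: nameplate rating scaled by irradiance relative to STC
    (1000 W/m^2) and derated by the temperature coefficient gamma (1/degC)."""
    return p_stc_w * (irradiance_w_m2 / 1000.0) * (1.0 + gamma * (cell_temp_c - 25.0))

def daily_energy_kwh(hourly_irradiance, hourly_cell_temp, **kwargs):
    """Sum hourly power predictions into daily energy in kWh."""
    return sum(pv_power(g, t, **kwargs)
               for g, t in zip(hourly_irradiance, hourly_cell_temp)) / 1000.0
```

    Reducing uncertainty in each input (measured irradiance, temperature derating, system configuration) narrows the uncertainty of the energy prediction, which is precisely the project's stated goal.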