WorldWideScience

Sample records for tangentially geostrophic assumptions

  1. Tests of the frozen-flux and tangentially geostrophic assumptions using magnetic satellite data

    DEFF Research Database (Denmark)

    Chulliat, A.; Olsen, Nils; Sabaka, T.

    In 1984, Jean-Louis Le Mouël published a paper suggesting that the flow at the top of the Earth’s core is tangentially geostrophic, i.e., the Lorentz force is much smaller than the Coriolis force in this particular region of the core. This new assumption was subsequently used to discriminate among...
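
    For reference, the two constraints discussed in this record have standard textbook forms, written below in our own notation (B_r is the radial magnetic field at the core surface, u_H the horizontal flow, and θ the colatitude); this is a reminder of the standard relations, not a reproduction of the paper's equations.

```latex
% Frozen-flux radial induction equation at the core-mantle boundary
% (magnetic diffusion neglected):
\frac{\partial B_r}{\partial t} = -\nabla_H \cdot \left( \mathbf{u}_H \, B_r \right)

% Tangentially geostrophic constraint (Le Mouel, 1984),
% with \theta the colatitude:
\nabla_H \cdot \left( \mathbf{u}_H \cos\theta \right) = 0
```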

  2. Derivation of Inviscid Quasi-geostrophic Equation from Rotational Compressible Magnetohydrodynamic Flows

    Science.gov (United States)

    Kwon, Young-Sam; Lin, Ying-Chieh; Su, Cheng-Fang

    2018-04-01

    In this paper, we consider compressible models of magnetohydrodynamic flows, which give rise to a variety of mathematical problems in many areas. We derive a rigorous quasi-geostrophic equation governed by the magnetic field from rotational compressible magnetohydrodynamic flows with well-prepared initial data. This is the first derivation of a quasi-geostrophic equation governed by the magnetic field, and the proof is based on the relative entropy method. The paper covers two results: the existence of a unique local strong solution of the quasi-geostrophic equation with good regularity, and the derivation of the quasi-geostrophic equation itself.

  3. Random forcing of geostrophic motion in rotating stratified turbulence

    Science.gov (United States)

    Waite, Michael L.

    2017-12-01

    Random forcing of geostrophic motion is a common approach in idealized simulations of rotating stratified turbulence. Such forcing represents the injection of energy into large-scale balanced motion, and the resulting breakdown of quasi-geostrophic turbulence into inertia-gravity waves and stratified turbulence can shed light on the turbulent cascade processes of the atmospheric mesoscale. White noise forcing is commonly employed, which excites all frequencies equally, including frequencies much higher than the natural frequencies of large-scale vortices. In this paper, the effects of these high frequencies in the forcing are investigated. Geostrophic motion is randomly forced with red noise over a range of decorrelation time scales τ, from a few time steps to twice the large-scale vortex time scale. It is found that short τ (i.e., nearly white noise) results in about 46% more gravity wave energy than longer τ, despite the fact that waves are not directly forced. We argue that this effect is due to wave-vortex interactions, through which the high frequencies in the forcing are able to excite waves at their natural frequencies. It is concluded that white noise forcing should be avoided, even if it is only applied to the geostrophic motion, when a careful investigation of spontaneous wave generation is needed.
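
    A minimal sketch of the red-noise forcing described above, assuming an AR(1) (Ornstein-Uhlenbeck) process whose autocorrelation decays as exp(-t/τ); the function and parameter names are ours, not the paper's.

```python
import numpy as np

def red_noise_forcing(n_steps, dt, tau, amplitude=1.0, seed=0):
    """Generate AR(1) red noise with decorrelation time tau.

    As tau -> dt the series approaches white noise (all frequencies
    excited); larger tau suppresses frequencies above ~1/tau.
    """
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)  # lag-one autocorrelation
    f = np.zeros(n_steps)
    for n in range(1, n_steps):
        # sqrt(1 - a**2) keeps the stationary variance at amplitude**2
        f[n] = a * f[n - 1] + amplitude * np.sqrt(1 - a**2) * rng.standard_normal()
    return f

# Example: forcing decorrelated over twice a (hypothetical) vortex time scale T
T, dt = 10.0, 0.01
forcing = red_noise_forcing(n_steps=10_000, dt=dt, tau=2 * T)
```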

  4. Quasi-geostrophic dynamics in the presence of moisture gradients

    OpenAIRE

    Monteiro, Joy M.; Sukhatme, Jai

    2016-01-01

    The derivation of a quasi-geostrophic (QG) system from the rotating shallow water equations on a midlatitude beta-plane coupled with moisture is presented. Condensation is assumed to occur whenever the moisture at a point exceeds a prescribed saturation value. It is seen that a slow condensation time scale is required to obtain a consistent set of equations at leading order. Further, since the advecting wind fields are geostrophic, changes in moisture (and hence, precipitation) occur only ...
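
    The condensation rule sketched above is often implemented as a relaxation of moisture back to saturation over a finite condensation time; a schematic version, with all names (q, q_sat, tau_c) our own illustrative choices rather than the paper's notation:

```python
import numpy as np

def condensation_rate(q, q_sat, tau_c):
    """Precipitation sink: relax moisture toward saturation where q > q_sat.

    P = (q - q_sat) / tau_c in supersaturated regions, 0 elsewhere.
    A slow tau_c (long compared with the advective time scale) is what
    the abstract identifies as necessary for a consistent leading-order
    QG system.
    """
    return np.maximum(q - q_sat, 0.0) / tau_c

q = np.array([0.8, 1.05, 1.2])  # nondimensional moisture samples
print(condensation_rate(q, q_sat=1.0, tau_c=10.0))  # [0.    0.005 0.02 ]
```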

  5. Implementation of large-scale average geostrophic wind shear in WAsP12.1

    DEFF Research Database (Denmark)

    Floors, Rogier Ralph; Troen, Ib; Kelly, Mark C.

    The vertical extrapolation model described in the European Wind Atlas (Troen and Petersen, 1989) is modified to take into account the large-scale average geostrophic wind shear, which describes the effect of horizontal temperature gradients on the geostrophic wind. The method is implemented by extracting...... the average geostrophic wind shear from Climate Forecast System Reanalysis (CFSR) data, and the values of the nearest grid point are automatically used in the WAsP 12.1 user interface to provide better AEP predictions....

  6. On the consequences of strong stable stratification at the top of earth's outer core

    Science.gov (United States)

    Bloxham, Jeremy

    1990-01-01

    The consequences of strong stable stratification at the top of the earth's fluid outer core are considered, concentrating on the generation of the geomagnetic secular variation. It is assumed that the core near the core-mantle boundary is both strongly stably stratified and free of Lorentz forces: it is found that this set of assumptions severely limits the class of possible motions, none of which is compatible with the geomagnetic secular variation. Relaxing either assumption is adequate: tangentially geostrophic flows are consistent with the secular variation if the assumption that the core is strongly stably stratified is relaxed (while retaining the assumption that Lorentz forces are negligible); purely toroidal flows may explain the secular variation if Lorentz forces are included.

  7. Currents, Geostrophic, Aviso, 0.25 degrees, Global, Meridional

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Aviso Meridional Geostrophic Current is inferred from Sea Surface Height Deviation, climatological dynamic height, and basic fluid mechanics.

  8. Currents, Geostrophic, Aviso, 0.25 degrees, Global, Zonal

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Aviso Zonal Geostrophic Current is inferred from Sea Surface Height Deviation, climatological dynamic height, and basic fluid mechanics.
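
    Records 7 and 8 rest on the same geostrophic balance; the sketch below shows how zonal and meridional geostrophic currents are typically inferred from a gridded sea surface height field using finite differences on a regular latitude-longitude grid. This is a generic illustration, not Aviso's processing chain, and all names are ours.

```python
import numpy as np

G = 9.81           # gravitational acceleration, m/s^2
OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s
R_EARTH = 6.371e6  # mean Earth radius, m

def geostrophic_currents(ssh, lat, lon):
    """Geostrophic currents from sea surface height (m).

    u = -(g/f) d(ssh)/dy (zonal), v = (g/f) d(ssh)/dx (meridional),
    with f = 2*Omega*sin(lat). ssh is indexed [lat, lon]; the estimate
    breaks down near the equator, where f -> 0.
    """
    f = 2 * OMEGA * np.sin(np.deg2rad(lat))[:, None]
    dy = R_EARTH * np.deg2rad(np.gradient(lat))[:, None]
    dx = (R_EARTH * np.cos(np.deg2rad(lat))[:, None]
          * np.deg2rad(np.gradient(lon))[None, :])
    u = -(G / f) * np.gradient(ssh, axis=0) / dy
    v = (G / f) * np.gradient(ssh, axis=1) / dx
    return u, v
```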

  9. Effects of anisotropy in geostrophic turbulence

    Czech Academy of Sciences Publication Activity Database

    Hejda, Pavel; Reshetnyak, M.

    2009-01-01

    Roč. 177, č. 3-4 (2009), s. 152-160 ISSN 0031-9201 R&D Projects: GA AV ČR IAA300120704 Institutional research plan: CEZ:AV0Z30120515 Keywords : liquid core * thermal convection * geostrophic balance * cascade processes Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 1.993, year: 2009

  10. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  11. Do uniform tangential interfacial stresses enhance adhesion?

    Science.gov (United States)

    Menga, Nicola; Carbone, Giuseppe; Dini, Daniele

    2018-03-01

    We present theoretical arguments, based on linear elasticity and thermodynamics, to show that interfacial tangential stresses in sliding adhesive soft contacts may lead to a significant increase of the effective energy of adhesion. A sizable expansion of the contact area is predicted in conditions corresponding to such scenario. These results are easily explained and are valid under the assumptions that: (i) sliding at the interface does not lead to any loss of adhesive interaction and (ii) spatial fluctuations of frictional stresses can be considered negligible. Our results are seemingly supported by existing experiments, and show that frictional stresses may lead to an increase of the effective energy of adhesion depending on which conditions are established at the interface of contacting bodies in the presence of adhesive forces.

  12. Arctic Ocean surface geostrophic circulation 2003–2014

    Directory of Open Access Journals (Sweden)

    T. W. K. Armitage

    2017-07-01

    Monitoring the surface circulation of the ice-covered Arctic Ocean is generally limited in space, time or both. We present a new 12-year record of geostrophic currents at monthly resolution in the ice-covered and ice-free Arctic Ocean derived from satellite radar altimetry and characterise their seasonal to decadal variability from 2003 to 2014, a period of rapid environmental change in the Arctic. Geostrophic currents around the Arctic basin increased in the late 2000s, with the largest increases observed in summer. Currents in the southeastern Beaufort Gyre accelerated in late 2007 with higher current speeds sustained until 2011, after which they decreased to speeds representative of the period 2003–2006. The strength of the northwestward current in the southwest Beaufort Gyre more than doubled between 2003 and 2014. This pattern of changing currents is linked to shifting of the gyre circulation to the northwest during the time period. The Beaufort Gyre circulation and Fram Strait current are strongest in winter, modulated by the seasonal strength of the atmospheric circulation. We find high eddy kinetic energy (EKE) congruent with features of the seafloor bathymetry that are greater in winter than summer, and estimates of EKE and eddy diffusivity in the Beaufort Sea are consistent with those predicted from theoretical considerations. The variability of Arctic Ocean geostrophic circulation highlights the interplay between seasonally variable atmospheric forcing and ice conditions, on a backdrop of long-term changes to the Arctic sea ice–ocean system. Studies point to various mechanisms influencing the observed increase in Arctic Ocean surface stress, and hence geostrophic currents, in the 2000s – e.g. decreased ice concentration/thickness, changing atmospheric forcing, changing ice pack morphology; however, more work is needed to refine the representation of atmosphere–ice–ocean coupling in models before we can fully

  13. Referencing geostrophic velocities using ADCP data

    Directory of Open Access Journals (Sweden)

    Isis Comas-Rodríguez

    2010-06-01

    Acoustic Doppler Current Profilers (ADCPs) have proven to be a useful oceanographic tool in the study of ocean dynamics. Data from D279, a transatlantic hydrographic cruise carried out in spring 2004 along 24.5°N, were processed, and lowered ADCP (LADCP) bottom track data were used to assess the choice of reference velocity for geostrophic calculations. The reference velocities from different combinations of ADCP data were compared to one another and a reference velocity was chosen based on the LADCP data. The barotropic tidal component was subtracted to provide a final reference velocity estimated by LADCP data. The results of the velocity fields are also shown. Further studies involving inverse solutions will include the reference velocity calculated here.
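
    The referencing step described here reduces, schematically, to offsetting the relative geostrophic profile so that it matches the (detided) LADCP velocity at a chosen reference level; geostrophic shear is unaffected by the constant. A minimal sketch with illustrative names:

```python
import numpy as np

def reference_geostrophic_profile(v_rel, depths, v_ladcp_ref, z_ref):
    """Convert a relative geostrophic velocity profile to an absolute one.

    v_rel: velocity relative to an arbitrary reference level (m/s)
    v_ladcp_ref: LADCP-derived velocity (barotropic tide removed) at
    depth z_ref. The whole profile is shifted by a single constant.
    """
    i_ref = np.argmin(np.abs(np.asarray(depths) - z_ref))
    return v_rel + (v_ladcp_ref - v_rel[i_ref])

depths = np.array([0.0, 500.0, 1000.0, 2000.0])
v_rel = np.array([0.20, 0.10, 0.05, 0.00])  # relative to 2000 m
print(reference_geostrophic_profile(v_rel, depths, v_ladcp_ref=-0.02, z_ref=2000.0))
```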

  14. Downscaling ocean conditions: Experiments with a quasi-geostrophic model

    Science.gov (United States)

    Katavouta, A.; Thompson, K. R.

    2013-12-01

    The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid approach and spectral nudging to practical ocean forecasting, and to projecting changes in ocean conditions on climate time scales, is discussed briefly.

  15. Retinal Changes Induced by Epiretinal Tangential Forces

    Directory of Open Access Journals (Sweden)

    Mario R. Romano

    2015-01-01

    Two kinds of forces are active in vitreoretinal traction diseases: tangential and anterior-posterior forces. However, tangential forces are less characterized and classified in the literature than the anterior-posterior ones. Tangential epiretinal forces are mainly due to anomalous posterior vitreous detachment (PVD), vitreoschisis, vitreopapillary adhesion (VPA), and epiretinal membranes (ERMs). Anomalous PVD plays a key role in the formation of the tangential vectorial forces on the retinal surface as a consequence of gel liquefaction (synchysis) without sufficient and fast vitreous dehiscence at the vitreoretinal interface. The anomalous and persistent adherence of the posterior hyaloid to the retina can lead to vitreomacular/vitreopapillary adhesion or to the formation of avascular fibrocellular tissue (ERM) resulting from the proliferation and transdifferentiation of hyalocytes resident in the cortical vitreous remnants after vitreoschisis. A correct interpretation of the forces involved in epiretinal tangential traction helps in a better definition of the diagnosis, progression, prognosis, and surgical outcomes of the vitreomacular interface.

  16. What can asymptotic expansions tell us about large-scale quasi-geostrophic anticyclonic vortices?

    Directory of Open Access Journals (Sweden)

    A. Stegner

    1995-01-01

    The problem of large-scale quasi-geostrophic anticyclonic vortices is studied in the framework of the barotropic rotating shallow-water equations on the β-plane. A systematic approach based on multiple-scale asymptotic expansions is used, leading to a hierarchy of governing equations for the large-scale vortices depending on their characteristic size, velocity and free-surface elevation. Among them are the Charney-Obukhov equation, the intermediate geostrophic model equation, the frontal dynamics equation and a new nonlinear quasi-geostrophic equation. We look for steadily drifting axisymmetric anticyclonic solutions and find them in a consistent way only in this last equation. These solutions are soliton-like in the sense that the effects of weak nonlinearity and dispersion balance each other. The same regimes on the paraboloidal β-plane are studied, all giving a negative result as concerns axisymmetric steady solutions, except for a strong-elevation case where any circular profile is found to be steadily propagating within the accuracy of the approximation.

  17. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    Science.gov (United States)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally given at a much lower frequency than the model time step (e.g., 6-hourly analyses), with basic interpolation between the fields, the optimum nudging time differs from zero, while remaining smaller than the predictability time.
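
    The two nudging variants compared in this record can be summarized in a few lines: indiscriminate nudging relaxes the full field toward the driving field, while spectral nudging relaxes only wavenumbers below a cutoff. A schematic tendency term, with names (tau_n, k_cut) of our own choosing:

```python
import numpy as np

def nudging_tendency(q, q_driving, tau_n, k_cut=None):
    """Return the dq/dt contribution from nudging toward a driving field.

    k_cut=None: indiscriminate nudging (all scales relaxed).
    k_cut set:  spectral nudging (only modes with |k| <= k_cut relaxed).
    """
    diff = q_driving - q
    if k_cut is None:
        return diff / tau_n
    ky = np.fft.fftfreq(q.shape[0])[:, None]
    kx = np.fft.fftfreq(q.shape[1])[None, :]
    mask = np.hypot(kx, ky) <= k_cut
    return np.real(np.fft.ifft2(np.fft.fft2(diff) * mask)) / tau_n
```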

  18. Experimental Highlights upon Tangential Percussions in Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Stelian Alaci

    2014-12-01

    Full Text Available The paper presents a proposed method for underlying the presence of tangential percussions occurring in multibody systems. Tangential percussions define a relatively newly introduced concept required by the necessity of explaining the sudden change in the state of motion for two bodies interacting only on a direction from the common tangent plane. In robotics domain, normal and tangential percussions are widely met in the case of robotic hands in the moment of contacting the manipulated object.

  19. Greater Role of Geostrophic Currents on Ekman Dynamics in the Western Arctic Ocean as a Mechanism for Beaufort Gyre Stabilization

    Science.gov (United States)

    Steele, M.; Zhong, W.; Zhang, J.; Zhao, J.

    2017-12-01

    Seven different methods, with and without including geostrophic currents, were used to explore Ekman dynamics in the western Arctic Ocean for the period 1992-2014. Results show that surface geostrophic currents have been increasing and are much stronger than Ekman layer velocities in recent years (2003-2014) when the oceanic Beaufort Gyre (BG) is spinning up in the region. The new methods that include geostrophic currents result in more realistic Ekman pumping velocities than a previous iterative method that does not consider geostrophic currents and therefore overestimates Ekman pumping velocities by up to 52% in the central area of the BG over the period 2003-2014. When the BG is spinning up as seen in recent years, geostrophic currents become stronger, which tend to modify the ice-ocean stress and to cause an Ekman divergence that counteracts wind-driven Ekman convergence in the Canada Basin. This is a mechanism we have identified to play an important and growing role in stabilizing the Ekman convergence and therefore the BG in recent years. This mechanism may be used to explain three scenarios that describe the interplay of changes in wind forcing, sea ice motion, and geostrophic currents that control the variability of the Ekman dynamics in the central BG during 1992-2014. Results also reveal several upwelling regions in the southern and northern Canada Basin and the Chukchi Abyssal Plain which may play a significant role in biological processes in these regions.

  20. Greater Role of Geostrophic Currents in Ekman Dynamics in the Western Arctic Ocean as a Mechanism for Beaufort Gyre Stabilization

    Science.gov (United States)

    Zhong, Wenli; Steele, Michael; Zhang, Jinlun; Zhao, Jinping

    2018-01-01

    Seven different methods, with and without including geostrophic currents, were used to explore Ekman dynamics in the western Arctic Ocean for the period 1992-2014. Results show that surface geostrophic currents have been increasing and are much stronger than Ekman layer velocities in recent years (2003-2014) when the oceanic Beaufort Gyre (BG) is spinning up in the region. The new methods that include geostrophic currents result in more realistic Ekman pumping velocities than a previous iterative method that does not consider geostrophic currents and therefore overestimates Ekman pumping velocities by up to 52% in the central area of the BG over the period 2003-2014. When the BG is spinning up as seen in recent years, geostrophic currents become stronger, which tend to modify the ice-ocean stress and moderate the wind-driven Ekman convergence in the Canada Basin. This is a mechanism we have identified to play an important and growing role in stabilizing the Ekman convergence and therefore the BG in recent years. This mechanism may be used to explain three scenarios that describe the interplay of changes in wind forcing, sea ice motion, and geostrophic currents that control the variability of the Ekman dynamics in the central BG during 1992-2014. Results also reveal several upwelling regions in the southern and northern Canada Basin and the Chukchi Abyssal Plain which may play a significant role in physical and biological processes in these regions.
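
    The key step in both versions of this study (records 19 and 20) is that the ice-ocean stress is evaluated from the ice velocity relative to the geostrophic current before taking the stress curl. A schematic of that computation on a uniform grid, with all names and the drag coefficient illustrative:

```python
import numpy as np

RHO = 1027.0  # seawater density, kg/m^3

def ice_ocean_stress(u_ice, v_ice, u_g, v_g, c_d=5.5e-3):
    """Quadratic drag on the ice velocity relative to the geostrophic current."""
    du, dv = u_ice - u_g, v_ice - v_g
    speed = np.hypot(du, dv)
    return RHO * c_d * speed * du, RHO * c_d * speed * dv

def ekman_pumping(taux, tauy, f, dx, dy):
    """Ekman pumping w_e = curl(tau / (rho * f)); positive means upwelling.

    taux, tauy: stress components (N/m^2) as 2-D arrays indexed [y, x];
    f may be a scalar or an array broadcastable to the same shape.
    """
    return (np.gradient(tauy / (RHO * f), dx, axis=1)
            - np.gradient(taux / (RHO * f), dy, axis=0))
```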

  1. The geostrophic velocity field in shallow water over topography

    Science.gov (United States)

    Charnock, Henry; Killworth, Peter D.

    1998-01-01

    A recent note (Hopkins, T.S., 1996. A note on the geostrophic velocity field referenced to a point. Continental Shelf Research 16, 1621-1630) suggests a method for evaluating absolute pressure gradients in stratified water over topography. We demonstrate that this method requires no along-slope bottom velocity, in contradiction to what is usually observed, and that mass is not conserved.

  2. Bi-tangential hybrid IMRT for sparing the shoulder in whole breast irradiation.

    Science.gov (United States)

    Farace, P; Deidda, M A; Iamundo de Curtis, I; Deiana, E; Farigu, R; Lay, G; Porru, S

    2013-11-01

    A bi-tangential technique is proposed to reduce undesired doses to the shoulder produced by standard tangential irradiation. A total of 6 patients affected by shoulder pain and reduced functional capacity after whole-breast irradiation were retrospectively analysed. The standard tangential plan used for treatment was compared with (1) a single bi-tangential plan where, to spare the shoulder, the lateral open tangent was split into two half-beams at isocentre, with the superior portion rotated by 10-20° medially with respect to the standard lateral beam; (2) a double bi-tangential plan, where both the tangential open beams were split. The planning target volume (PTV) coverage and the dose to the portion of muscles and axilla included in the standard tangential beams were compared. PTV95 % of standard plan (91.9 ± 3.8) was not significantly different from single bi-tangential plan (91.8 ± 3.4); a small but significant (p < 0.01) decrease was observed with the double bi-tangential plan (90.1 ± 3.7). A marked dose reduction to the muscle was produced by the single bi-tangential plan around 30-40 Gy. The application of the double bi-tangential technique further reduced the volume receiving around 20 Gy, but did not markedly affect the higher doses. The dose to the axilla was reduced both in the single and the double bi-tangential plans. The single bi-tangential technique would have been able to reduce the dose to shoulder and axilla, without compromising target coverage. This simple technique is valuable for irradiation after axillary lymph node dissection or in patients without dissection due to negative or low-volume sentinel lymph node disease.

  3. Bi-tangential hybrid IMRT for sparing the shoulder in whole breast irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Farace, P.; Deidda, M. A.; Iamundo de Curtis, I.; Deiana, E.; Farigu, R.; Lay, G.; Porru, S. [Regional Oncological Hospital, Cagliari (Italy). Dept. of Radio-Oncology

    2013-11-15

    Background and purpose: A bi-tangential technique is proposed to reduce undesired doses to the shoulder produced by standard tangential irradiation. Patients and methods: A total of 6 patients affected by shoulder pain and reduced functional capacity after whole-breast irradiation were retrospectively analysed. The standard tangential plan used for treatment was compared with (1) a single bi-tangential plan where, to spare the shoulder, the lateral open tangent was split into two half-beams at isocentre, with the superior portion rotated by 10-20° medially with respect to the standard lateral beam; (2) a double bi-tangential plan, where both the tangential open beams were split. The planning target volume (PTV) coverage and the dose to the portion of muscles and axilla included in the standard tangential beams were compared. Results: PTV95 % of standard plan (91.9 ± 3.8) was not significantly different from single bi-tangential plan (91.8 ± 3.4); a small but significant (p < 0.01) decrease was observed with the double bi-tangential plan (90.1 ± 3.7). A marked dose reduction to the muscle was produced by the single bi-tangential plan around 30-40 Gy. The application of the double bi-tangential technique further reduced the volume receiving around 20 Gy, but did not markedly affect the higher doses. The dose to the axilla was reduced both in the single and the double bi-tangential plans. Conclusion: The single bi-tangential technique would have been able to reduce the dose to shoulder and axilla, without compromising target coverage. This simple technique is valuable for irradiation after axillary lymph node dissection or in patients without dissection due to negative or low-volume sentinel lymph node disease. (orig.)

  4. Bi-tangential hybrid IMRT for sparing the shoulder in whole breast irradiation

    International Nuclear Information System (INIS)

    Farace, P.; Deidda, M.A.; Iamundo de Curtis, I.; Deiana, E.; Farigu, R.; Lay, G.; Porru, S.

    2013-01-01

    Background and purpose: A bi-tangential technique is proposed to reduce undesired doses to the shoulder produced by standard tangential irradiation. Patients and methods: A total of 6 patients affected by shoulder pain and reduced functional capacity after whole-breast irradiation were retrospectively analysed. The standard tangential plan used for treatment was compared with (1) a single bi-tangential plan where, to spare the shoulder, the lateral open tangent was split into two half-beams at isocentre, with the superior portion rotated by 10-20° medially with respect to the standard lateral beam; (2) a double bi-tangential plan, where both the tangential open beams were split. The planning target volume (PTV) coverage and the dose to the portion of muscles and axilla included in the standard tangential beams were compared. Results: PTV95 % of standard plan (91.9 ± 3.8) was not significantly different from single bi-tangential plan (91.8 ± 3.4); a small but significant (p < 0.01) decrease was observed with the double bi-tangential plan (90.1 ± 3.7). A marked dose reduction to the muscle was produced by the single bi-tangential plan around 30-40 Gy. The application of the double bi-tangential technique further reduced the volume receiving around 20 Gy, but did not markedly affect the higher doses. The dose to the axilla was reduced both in the single and the double bi-tangential plans. Conclusion: The single bi-tangential technique would have been able to reduce the dose to shoulder and axilla, without compromising target coverage. This simple technique is valuable for irradiation after axillary lymph node dissection or in patients without dissection due to negative or low-volume sentinel lymph node disease. (orig.)

  5. Quasi-Geostrophic Diagnosis of Mixed-Layer Dynamics Embedded in a Mesoscale Turbulent Field

    Science.gov (United States)

    Chavanne, C. P.; Klein, P.

    2016-02-01

    A new quasi-geostrophic model has been developed to diagnose the three-dimensional circulation, including the vertical velocity, in the upper ocean from high-resolution observations of sea surface height and buoyancy. The formulation for the adiabatic component departs from the classical surface quasi-geostrophic framework considered before, since it takes into account the stratification within the surface mixed layer, which is usually much weaker than that in the ocean interior. To achieve this, the model approximates the ocean with two constant-stratification layers: a finite-thickness surface layer (or the mixed layer) and an infinitely deep interior layer. It is shown that the leading-order adiabatic circulation is entirely determined if both the surface streamfunction and buoyancy anomalies are considered. The surface layer further includes a diabatic dynamical contribution. The parameterization of diabatic vertical velocities is based on their restoring impact on the thermal-wind balance that is perturbed by turbulent vertical mixing of momentum and buoyancy. The model's skill in reproducing the three-dimensional circulation in the upper ocean from surface data is checked against the output of a high-resolution primitive-equation numerical simulation. Correlations between simulated and diagnosed vertical velocities are significantly improved in the mixed layer for the new model compared to the classical surface quasi-geostrophic model, reaching 0.9 near the surface.
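
    For context, the classical surface quasi-geostrophic (SQG) reconstruction that this record improves upon can be written in a few lines: with zero interior potential vorticity and constant stratification N, each Fourier mode of the streamfunction decays exponentially downward from the surface buoyancy field. A minimal spectral sketch on a doubly periodic grid (names ours; the paper's mixed-layer formulation is more involved):

```python
import numpy as np

def sqg_streamfunction(b_s, z, N, f0, dx):
    """Classical SQG: psi_hat(k, z) = b_hat(k) * exp(N*|k|*z/f0) / (N*|k|).

    b_s: surface buoyancy anomaly (m/s^2), 2-D doubly periodic array
    z:   depth at which to evaluate psi (z <= 0)
    Valid for constant N and zero interior PV.
    """
    ny, nx = b_s.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    K = np.hypot(*np.meshgrid(kx, ky))
    K[0, 0] = 1.0                      # placeholder; mean mode zeroed below
    psi_hat = np.fft.fft2(b_s) * np.exp(N * K * z / f0) / (N * K)
    psi_hat[0, 0] = 0.0                # streamfunction defined up to a constant
    return np.real(np.fft.ifft2(psi_hat))
```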

  6. Pneumothorax in intensive-care patients: Ranking of tangential views

    International Nuclear Information System (INIS)

    Jantsch, H.; Winkler, M.; Pichler, W.; Mauritz, W.; Lechner, G.; Vienna Univ.

    1990-01-01

    In 55 intensive-care patients an additional tangential view of the chest was taken to demonstrate or exclude a pneumothorax in patients with sudden deterioration of gas exchange and a negative AP chest x-ray, if there was a suspicion of pneumothorax or a confirmed small pneumothorax in the AP view. In 14 of 42 cases (33.3%) with a negative or suspected AP chest x-ray, the tangential view revealed a pneumothorax. 6 of these 14 pneumothoraces were under tension. In 7 out of 11 patients (63.6%) with a small pneumothorax, the tangential view additionally showed a tension pneumothorax. (orig.)

  7. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    Science.gov (United States)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of the indiscriminate nudging time on the large and small scales of an idealized limited area model simulation. The limited area model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error at both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed for a nudging time close to the predictability time.

  8. Errors of Mean Dynamic Topography and Geostrophic Current Estimates in China's Marginal Seas from GOCE and Satellite Altimetry

    DEFF Research Database (Denmark)

    Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar

    2014-01-01

    The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and satellite altimetry can provide very detailed and accurate estimates of the mean dynamic topography (MDT) and geostrophic currents in China's marginal seas, using, for example, the newest high-resolution GOCE gravity field model GO-CONS-GCF-2-TIM-R4 and the new Centre National d'Etudes Spatiales mean sea surface model MSS_CNES_CLS_11 from satellite altimetry. However, errors and uncertainties of MDT and geostrophic current estimates from satellite observations are not generally quantified. In this paper, errors and uncertainties of MDT and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results......

  9. Tangential inlet supersonic separators: a novel apparatus for gas purification

    DEFF Research Database (Denmark)

    Wen, Chuang; Walther, Jens Honore; Yang, Yan

    2016-01-01

    A novel supersonic separator with a tangential inlet is designed to remove the condensable components from gas mixtures. The dynamic parameters of natural gas in the supersonic separation process are numerically calculated using the Reynolds stress turbulence model with the Peng-Robinson real gas...... be generated by the tangential inlet, and it increases to a maximum of 200 m/s at the nozzle throat due to the decrease of the nozzle area in the converging part. The tangential velocity maintains a value of about 160 m/s at the nozzle exit, and correspondingly generates a centrifugal acceleration of 3......

  10. The tangential velocity of M31: CLUES from constrained simulations

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Courtois, Hélène; Tully, R. Brent

    2016-07-01

    Determining the precise value of the tangential component of the velocity of M31 is a non-trivial astrophysical issue that relies on complicated modelling. This has recently led to conflicting estimates, obtained by several groups that used different methodologies and assumptions. This Letter addresses the issue by computing a Bayesian posterior distribution function of this quantity, in order to measure the compatibility of those estimates with Λ cold dark matter (ΛCDM). This is achieved using an ensemble of Local Group (LG) look-alikes collected from a set of constrained simulations (CSs) of the local Universe, and a standard unconstrained ΛCDM simulation. The latter allows us to build a control sample of LG-like pairs and to single out the influence of the environment on our results. We find that neither estimate is at odds with ΛCDM; however, whereas CSs favour higher values of v_tan, the reverse is true for estimates based on LG samples gathered from unconstrained simulations, which overlook the environmental element.
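
    A schematic of the kind of posterior construction this Letter describes: weight simulated LG look-alikes by how well their observables match a measurement, then density-estimate the tangential velocity. Everything below is illustrative (the sample distributions, the measured value and its error are placeholders, not the paper's data):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
# Hypothetical (v_rad, v_tan) samples, in km/s, from LG look-alike pairs
v_rad = rng.normal(-100.0, 40.0, 2000)
v_tan = rng.gamma(2.0, 30.0, 2000)

# Gaussian likelihood weights for a placeholder measured radial velocity
v_rad_obs, sigma_obs = -110.0, 5.0
w = np.exp(-0.5 * ((v_rad - v_rad_obs) / sigma_obs) ** 2)

posterior = gaussian_kde(v_tan, weights=w / w.sum())
grid = np.linspace(0.0, 200.0, 400)
print("posterior mode of v_tan ~", grid[np.argmax(posterior(grid))], "km/s")
```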

  11. A Unified Model of Geostrophic Adjustment and Frontogenesis

    Science.gov (United States)

    Taylor, John; Shakespeare, Callum

    2013-11-01

    Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we present a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.

  12. Kinematic validation of a quasi-geostrophic model for the fast dynamics in the Earth's outer core

    Science.gov (United States)

    Maffei, S.; Jackson, A.

    2017-09-01

    We derive a quasi-geostrophic (QG) system of equations suitable for the description of the Earth's core dynamics on interannual to decadal timescales. Over these timescales, rotation is assumed to be the dominant force and fluid motions are strongly invariant along the direction parallel to the rotation axis. The diffusion-free QG system derived here is similar to the one derived in Canet et al., but the projection of the governing equations on the equatorial disc is handled via vertical integration, and mass conservation is applied to the velocity field. Here we carefully analyse the properties of the resulting equations and validate them by neglecting the action of the Lorentz force in the momentum equation. We derive a novel analytical solution describing the evolution of the magnetic field under these assumptions in the presence of a purely azimuthal flow, and an alternative formulation that allows us to numerically solve the evolution equations with a finite element method. The excellent agreement we found with the analytical solution proves that numerical integration of the QG system is possible and that it preserves important physical properties of the magnetic field. Implementation of magnetic diffusion is also briefly considered.

  13. Large Eddy Simulations of a Bottom Boundary Layer Under a Shallow Geostrophic Front

    Science.gov (United States)

    Bateman, S. P.; Simeonov, J.; Calantoni, J.

    2017-12-01

    The unstratified surf zone and the stratified shelf waters are often separated by dynamic fronts that can strongly impact the character of the Ekman bottom boundary layer. Here, we use large eddy simulations to study the turbulent bottom boundary layer associated with a geostrophic current on a stratified shelf of uniform depth. The simulations are initialized with a spatially uniform vertical shear that is in geostrophic balance with a pressure gradient due to a linear horizontal temperature variation. Superposed on the temperature front is a stable vertical temperature gradient. As turbulence develops near the bottom, the turbulence-induced mixing gradually erodes the initial uniform temperature stratification and a well-mixed layer grows in height until the turbulence becomes fully developed. The simulations provide the spatial distribution of the turbulent dissipation and the Reynolds stresses in the fully developed boundary layer. We vary the initial linear stratification and investigate its effect on the height of the bottom boundary layer and the turbulence statistics. The results are compared to previous models and simulations of stratified bottom Ekman layers.

  14. Sea level anomaly on the Patagonian continental shelf: Trends, annual patterns and geostrophic flows

    Science.gov (United States)

    Saraceno, M.; Piola, A. R.; Strub, P. T.

    2016-01-01

    We study the annual patterns and linear trend of satellite sea level anomaly (SLA) over the southwest South Atlantic continental shelf (SWACS) between 54°S and 36°S. Results show that south of 42°S the thermal steric effect explains nearly 100% of the annual amplitude of the SLA, while north of 42°S it explains less than 60%. This difference is due to the halosteric contribution. The annual wind variability plays a minor role over the whole continental shelf. The temporal linear trend in SLA ranges between 1 and 5 mm/yr (95% confidence level). The largest linear trends are found north of 39°S, at 42°S and at 50°S. We propose that in the northern region the large positive linear trends are associated with local changes in the density field caused by advective effects in response to a southward displacement of the South Atlantic High. The causes of the relatively large SLA trends in the two southern coastal regions are discussed as a function of meridional wind stress and river discharge. Finally, we combined the annual cycle of SLA with the mean dynamic topography to estimate the absolute geostrophic velocities. This approach provides the first comprehensive description of the seasonal component of SWACS circulation based on satellite observations. The general circulation of the SWACS is northeastward, with stronger/weaker geostrophic currents in austral summer/winter. At all latitudes, geostrophic velocities are larger (up to 20 cm/s) close to the shelf-break and decrease toward the coast. This spatio-temporal pattern is more intense north of 45°S. PMID:27840784

  15. Assessment of Set-up Accuracy in Tangential Breast Treatment Using Electronic Portal Imaging Device

    International Nuclear Information System (INIS)

    Lee, Byung Koo; Kang, Soo Man

    2012-01-01

    The aim of this study was to investigate the setup accuracy for tangential breast treatment patients using electronic portal images and 2-D reconstruction images. Twenty-two patients undergoing tangential breast treatment were studied. To explore the setup accuracy, distances between chosen landmarks were taken as reference parameters. The difference between reference parameters measured on simulation films and on electronic portal images (EPIs) was calculated as the setup error. A total of 22 simulation films and 110 EPIs were evaluated. In the tangential fields, the calculated reference parameters were the central lung distance (CLD), central soft-tissue distance (CSTD), above lung distance (ALD) and below lung distance (BLD). In the medial tangential field, the average difference values for these parameters were 1.0, -6.4, -2.1 and 2.0, respectively, and the corresponding values were 1.5, 2.3, 4.1 and 1.1, respectively. In the lateral tangential field, the average difference values for these parameters were -1.5, -4.3, -2.7 and -1.3, respectively, and the corresponding values were 3.3, 2.1, 2.9 and 2.5, respectively. CLD, CSTD, ALD and BLD in the tangential fields are easily identifiable and are helpful for detecting setup errors using EPIs in patients undergoing tangential breast radiotherapy treatment.

  16. Assessment of Set-up Accuracy in Tangential Breast Treatment Using Electronic Portal Imaging Device

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byung Koo [Dept. of Radiation Oncology, Korea University Anam Hospital, Seoul (Korea, Republic of); Kang, Soo Man [Dept. of Radiation Oncology, Korea University Gospel Hospital, Seoul (Korea, Republic of)

    2012-09-15

    The aim of this study was to investigate the setup accuracy for tangential breast treatment patients using electronic portal images and 2-D reconstruction images. Twenty-two patients undergoing tangential breast treatment were studied. To explore the setup accuracy, distances between chosen landmarks were taken as reference parameters. The difference between reference parameters measured on simulation films and on electronic portal images (EPIs) was calculated as the setup error. A total of 22 simulation films and 110 EPIs were evaluated. In the tangential fields, the calculated reference parameters were the central lung distance (CLD), central soft-tissue distance (CSTD), above lung distance (ALD) and below lung distance (BLD). In the medial tangential field, the average difference values for these parameters were 1.0, -6.4, -2.1 and 2.0, respectively, and the corresponding values were 1.5, 2.3, 4.1 and 1.1, respectively. In the lateral tangential field, the average difference values for these parameters were -1.5, -4.3, -2.7 and -1.3, respectively, and the corresponding values were 3.3, 2.1, 2.9 and 2.5, respectively. CLD, CSTD, ALD and BLD in the tangential fields are easily identifiable and are helpful for detecting setup errors using EPIs in patients undergoing tangential breast radiotherapy treatment.

  17. Vibrotactile Compliance Feedback for Tangential Force Interaction.

    Science.gov (United States)

    Heo, Seongkook; Lee, Geehyuk

    2017-01-01

    This paper presents a method to generate a haptic illusion of compliance using a vibrotactile actuator when a tangential force is applied to a rigid surface. The novel method builds on a conceptual compliance model where a physical object moves on a textured surface in response to a tangential force. The method plays vibration patterns simulating friction-induced vibrations as an applied tangential force changes. We built a prototype consisting of a two-dimensional tangential force sensor and a surface transducer to test the effectiveness of the model. Participants in user experiments with the prototype perceived the rigid surface of the prototype as a moving, rubber-like plate. The main findings of the experiments are: 1) the perceived stiffness of a simulated material can be controlled by controlling the force-playback transfer function, 2) its perceptual properties such as softness and pleasantness can be controlled by changing friction grain parameters, and 3) the use of the vibrotactile compliance feedback reduces participants' workload including physical demand and frustration while performing a force repetition task.

  18. Dynamic membrane filtration in tangential flow

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    Oil-containing waste water is produced in many cleaning processes and also in the production of compressed air. Dynamic membrane filtration in the tangential flow mode has proved effective in the treatment of these stable emulsions. The possible applications of ceramic membrane filters are illustrated for a variety of examples. (orig.)

  19. Variation of Drying Strains between Tangential and Radial Directions in Asian White Birch

    Directory of Open Access Journals (Sweden)

    Zongying Fu

    2016-03-01

    In this study, wood disks 30 mm in thickness cut from white birch (Betula platyphylla Suk.) logs were dried at a constant temperature (40 °C). The drying strains, including practical shrinkage strain, elastic strain, viscoelastic creep strain and mechano-sorptive creep, were measured both tangentially and radially. The effects of moisture content and radial position on each strain are also discussed qualitatively. Overall, the difference in practical shrinkage strain between the tangential and radial directions was proportional to the distance from the pith. The tangential elastic strain and viscoelastic creep strain were higher than the corresponding radial strains, and they all decreased with decreasing moisture content. Additionally, the mechano-sorptive creep was opposite between the tangential and radial directions.

  20. Conservation Properties of the Hamiltonian Particle-Mesh method for the Quasi-Geostrophic Equations on a sphere

    NARCIS (Netherlands)

    H. Thorsdottir (Halldora)

    2011-01-01

    The Hamiltonian particle-mesh (HPM) method is used to solve the Quasi-Geostrophic model generalized to a sphere, using the Spherepack modeling package to solve the Helmholtz equation on a colatitude-longitude grid with spherical harmonics. The predicted energy conservation of a

  1. The three-dimensional distributions of tangential velocity and total- temperature in vortex tubes

    DEFF Research Database (Denmark)

    Linderstrøm-Lang, C.U.

    1971-01-01

    The axial and radial gradients of the tangential velocity distribution are calculated from prescribed secondary flow functions on the basis of a zero-order approximation to the momentum equations developed by Lewellen. It is shown that secondary flow functions may be devised which meet pertinent...... physical requirements and which at the same time lead to realistic tangential velocity gradients. The total-temperature distribution in both the axial and radial directions is calculated from such secondary flow functions and corresponding tangential velocity results on the basis of an approximate...

  2. Intensity modulated tangential beam irradiation of the intact breast

    International Nuclear Information System (INIS)

    Hong, L.; Hunt, M.; Chui, C.; Forster, K.; Lee, H.; Lutz, W.; Yahalom, J.; Kutcher, G.J.; McCormick, B.

    1997-01-01

    Purpose/Objective: The purpose of this study was to evaluate the potential benefits of intensity modulated tangential beams in the irradiation of the intact breast. The primary goal was to develop an intensity modulated treatment which would substantially decrease the dose to coronary arteries, lung and contralateral breast while still using a standard tangential beam arrangement. Improved target dose homogeneity, within the limits imposed by opposed fields, was also desired. Since a major goal of the study was the development of a technique which was practical for use on a large population of patients, the design of 'standard' intensity profiles analogous in function to conventional wedges was also investigated. Materials and Methods: Three dimensional treatment planning was performed using both conventional and intensity modulated tangential beams. Plans were developed for both the right and left breast for a range of patient sizes and shapes. For each patient, PTV, lung, heart, origin and peripheral branches of the coronary artery, and contralateral breast were contoured. Optimum tangential beam direction and shape were designed using Beams-Eye-View display and then used for both the conventional and intensity modulated plans. For the conventional plan, the optimum wedge combination and beam weighting were chosen based on the dose distribution in a single transverse plane through the field center. Intensity modulated plans were designed using an algorithm which allows the user to specify the prescribed, maximum and minimum acceptable doses and dose volume constraints for each organ of interest. Plans were compared using multiple dose distributions and DVHs. Results: Significant improvements in the doses to critical structures were achieved using the intensity modulated plan. Coronary artery dose decreased substantially for patients treated to the left breast. Ipsilateral lung and contralateral breast doses decreased for all patients. For one patient treated to

  3. Dependence of the wind climate of Ireland on the direction distribution of geostrophic wind

    Energy Technology Data Exchange (ETDEWEB)

    Frank, H.P. [Forskningcenter Risoe, Roskilde (Denmark). Afdelingen for Vindenergi og Atmosfaerefysik

    1998-01-01

    The wind climate of Ireland is calculated using the Karlsruhe Atmospheric Mesoscale Model KAMM. The dependence of the simulated wind energy on the direction distribution of geostrophic wind is studied. As geostrophic winds from the south-west are most frequent, sites on the north-west coast are particularly suited for wind power stations. In addition, the mean geostrophic wind increases from the south-east to the north-west. (orig.)

  4. Effects of magnetic drift tangential to magnetic surfaces on neoclassical transport in non-axisymmetric plasmas

    International Nuclear Information System (INIS)

    Matsuoka, Seikichi; Satake, Shinsuke; Kanno, Ryutaro; Sugama, Hideo

    2015-01-01

    In evaluating neoclassical transport by radially local simulations, the magnetic drift tangential to a flux surface is usually ignored in order to keep the phase-space volume conservation. In this paper, the effect of the tangential magnetic drift on the local neoclassical transport is investigated. To retain the effect of the tangential magnetic drift in the local treatment of neoclassical transport, a new local formulation for the drift kinetic simulation is developed. The compressibility of the phase-space volume caused by the tangential magnetic drift is regarded as a source term for the drift kinetic equation, which is solved by using a two-weight δf Monte Carlo method for non-Hamiltonian systems [G. Hu and J. A. Krommes, Phys. Plasmas 1, 863 (1994)]. It is demonstrated that the effect of the drift is negligible for the neoclassical transport in tokamaks. In non-axisymmetric systems, however, the tangential magnetic drift substantially changes the dependence of the neoclassical transport on the radial electric field E_r. The peaked behavior of the neoclassical radial fluxes around E_r = 0 observed in conventional local neoclassical transport simulations is removed by taking the tangential magnetic drift into account.

  5. Effects of magnetic drift tangential to magnetic surfaces on neoclassical transport in non-axisymmetric plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Matsuoka, Seikichi, E-mail: matsuoka@rist.or.jp [Research Organization for Information Science and Technology, 6F Kimec-Center Build., 1-5-2 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047 (Japan); Satake, Shinsuke; Kanno, Ryutaro [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292 (Japan); Department of Fusion Science, SOKENDAI (The Graduate University for Advanced Studies), 322-6 Oroshi-cho, Toki 509-5292 (Japan); Sugama, Hideo [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292 (Japan)

    2015-07-15

    In evaluating neoclassical transport by radially local simulations, the magnetic drift tangential to a flux surface is usually ignored in order to keep the phase-space volume conservation. In this paper, the effect of the tangential magnetic drift on the local neoclassical transport is investigated. To retain the effect of the tangential magnetic drift in the local treatment of neoclassical transport, a new local formulation for the drift kinetic simulation is developed. The compressibility of the phase-space volume caused by the tangential magnetic drift is regarded as a source term for the drift kinetic equation, which is solved by using a two-weight δf Monte Carlo method for non-Hamiltonian systems [G. Hu and J. A. Krommes, Phys. Plasmas 1, 863 (1994)]. It is demonstrated that the effect of the drift is negligible for the neoclassical transport in tokamaks. In non-axisymmetric systems, however, the tangential magnetic drift substantially changes the dependence of the neoclassical transport on the radial electric field E_r. The peaked behavior of the neoclassical radial fluxes around E_r = 0 observed in conventional local neoclassical transport simulations is removed by taking the tangential magnetic drift into account.

  6. Tangential channel for nuclear gamma-resonance spectroscopy in thermal neutron capture

    International Nuclear Information System (INIS)

    Belogurov, V.N.; Bondars, H.Ya.; Lapenas, A.A.; Reznikov, R.S.; Senkov, P.E.

    1979-01-01

    The design of a tangential reactor channel, made to replace the radial one in the pulsed research reactor IRT-2000, is described. It allows the same hole in the biological reactor shielding to be used. Characteristics of the neutron and gamma-background spectra at the exit of the channel are given and compared with the analogous characteristics of the radial channel. The gamma background in the tangential channel is lower than in the radial channel. The gamma spectra in the Gd-155(n,γ)Gd-156, Gd-157(n,γ)Gd-158, Er-167(n,γ)Er-168 and Hf-177(n,γ)Hf-178 reactions show that the application of X-ray detection units BDR with the tangential channel makes it possible to carry out gamma spectrometry of the gamma quanta emitted in thermal neutron capture by nuclei with both high and low neutron capture cross sections (e.g., Gd-157 and Gd-155, and Er-167 and Hf-177, respectively)

  7. Incorporating geostrophic wind information for improved space–time short-term wind speed forecasting

    KAUST Repository

    Zhu, Xinxin

    2014-09-01

    Accurate short-term wind speed forecasting is needed for the rapid development and efficient operation of wind energy resources. This is, however, a very challenging problem. Although on the large scale the wind speed is related to atmospheric pressure, temperature, and other meteorological variables, no improvement in forecasting accuracy was found by incorporating air pressure and temperature directly into an advanced space-time statistical forecasting model, the trigonometric direction diurnal (TDD) model. This paper proposes to incorporate the geostrophic wind as a new predictor in the TDD model. The geostrophic wind captures the physical relationship between wind and pressure through the observed approximate balance between the pressure gradient force and the Coriolis acceleration due to the Earth’s rotation. Based on our numerical experiments with data from West Texas, our new method produces more accurate forecasts than does the TDD model using air pressure and temperature for 1- to 6-hour-ahead forecasts based on three different evaluation criteria. Furthermore, forecasting errors can be further reduced by using moving average hourly wind speeds to fit the diurnal pattern. For example, our new method obtains between 13.9% and 22.4% overall mean absolute error reduction relative to persistence in 2-hour-ahead forecasts, and between 5.3% and 8.2% reduction relative to the best previous space-time methods in this setting.
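
    The geostrophic-wind predictor follows from the balance between the pressure gradient force and the Coriolis force; a minimal sketch for a gridded surface pressure field, with constants and names of our own choosing (the paper's actual predictor construction may differ):

```python
import numpy as np

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s
RHO_AIR = 1.225    # air density near the surface, kg/m^3

def geostrophic_wind(p, lat_deg, dx, dy):
    """u_g = -(1/(rho*f)) dp/dy,  v_g = (1/(rho*f)) dp/dx.

    p: pressure (Pa), 2-D array indexed [y, x] on a uniform grid (m);
    lat_deg: representative latitude, so f is treated as constant here.
    """
    f = 2 * OMEGA * np.sin(np.deg2rad(lat_deg))
    u_g = -np.gradient(p, dy, axis=0) / (RHO_AIR * f)
    v_g = np.gradient(p, dx, axis=1) / (RHO_AIR * f)
    return u_g, v_g
```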

  8. Absence of Tangentially Migrating Glutamatergic Neurons in the Developing Avian Brain

    Directory of Open Access Journals (Sweden)

    Fernando García-Moreno

    2018-01-01

    Summary: Several neuronal populations orchestrate neocortical development during mammalian embryogenesis. These include the glutamatergic subplate-, Cajal-Retzius-, and ventral pallium-derived populations, which coordinate cortical wiring, migration, and proliferation, respectively. These transient populations are primarily derived from non-cortical pallial sources that migrate to the dorsal pallium. Are these migrations to the dorsal pallium conserved in amniotes, or are they specific to mammals? Using in ovo electroporation, we traced the entire lineage of defined chick telencephalic progenitors. We found that several pallial sources that produce tangentially migrating neurons in mammals produced only radially migrating neurons in the avian brain. Moreover, ectopic expression of ventral pallium (VP)-specific mammalian Dbx1 in avian brains altered neurogenesis but did not convert the migration into a mammal-like tangential movement. Together, these data indicate that tangential cellular contributions of glutamatergic neurons originate from outside the dorsal pallium and that pallial Dbx1 expression may underlie the generation of the mammalian neocortex during evolution. In brief: Neocortical formation crucially depends on the early tangential arrival of several transient glutamatergic neuronal populations. García-Moreno et al. find that these neuronal migrations are absent in the developing brain of chicks. The mammalian uniqueness of these developmental migrations suggests a crucial role for these cells in the evolutionary origin of the neocortex. Keywords: neocortex, chick, pallium, ventral pallium, evo-devo, evolution, Dbx1, telencephalon

  9. Toward an extended-geostrophic Euler-Poincare model for mesoscale oceanographic flow

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.S.; Newberger, P.A. [Oregon State Univ., Corvallis, OR (United States). Coll. of Oceanic and Atmospheric Sciences; Holm, D.D. [Los Alamos National Lab., NM (United States)

    1998-07-01

    The authors consider the motion of a rotating, continuously stratified fluid governed by the hydrostatic primitive equations (PE). An approximate Hamiltonian (L1) model for small Rossby number ε is derived for application to mesoscale oceanographic flow problems. Numerical experiments involving a baroclinically unstable oceanic jet are utilized to assess the accuracy of the L1 model compared to the PE and to other approximate models, such as the quasigeostrophic (QG) and the geostrophic momentum (GM) equations. The results of the numerical experiments for moderate Rossby number flow show that the L1 model gives accurate solutions with errors substantially smaller than QG or GM.

  10. Hydromagnetic quasi-geostrophic modes in rapidly rotating planetary cores

    DEFF Research Database (Denmark)

    Canet, E.; Finlay, Chris; Fournier, A.

    2014-01-01

    The core of a terrestrial-type planet consists of a spherical shell of rapidly rotating, electrically conducting, fluid. Such a body supports two distinct classes of quasi-geostrophic (QG) eigenmodes: fast, primarily hydrodynamic, inertial modes with period related to the rotation time scale […] decreases toward the outer boundary in a spherical shell, QG modes tend to be compressed towards the outer boundary. Including magnetic dissipation, we find a continuous transition from diffusionless slow magnetic modes into quasi-free decay magnetic modes. During that transition (which is controlled […], or shorter than, their oscillation time scale. Based on our analysis, we expect Mercury to be in a regime where the slow magnetic modes are of quasi-free decay type. Earth and possibly Ganymede, with their larger Elsasser numbers, may possess slow modes that are in the transition regime of weak diffusion […]

  11. Design of tangential viewing phase contrast imaging for turbulence measurements in JT-60SA

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, K., E-mail: ktanaka@nifs.ac.jp [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Department of Advanced Energy Engineering, Kyushu University, Kasuga, Fukuoka 816-8580 (Japan); Coda, S. [EPFL–SPC, Lausanne (Switzerland); Yoshida, M.; Sasao, H.; Kawano, Y.; Imazawa, R.; Kubo, H.; Kamada, Y. [National Institutes for Quantum and Radiological Science and Technology, Naka, Ibaraki 311-0193 (Japan)

    2016-11-15

    A tangential viewing phase contrast imaging system is being designed for the JT-60SA tokamak to investigate microturbulence. In order to obtain localized information on the turbulence, a spatial-filtering technique is applied, based on magnetic shearing. The tangential viewing geometry enhances the radial localization. The probing laser beam is injected tangentially and traverses the entire plasma region including both low and high field sides. The spatial resolution for an Internal Transport Barrier discharge is estimated at 30%–70% of the minor radius at k = 5 cm⁻¹, which is the typical expected wave number of ion scale turbulence such as ion temperature gradient/trapped electron mode.

  12. Large-Scale Flows and Magnetic Fields Produced by Rotating Convection in a Quasi-Geostrophic Model of Planetary Cores

    Science.gov (United States)

    Guervilly, C.; Cardin, P.

    2017-12-01

    Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ∼ 10⁴ and Ro ∼ 10⁻⁴ for Prandtl numbers relevant for liquid metals (Pr ∼ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.

  13. Accuracy in tangential breast treatment set-up

    International Nuclear Information System (INIS)

    Tienhoven, G. van; Lanson, J.H.; Crabeels, D.; Heukelom, S.; Mijnheer, B.J.

    1991-01-01

    To test the accuracy and reproducibility of the tangential breast treatment set-up used in The Netherlands Cancer Institute, a portal imaging study was performed in 12 patients treated for early stage breast cancer. With an on-line electronic portal imaging device (EPID), images were obtained of each patient in several fractions and compared with simulator films and with each other. In 5 patients multiple images (on average 7) per fraction were obtained to evaluate set-up variations due to respiratory movement. The central lung distance (CLD) and other set-up parameters varied within one fraction by about 1 mm (1 SD). The average variation of these parameters between fractions was about 2 mm (1 SD). The difference between simulator and treatment set-up over all patients and all fractions was on average 2-3 mm for the central beam edge to skin distance and the CLD. It can be concluded that the tangential breast treatment set-up is very stable and reproducible and that respiration does not have a significant influence on the treatment volume. The EPID appears to be an adequate tool for studies of treatment set-up accuracy such as this. (author). 35 refs.; 2 figs.; 3 tabs

  14. Mapping sub-surface geostrophic currents from altimetry and a fleet of gliders

    Science.gov (United States)

    Alvarez, A.; Chiggiato, J.; Schroeder, K.

    2013-04-01

    Integrating the observations gathered by different platforms into a unique physical picture of the environment is a fundamental aspect of networked ocean observing systems. These are constituted by a spatially distributed set of sensors and platforms that simultaneously monitor a given ocean region. Remote sensing from satellites is an integral part of present ocean observing systems. Due to their autonomy, mobility and controllability, underwater gliders are envisioned to play a significant role in the development of networked ocean observatories. Exploiting the synergy between remote sensing and underwater gliders is expected to result in a better characterization of the marine environment than using these observational sources individually. This study investigates a methodology to estimate the three-dimensional distribution of geostrophic currents by merging satellite altimetry and in situ samples gathered by a fleet of Slocum gliders. Specifically, the approach computes the volumetric, or three-dimensional, distribution of absolute dynamic height (ADH) that minimizes the total energy of the system while being close to the in situ observations and matching the absolute dynamic topography (ADT) observed from satellite at the sea surface. A three-dimensional finite element technique is employed to solve the minimization problem. The methodology is validated using the dataset collected during the field experiment called Rapid Environmental Picture-2010 (REP-10), carried out by the NATO Undersea Research Centre (NURC) during August 2010. A marine region offshore of La Spezia (northwest coast of Italy) was sampled by a fleet of three coastal Slocum gliders. Results indicate that the geostrophic current field estimated from gliders and altimetry significantly improves the estimates obtained using only the data gathered by the glider fleet.
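
    A one-dimensional toy version of this variational merge conveys the idea: treat the dynamic-height profile as the unknown, penalize misfit to the glider observations and to the altimetric surface value, and regularize with a smoothness term. The weights, grid and function names below are illustrative assumptions, not the finite-element formulation used in the study.

```python
import numpy as np

def blend_column(z, obs_idx, obs_val, adt_surface,
                 w_obs=1.0, w_surf=10.0, w_smooth=0.1):
    """Toy 1-D analogue of the variational merge: find a dynamic-height
    profile on levels z that (i) stays close to glider observations,
    (ii) matches the altimetric ADT at the surface (first level), and
    (iii) is smooth (small second differences). All weights are
    illustrative, not values used in the study."""
    n = len(z)
    rows, rhs = [], []
    # Smoothness: penalize curvature of the profile
    for i in range(1, n - 1):
        r = np.zeros(n); r[i-1], r[i], r[i+1] = 1.0, -2.0, 1.0
        rows.append(w_smooth * r); rhs.append(0.0)
    # Glider observations at selected levels
    for i, v in zip(obs_idx, obs_val):
        r = np.zeros(n); r[i] = 1.0
        rows.append(w_obs * r); rhs.append(w_obs * v)
    # Altimetric constraint at the surface
    r = np.zeros(n); r[0] = 1.0
    rows.append(w_surf * r); rhs.append(w_surf * adt_surface)
    profile, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return profile
```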

  15. The irradiation lung volume in tangential fields for the treatment of a breast

    International Nuclear Information System (INIS)

    Oh, Y. T.; Kim, J. R.; Kang, H. J.; Sohn, J. H.; Kang, S. H.; Chun, M. S.

    1997-01-01

    Radiation pneumonitis is one of the complications caused by radiation therapy that includes a portion of the lung tissue. The severity of radiation-induced pulmonary dysfunction depends on the irradiated lung volume, total dose, dose rate and underlying pulmonary function. The lung volume was measured for 25 patients with breast cancer irradiated with tangential fields from Jan. 1995 to Aug. 1996. Parameters that can predict the irradiated lung volume included: (1) the perpendicular distance from the posterior tangential edge to the posterior part of the anterior chest wall at the center of the field (CLD); (2) the maximum perpendicular distance from the posterior tangential field edge to the posterior part of the anterior chest wall (MLD); (3) the greatest perpendicular distance from the posterior tangential edge to the posterior part of the anterior chest wall on the CT image at the center of the longitudinal field (GPD); and (4) the length of the longitudinal field (L). The irradiated lung volume (RV), the volume of both entire lungs (EV) and the ipsilateral lung volume (IV) were measured using dose-volume histograms. The RV is 61-279 cc, the RV/EV is 2.9-13.0% and the RV/IV is 4.9-29.6%. The CLD, the MLD and the GPD are 1.9-3.3 cm and 1.4-3.1 cm, respectively. No significant relationship was found between the irradiated lung volumes (RV, RV/EV, RV/IV) and the parameters (CLD, MLD, GPD, L, CLD × L, MLD × L and GPD × L), owing to the small variance of the parameters. The RV/IV of the left tangential field is larger than that of the right, but there is no significant difference in RV/EV. The irradiated lung volume in the tangential field is less than 10% of the entire lung volume when the CLD is less than 3 cm. Symptomatic radiation pneumonitis has not occurred during a minimum 6-month follow-up. (author)

  16. Towards Noncommutative Topological Quantum Field Theory: Tangential Hodge-Witten cohomology

    International Nuclear Information System (INIS)

    Zois, I P

    2014-01-01

    Some years ago we initiated a program to define Noncommutative Topological Quantum Field Theory (see [1]). The motivation came both from physics and mathematics: on the one hand, as far as physics is concerned, following the well-known holography principle of 't Hooft (which in turn appears essentially as a generalisation of the Hawking formula for black hole entropy), quantum gravity should be a topological quantum field theory. On the other hand, as far as mathematics is concerned, the motivation came from the idea to replace the moduli space of flat connections with the Gabai moduli space of codim-1 taut foliations of 3-dimensional manifolds. In most cases the latter is finite and much better behaved, and one might use it to define some version of Donaldson-Floer homology which, hopefully, would be easier to compute. The use of foliations brings noncommutative geometry techniques immediately into the game. The basic tools are two: cyclic cohomology of the corresponding foliation C*-algebra, and the so-called "tangential cohomology" of the foliation. A necessary step towards this goal is to develop some sort of Hodge theory both for cyclic (and Hochschild) cohomology and for tangential cohomology. Here we present a method to develop a Hodge theory for the tangential cohomology of foliations by mimicking Witten's approach to ordinary Morse theory via perturbations of the Laplacian.

  17. Nudging Satellite Altimeter Data Into Quasi-Geostrophic Ocean Models

    Science.gov (United States)

    Verron, Jacques

    1992-05-01

    This paper discusses the efficiency of several variants of the nudging technique (derived from the technique of the same name developed by meteorologists) for assimilating altimeter data into numerical ocean models based on quasi-geostrophic formulation. Assimilation experiments are performed with data simulated in the nominal sampling conditions of the Topex-Poseidon satellite mission. Under experimental conditions it is found that nudging on the altimetric sea level is as efficient as nudging on the vorticity (second derivative in space of the dynamic topography), the technique used thus far in studies of this type. The use of altimetric residuals only, instead of the total altimetric sea level signal, is also explored. The critical importance of having an adequate reference mean sea level is largely confirmed. Finally, the possibility of nudging only the signal of sea level tendency (i.e., the successive time differences of the sea level height) is examined. Apart from the barotropic mode, results are not very successful compared with those obtained by assimilating the residuals.
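
    In its simplest form, nudging adds a Newtonian relaxation term that pulls the model state toward the observations on a chosen time scale. The sketch below shows that generic form only; the QG implementation described above nudges the sea level (streamfunction) or vorticity fields, and the explicit time stepping, names and values here are assumptions.

```python
def nudge_step(state, obs, dt, tau, dynamics):
    """One explicit step of a model with Newtonian relaxation (nudging):
        d(state)/dt = dynamics(state) + (obs - state) / tau,
    where tau is the relaxation time scale and obs is the observed
    quantity (e.g. altimetric sea level, or residuals) interpolated to
    the model grid. Schematic form only."""
    return state + dt * (dynamics(state) + (obs - state) / tau)

# Example: a slowly decaying toy "model" nudged toward a constant observation
state = 0.0
for _ in range(100):
    state = nudge_step(state, obs=1.0, dt=0.1, tau=2.0,
                       dynamics=lambda s: -0.05 * s)
```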

  18. Synoptic Monthly Gridded WOD Absolute Geostrophic Velocity (SMG-WOD-V) (January 1945 - December 2014) with the P-Vector Method (NCEI Accession 0146195)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The SMG-WOD-V dataset comprises synoptic monthly global gridded fields of absolute geostrophic velocity inverted from the synoptic monthly gridded WOD temperature...

  19. Arctic-Mid-Latitude Linkages in a Nonlinear Quasi-Geostrophic Atmospheric Model

    Directory of Open Access Journals (Sweden)

    Dörthe Handorf

    2017-01-01

    A quasi-geostrophic three-level T63 model of the wintertime atmospheric circulation of the Northern Hemisphere has been applied to investigate the impact of Arctic amplification (the increase in surface air temperatures and loss of Arctic sea ice during the last 15 years) on the mid-latitude large-scale atmospheric circulation. The model demonstrates a mid-latitude response to an Arctic diabatic heating anomaly. A clear shift towards a negative phase of the Arctic Oscillation (AO−) occurs during low sea-ice-cover conditions, connected with a weakening of the mid-latitude westerlies over the Atlantic and colder winters over Northern Eurasia. Compared to reanalysis data, there is no clear model response with respect to the Pacific Ocean and North America.

  20. Multiple zonal jets and convective heat transport barriers in a quasi-geostrophic model of planetary cores

    Science.gov (United States)

    Guervilly, C.; Cardin, P.

    2017-10-01

    We study rapidly rotating Boussinesq convection driven by internal heating in a full sphere. We use a numerical model based on the quasi-geostrophic approximation for the velocity field, whereas the temperature field is 3-D. This approximation allows us to perform simulations for Ekman numbers down to 10⁻⁸, Prandtl numbers relevant for liquid metals (∼10⁻¹) and Reynolds numbers up to 3 × 10⁴. Persistent zonal flows composed of multiple jets form as a result of the mixing of potential vorticity. For the largest Rayleigh numbers computed, the zonal velocity is larger than the convective velocity despite the presence of boundary friction. The convective structures and the zonal jets widen when the thermal forcing increases. Prograde and retrograde zonal jets are dynamically different: in the prograde jets (which correspond to weak potential vorticity gradients) the convection transports heat efficiently and the mean temperature tends to be homogenized; by contrast, in the cores of the retrograde jets (which correspond to steep gradients of potential vorticity) the dynamics is dominated by the propagation of Rossby waves, resulting in the formation of steep mean temperature gradients and the dominance of conduction in the heat transfer process. Consequently, in quasi-geostrophic systems, the width of the retrograde zonal jets controls the efficiency of the heat transfer.

  1. Kalker's algorithm Fastsim solves tangential contact problems with slip-dependent friction and friction anisotropy

    Science.gov (United States)

    Piotrowski, J.

    2010-07-01

    This paper presents two extensions of Kalker's algorithm Fastsim of the simplified theory of rolling contact. The first extension is for solving tangential contact problems with the coefficient of friction depending on slip velocity. Two friction laws have been considered: with and without recuperation of the static friction. According to the tribological hypothesis of shear failure in metallic bodies, the friction law without recuperation of static friction is more suitable for wheel and rail than the other one. Sample results present local quantities inside the contact area (division into slip and adhesion areas, traction) as well as global ones (creep forces as functions of creepages and rolling velocity). For a coefficient of friction diminishing with slip, the creep forces decay after reaching a maximum, and they depend on the rolling velocity. The second extension is for solving tangential contact problems with friction anisotropy, characterised by a convex set of the permissible tangential tractions. The effect of the anisotropy is shown on examples of rolling without spin and in the presence of pure spin for an elliptical set. The friction anisotropy influences tangential tractions and creep forces. Sample results present local and global quantities. Both extensions have been described with the same language of formulation and they may be merged into one, joint algorithm.
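
    For readers unfamiliar with Fastsim, the core of the algorithm is short: march over the contact ellipse from the leading edge, accumulate elastic tangential traction from the rigid slip, and clip it to the local friction bound. The sketch below is a simplified version, not the paper's code: a single flexibility parameter replaces Kalker's three, the friction coefficient is constant (a slip-velocity-dependent law would give the first extension, an anisotropic convex bound the second), and all names and discretization choices are assumptions.

```python
import numpy as np

def fastsim(xi, eta, phi, a, b, N, mu, L, nx=60, ny=40):
    """Simplified Fastsim for an elliptic contact (semi-axes a, b [m])
    under longitudinal/lateral creepage (xi, eta) and spin (phi [1/m]),
    normal load N [N], friction coefficient mu and flexibility L.
    Returns the total creep forces (Fx, Fy). Sign conventions follow
    one common choice and may differ from the paper's."""
    p0 = 2.0 * N / (np.pi * a * b)          # peak of the paraboloidal pressure
    Fx = Fy = 0.0
    for y in np.linspace(-b * (1 - 0.5 / ny), b * (1 - 0.5 / ny), ny):
        ax = a * np.sqrt(max(1.0 - (y / b) ** 2, 0.0))   # strip half-length
        if ax == 0.0:
            continue
        dx = 2.0 * ax / nx
        dA = dx * (2.0 * b / ny)            # area of one surface element
        px = py = 0.0                       # traction vanishes at leading edge
        # march from the leading edge (x = +ax) to the trailing edge
        for x in np.linspace(ax - 0.5 * dx, -ax + 0.5 * dx, nx):
            pn = p0 * (1.0 - (x / a) ** 2 - (y / b) ** 2)   # normal pressure
            # accumulate elastic traction from the rigid slip
            px_t = px - (xi - phi * y) * dx / L
            py_t = py - (eta + phi * x) * dx / L
            bound = mu * pn  # a slip-dependent mu(s) would enter here (ext. 1)
            norm = np.hypot(px_t, py_t)
            if norm <= bound:                       # adhesion
                px, py = px_t, py_t
            else:                                   # slip: clip to the bound
                px, py = bound * px_t / norm, bound * py_t / norm
                # an anisotropic convex set would replace this circle (ext. 2)
            Fx += px * dA
            Fy += py * dA
    return Fx, Fy
```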

  2. Generalization of the quasi-geostrophic Eliassen-Palm flux to include eddy forcing of condensation heating

    Science.gov (United States)

    Stone, P. H.; Salustri, G.

    1984-01-01

    A modified Eulerian form of the Eliassen-Palm flux which includes the effect of eddy forcing on condensation heating is defined. When this modified flux replaces the two-dimensional vector flux in the meridional plane (a function of the zonal-mean eddy fluxes), both the Eliassen-Palm theorem and a modified but more general form of the nonacceleration theorem for quasi-geostrophic motion still hold. Calculations of the divergence of the modified flux and of the eddy forcing of the moisture field are presented.
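
    For concreteness, the standard (dry) quasi-geostrophic EP flux that this work generalizes can be computed from zonal-mean eddy fluxes as sketched below; the condensation-heating contribution of the modified flux is omitted, and the grid layout, parameter values and names are assumptions.

```python
import numpy as np

def qg_ep_flux(u, v, theta, z, f0=1e-4):
    """Standard quasi-geostrophic Eliassen-Palm flux from fields
    u, v, theta on a (z, lat, lon) grid, with z the heights [m]:
        F_y = -[u'v'],   F_z = f0 [v'theta'] / d[theta]/dz,
    where [.] is a zonal mean and primes are deviations from it.
    The modified flux of Stone and Salustri adds a condensation-
    heating term to F_z, which is omitted here."""
    zm = lambda q: q.mean(axis=-1, keepdims=True)        # zonal mean
    up, vp, thp = u - zm(u), v - zm(v), theta - zm(theta)
    Fy = -(up * vp).mean(axis=-1)                        # -[u'v']
    theta0_z = np.gradient(zm(theta)[..., 0], z, axis=0) # d[theta]/dz
    Fz = f0 * (vp * thp).mean(axis=-1) / theta0_z
    return Fy, Fz
```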

  3. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence-driven particle fluxes. To visualize the turbulence and the associated impurity line emission near the lower x-point region, a new tangential observation port has recently been installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R ≈ 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40,500 frames/s Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. The edge fluid and turbulence codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor.

  4. Comparison of histologic margin status in low-grade cutaneous and subcutaneous canine mast cell tumours examined by radial and tangential sections.

    Science.gov (United States)

    Dores, C B; Milovancev, M; Russell, D S

    2018-03-01

    Radial sections are widely used to estimate adequacy of excision in canine cutaneous mast cell tumours (MCTs); however, this sectioning technique examines only a small fraction of the total margin circumference. This study aimed to compare histologic margin status in grade II/low-grade MCTs sectioned using both radial and tangential sectioning techniques. A total of 43 circumferential margins were evaluated from 21 different tumours. Margins were first sectioned radially, followed by tangential sections. Tissues were examined by routine histopathology. Tangential margin status differed in 10 of 43 (23.3%) margins compared with their initial status on radial section. Of 39 margins, 9 (23.1%) categorized as having a histologic tumour-free margin (HTFM) >0 mm were positive on tangential sectioning. Tangential sections detected a significantly higher proportion of positive margins relative to radial sections (exact 2-tailed P-value = .0215). The HTFM was significantly longer in negative tangential margins than in positive tangential margins (mean 10.1 vs 3.2 mm; P = .0008). A receiver operating characteristic curve comparing HTFM and tangentially negative margins found an area under the curve of 0.83 (95% confidence interval: 0.71-0.96). Although correct classification peaked at the sixth cut-point of HTFM ≥1 mm, radial sections still incorrectly classified 50% of margins as lacking tumour cells. Radial sections had 100% specificity for predicting negative tangential margins at a cut-point of 10.9 mm. These data indicate that low-grade MCTs with HTFM >0 mm should not be considered completely excised, particularly when the HTFM is <10.9 mm. This will inform future studies that use HTFM and overall excisional status as dependent variables in multivariable prognostic models. © 2017 John Wiley & Sons Ltd.

  5. Uncertainties in estimating heart doses from 2D-tangential breast cancer radiotherapy

    DEFF Research Database (Denmark)

    Laugaard Lorenzen, Ebbe; Brink, Carsten; Taylor, Carolyn W.

    2016-01-01

    BACKGROUND AND PURPOSE: We evaluated the accuracy of three methods of estimating radiation dose to the heart from two-dimensional tangential radiotherapy for breast cancer, as used in Denmark during 1982-2002. MATERIAL AND METHODS: Three tangential radiotherapy regimens were reconstructed using CT-based planning scans for 40 patients with left-sided and 10 with right-sided breast cancer. Setup errors and organ motion were simulated using estimated uncertainties. For left-sided patients, mean heart dose was related to maximum heart distance in the medial field. RESULTS: For left-sided breast cancer, mean […] to the uncertainty of estimates based on individual CT-scans. For right-sided breast cancer patients, mean heart dose based on individual CT-scans was always […]

  6. Reduction of NOx emission in tangential fired - furnace by changing the, mode of operation

    International Nuclear Information System (INIS)

    Chudnovsky, B.; Talanker, A.; Levin, L.; Kahana, S

    1998-01-01

    The present work analyses the results of tests on 575 MW units with a tangentially fired furnace arrangement in sub-stoichiometric combustion. Tangential firing provides good conditions for implementing sub-stoichiometric combustion owing to the delivery scheme of pulverized coal and air. The furnace was tested in several different modes of operation (over-fire air, bunkers out of service, excess air, tilt, etc.) to achieve low-cost NOx reduction. Actual performance data are presented based on experiments made on IEC's boiler in the M.D. 'B' power station.

  7. Tangential Biopsy Thickness versus Lesion Depth in Longitudinal Melanonychia: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Nilton Di Chiacchio

    2012-01-01

    Longitudinal melanonychia can be caused by melanocyte activation (hypermelanosis) or proliferation (lentigo, nevus or melanoma). Histopathologic examination is mandatory for cases of suspected melanoma. Tangential biopsy of the matrix is an elegant technique avoiding nail plate dystrophy, but it was unknown whether the depth of the sample obtained by this method is adequate for histopathologic diagnosis. Twenty-two patients with longitudinal melanonychia striata underwent tangential matrix biopsies as described by Haneke. The tissue was stained with hematoxylin-eosin and the specimens were measured at 3 distinct points according to the total thickness: largest (A), intermediate (B) and narrowest (C), then divided into 4 groups according to the histopathologic diagnosis (G1: hypermelanosis; G2: lentigo; G3: nevus; G4: melanoma). The lesions were measured using the same method. The mean specimen/lesion thickness values for each group were: G1: 0.59/0.10 mm; G2: 0.67/0.08 mm; G3: 0.52/0.05 mm; G4: 0.58/0.10 mm. The overall average thickness for all specimens/lesions was 0.59/0.08 mm. We conclude that tangential excision for longitudinal melanonychia provides adequate material for histopathologic diagnosis.

  8. The quality assessment of radial and tangential neutron radiography beamlines of TRR

    International Nuclear Information System (INIS)

    Dastjerdi, M.H. Choopan; Movafeghi, A.; Khalafi, H.; Kasesaz, Y.

    2017-01-01

    To achieve a quality neutron radiographic image in a relatively short exposure time, the neutron radiography beam must be of good quality and relatively high neutron flux. Characterization of a neutron radiography beam, such as determination of the image quality and the neutron flux, is vital for producing quality radiographic images and also provides a means to compare the quality of different neutron radiography facilities. This paper provides a characterization of the radial and tangential neutron radiography beamlines at the Tehran research reactor. This work includes determination of the facilities' category according to the American Society for Testing and Materials (ASTM) standards, and also uses gold foil activation to determine the neutron beam flux. The radial neutron beam is a Category I neutron radiography facility, the highest possible quality level according to the ASTM. The tangential beam is a Category IV neutron radiography facility. Gold foil activation experiments show that the measured neutron flux for the radial beamline with length-to-diameter ratio (L/D) = 150 is 6.1 × 10⁶ n cm⁻² s⁻¹ and for the tangential beamline with (L/D) = 115 is 2.4 × 10⁴ n cm⁻² s⁻¹.
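
    The practical meaning of the L/D ratio is easy to quantify: the geometric unsharpness of a divergent-beam radiograph scales inversely with it. A minimal illustration (the object-to-detector distance is an assumed value, not a facility measurement, and the ASTM category is set by separate beam-purity and sensitivity indicators, not by unsharpness alone):

```python
def geometric_unsharpness(t_mm, l_over_d):
    """Geometric unsharpness U_g = t / (L/D), where t is the
    object-to-detector distance [mm] and L/D the collimation ratio."""
    return t_mm / l_over_d

# For an object 30 mm from the detector (assumed distance):
print(geometric_unsharpness(30.0, 150))  # radial beam:     0.20 mm
print(geometric_unsharpness(30.0, 115))  # tangential beam: ~0.26 mm
```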

  9. Analysis of residual swirl in tangentially-fired natural gas-boiler

    International Nuclear Information System (INIS)

    Hasril Hasini; Muhammad Azlan Muad; Mohd Zamri Yusoff; Norshah Hafeez Shuaib

    2010-01-01

    This paper describes an investigation of residual swirl flow in a 120 MW natural gas, full-scale, tangentially-fired boiler. Emphasis is given to understanding the behavior of the combustion gas flow pattern and temperature distribution resulting from the tangential firing system of the boiler. The analysis was carried out using three-dimensional computational modeling of the full-scale boiler, with validation against key design parameters as well as practical observation. Actual operating parameters of the boiler were taken as the boundary conditions for the modeling. The predicted total heat flux was found to be in agreement with the key design parameter, while the residual swirl predicted at the upper furnace agrees qualitatively with practical observation. Based on this comparison, a detailed analysis was carried out for a comprehensive understanding of the generation and destruction of residual swirl in boilers, especially those with high capacity. (author)

  10. Wall Thickness Measurement Of Insulated Pipe By Tangential Radiography Technique Using Ir 192

    International Nuclear Information System (INIS)

    Soedarjo

    2000-01-01

    Measurement of pipe wall thickness through insulation by the tangential radiography technique has been carried out on two carbon steel pipes using an Iridium-192 source with an activity of 41 Ci. For the first pipe, the outer diameter is 90 mm, the insulation thickness is 75.0 mm, the source-to-film distance is 609.5 mm, the source-to-tangential-point distance is 489.5 mm, and the exposure time is 3 minutes and 25 seconds. From the calculation, the wall thickness of the first pipe is found to be 12.54 mm and that of the second pipe 8.42 mm. The residual error in the thickness is due to inaccuracy in reading the pipe thickness on the radiographic film and to geometric distortion of the radiation path.
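
    The thickness calculation itself is a simple magnification correction, sketched below with the geometry quoted above. The film reading in the example is an assumed value chosen to reproduce the reported 12.54 mm; a fuller treatment would also correct for the chord geometry of the tangential beam through the curved wall.

```python
def wall_thickness(t_film_mm, sfd_mm, sod_mm):
    """Tangential radiography: the pipe wall is projected onto the film
    with geometric magnification M = SFD / SOD, where SFD is the
    source-to-film distance and SOD the source-to-tangent-point
    distance. The wall thickness is the film measurement divided by M."""
    return t_film_mm / (sfd_mm / sod_mm)

# With the geometry quoted above (SFD = 609.5 mm, SOD = 489.5 mm) and an
# assumed film reading of 15.6 mm:
print(wall_thickness(15.6, 609.5, 489.5))   # ~12.5 mm
```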

  11. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observation that, for a large-scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)
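
    The filtering property of an implicit scheme is easy to demonstrate: for an oscillatory mode of frequency ω, backward Euler multiplies the amplitude by 1/√(1+(ωΔt)²) per step, so fast gravity and barotropic modes are damped away while the slow, climate-relevant modes pass almost unchanged. A minimal illustration (the 12-hour step and the mode periods are assumed values, not the LSG model's):

```python
import numpy as np

# Backward Euler applied to an oscillatory mode dy/dt = i*omega*y gives
#   y_{n+1} = y_n / (1 - i*omega*dt),  |amplification| = 1/sqrt(1+(omega*dt)^2),
# which strongly damps fast modes (omega*dt >> 1) and barely touches slow ones.
dt = 12 * 3600.0                              # a long, climate-style step [s]
for period_h in (1.0, 12.0, 24.0 * 30):       # fast wave ... slow mode
    omega = 2 * np.pi / (period_h * 3600.0)
    amp = 1.0 / np.sqrt(1.0 + (omega * dt) ** 2)
    print(f"period {period_h:7.1f} h: |amplification| per step = {amp:.3f}")
```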

  12. Combined tangential-normal vector elements for computing electric and magnetic fields

    International Nuclear Information System (INIS)

    Sachdev, S.; Cendes, Z.J.

    1993-01-01

    A direct method for computing electric and magnetic fields in two dimensions is developed. This method determines both the fields and fluxes directly from Maxwell's curl and divergence equations without introducing potential functions. This allows both the curl and the divergence of the field to be set independently in all elements. The technique is based on a new type of vector finite element that simultaneously interpolates to the tangential component of the electric or the magnetic field and the normal component of the electric or magnetic flux. Continuity conditions are imposed across element edges simply by setting like variables to be the same across element edges. This guarantees the continuity of the field and flux at the mid-point of each edge and that for all edges the average value of the tangential component of the field and of the normal component of the flux is identical

  13. Intensity-modulated tangential beam irradiation of the intact breast

    International Nuclear Information System (INIS)

    Hong, L.; Hunt, M.; Chui, C.; Spirou, S.; Forster, K.; Lee, H.; Yahalom, J.; Kutcher, G.J.; McCormick, B.

    1999-01-01

    Purpose: To evaluate the potential benefits of intensity-modulated tangential beams in the irradiation of the intact breast. Methods and Materials: Three-dimensional treatment planning was performed on five left and five right breasts using standard wedged and intensity-modulated (IM) tangential beams. Optimal beam parameters were chosen using beams-eye-view display. For the standard plans, the optimal wedge angles were chosen based on dose distributions in the central plane calculated without inhomogeneity corrections, according to our standard protocol. Intensity-modulated plans were generated using an inverse planning algorithm and a standard set of target and critical structure optimization criteria. Plans were compared using multiple dose distributions and dose-volume histograms for the planning target volume (PTV), ipsilateral lung, coronary arteries, and contralateral breast. Results: Significant improvements in the doses to critical structures were achieved using intensity modulation. Compared with a standard wedged plan prescribed to 46 Gy, the dose from the IM plan encompassing 20% of the coronary artery region decreased by 25% (from 36 to 27 Gy) for patients treated to the left breast; the mean dose to the contralateral breast decreased by 42% (from 1.2 to 0.7 Gy); the ipsilateral lung volume receiving more than 46 Gy decreased by 30% (from 10% to 7%); the volume of surrounding soft tissue receiving more than 46 Gy decreased by 31% (from 48% to 33%). Dose homogeneity within the target volume improved most in the superior and inferior regions of the breast (approximately 8%), although some decrease in the medial and lateral high-dose regions (approximately 4%) was also observed. Conclusion: Intensity modulation with a standard tangential beam arrangement significantly reduces the dose to the coronary arteries, ipsilateral lung, contralateral breast, and surrounding soft tissues. Improvements in dose homogeneity throughout the target volume can also be achieved.

  14. GABA regulates the multidirectional tangential migration of GABAergic interneurons in living neonatal mice.

    Directory of Open Access Journals (Sweden)

    Hiroyuki Inada

    Cortical GABAergic interneurons originate from the ganglionic eminences and tangentially migrate into the cortical plate at early developmental stages. To elucidate the characteristics of this migration of GABAergic interneurons in living animals, we established an experimental design specialized for in vivo time-lapse imaging of the neocortex of neonatal mice with two-photon laser-scanning microscopy. In vesicular GABA/glycine transporter (VGAT)-Venus transgenic mice from birth (P0) through P3, we observed multidirectional tangential migration of genetically defined GABAergic interneurons in the neocortical marginal zone. The properties of this migration, such as the motility rate (distance/hr), the direction moved, and the proportion of migrating neurons to stationary neurons, did not change from P0 through P3, although the density of GABAergic neurons at the marginal zone decreased with age. Thus, the characteristics of the tangential motility of individual GABAergic neurons remained constant during development. Pharmacological block of GABA(A) receptors and of the Na⁺-K⁺-Cl⁻ cotransporters, and chelating intracellular Ca²⁺, all significantly reduced the motility rate in vivo. The motility rate and GABA content within the cortex of neonatal VGAT-Venus transgenic mice were significantly greater than those of GAD67-GFP knock-in mice, suggesting that the extracellular GABA concentration could facilitate the multidirectional tangential migration. Indeed, diazepam applied to GAD67-GFP mice increased the motility rate substantially. In an in vitro neocortical slice preparation, we confirmed that GABA induced an NKCC-sensitive depolarization of GABAergic interneurons in VGAT-Venus mice at P0-P3. Thus, activation of GABA(A) receptors by ambient GABA depolarizes GABAergic interneurons, leading to an acceleration of their multidirectional motility in vivo.

  15. A vectorized Poisson solver over a spherical shell and its application to the quasi-geostrophic omega-equation

    Science.gov (United States)

    Mullenmeister, Paul

    1988-01-01

    The quasi-geostrophic omega-equation in flux form is developed as an example of a Poisson problem over a spherical shell. Solutions of this equation are obtained by applying a two-parameter Chebyshev solver in vector layout for CDC 200 series computers. The performance of this vectorized algorithm greatly exceeds the performance of its scalar analog. The algorithm generates solutions of the omega-equation which are compared with the omega fields calculated with the aid of the mass continuity equation.
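
    As a minimal stand-in for such a solver, the doubly periodic Cartesian case can be solved in a few lines with FFTs; the Chebyshev treatment of the spherical shell, the boundary conditions, and the CDC-specific vectorization are beyond this sketch, and the names below are assumptions.

```python
import numpy as np

def solve_poisson_2d(rhs, dx, dy):
    """Solve laplacian(w) = rhs on a doubly periodic Cartesian grid by
    FFT -- a toy analogue of the Poisson problem that the omega-equation
    poses over a spherical shell. rhs must have zero mean (solvability
    condition for the periodic problem)."""
    ny, nx = rhs.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                  # avoid division by zero; mean fixed below
    w_hat = np.fft.fft2(rhs) / (-k2)
    w_hat[0, 0] = 0.0               # pin the arbitrary constant (zero mean)
    return np.real(np.fft.ifft2(w_hat))
```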

  16. Impact of setup variability on incidental lung irradiation during tangential breast treatment

    International Nuclear Information System (INIS)

    Carter, D.C.; Marks, L.B.; Bentel, G.B.

    1995-01-01

    Purpose: 1) To determine the variability in treatment setup during a 5-week course of tangential breast treatment. 2) To assess the relationship between the height of the lung shadow at the central axis (Central Lung Distance: CLD) on the tangential port film and the percent of total lung volume included within the tangential fields (to verify the previously reported result from Bornstein, et al, IJROBP 18:181, 90). 3) To determine the impact of the variabilities in treatment setup on the volume of lung that is incidentally included within the radiation fields. Methods: 1) 172 port films of tangential breast/chest wall fields were reviewed from 20 patients who received tangential beam treatment for breast cancer. All patients were immobilized in customized hemibody foam cradles during simulation and treatment. The CLD (height of the lung shadow at the central axis) seen on each of the port films was compared to the corresponding simulator film (correcting for differences in magnification) as an assessment of setup variability. Both inter- and intrapatient differences were considered. 2) A three-dimensional dose calculation (reflecting lung density) was performed, and the percent of total lung volume within the field was compared to the CLD. 3) The three-dimensional dose calculation was repeated for selected patients with the location of the treatment beams modified to reflect typical setup variations, in order to assess the impact of this variability on the volume of lung irradiated. Results: 1) The CLD measured on the port films was within 3 mm of that prescribed on the simulator film in 43% (74/172) of the port films. The variation was 3-5 mm in 26%, 5-10 mm in 25% and >10 mm in 6%. The data are shown in Figure 1. 2) An excellent correlation was found between the height of the lung shadow and the percent of total lung volume seen within the radiation field (Figure 2), thus verifying the concept previously reported by Bornstein. 3) A 1 cm setup

  17. Tangential stretching rate (TSR) analysis of non premixed reactive flows

    KAUST Repository

    Valorani, Mauro

    2016-10-16

    We discuss how the tangential stretching rate (TSR) analysis, originally developed and tested for spatially homogeneous systems (batch reactors), is extended to spatially non-homogeneous systems. To illustrate the effectiveness of the TSR diagnostics, we study the ignition transient in a non-premixed, reaction–diffusion model in mixture fraction space, whose dependent variables are temperature and mixture composition. The reactive mixture considered is syngas/air. A detailed H2/CO mechanism with 12 species and 33 chemical reactions is employed. We discuss two cases: one involving only kinetics, as a model of front propagation purely driven by spontaneous ignition, and the other as a model of a deflagration wave involving kinetics/diffusion coupling. We explore different aspects of the system dynamics, such as the relative role of diffusion and kinetics, the evolution of the kinetic eigenvalues, and of the tangential stretching rates computed by accounting for the combined action of diffusion and kinetics as well as for kinetics only. We propose criteria based on the TSR concept which allow us to identify the most ignitable conditions and to discriminate between spontaneous ignition and deflagration fronts.

  18. Absolute Geostrophic Velocity Inverted from the Polar Science Center Hydrographic Climatology (PHC3.0) of the Arctic Ocean with the P-Vector Method (NCEI Accession 0156425)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset (called PHC-V) comprises 3D gridded climatological fields of absolute geostrophic velocity of the Arctic Ocean inverted from the Polar science center...

  19. Cardiac Dose From Tangential Breast Cancer Radiotherapy in the Year 2006

    International Nuclear Information System (INIS)

    Taylor, Carolyn W.; Povall, Julie M.; McGale, Paul; Nisbet, Andrew; Dodwell, David; Smith, Jonathan T.; Darby, Sarah C.

    2008-01-01

    Purpose: To quantify the radiation doses received by the heart and coronary arteries from contemporary tangential breast or chest wall radiotherapy. Methods and Materials: Fifty consecutive patients with left-sided breast cancer and 5 consecutive patients with right-sided breast cancer treated at a large United Kingdom radiotherapy center during the year 2006 were selected. All patients were irradiated with 6- or 8-MV tangential beams to the breast or chest wall. For each dose plan, dose-volume histograms for the heart and left anterior descending (LAD) coronary artery were calculated. For 5 of the left-sided and all 5 right-sided patients, dose-volume histograms for the right and circumflex coronary arteries were also calculated. Detailed spatial assessment of dose to the LAD coronary artery was performed for 3 left-sided patients. Results: For the 50 patients given left-sided irradiation, the average mean (SD) dose was 2.3 (0.7) Gy to the heart and 7.6 (4.5) Gy to the LAD coronary artery, with the distal LAD receiving the highest doses. The right and circumflex coronary arteries received approximately 2 Gy mean dose. Part of the heart received >20 Gy in 22 left-sided patients (44%). For the 5 patients given right-sided irradiation, average mean doses to all cardiac structures were in the range 1.2 to 2 Gy. Conclusions: Heart dose from left-tangential radiotherapy has decreased considerably over the past 40 years, but part of the heart still receives >20 Gy for approximately half of left-sided patients. Cardiac dose for right-sided patients was generally from scattered irradiation alone

  20. Meninges control tangential migration of hem-derived Cajal-Retzius cells via CXCL12/CXCR4 signaling.

    Science.gov (United States)

    Borrell, Víctor; Marín, Oscar

    2006-10-01

    Cajal-Retzius cells are critical in the development of the cerebral cortex, but little is known about the mechanisms controlling their development. Three focal sources of Cajal-Retzius cells have been identified in mice (the cortical hem, the ventral pallium and the septum), from which they migrate tangentially to populate the cortical surface. Using a variety of tissue culture assays and in vivo manipulations, we demonstrate that the tangential migration of cortical hem-derived Cajal-Retzius cells is controlled by the meninges. We show that the meningeal membranes are a necessary and sufficient substrate for the tangential migration of Cajal-Retzius cells. We also show that the chemokine CXCL12 secreted by the meninges enhances the dispersion of Cajal-Retzius cells along the cortical surface, while retaining them within the marginal zone in a CXCR4-dependent manner. Thus, the meningeal membranes are fundamental to the development of Cajal-Retzius cells and, hence, to the normal development of the cerebral cortex.

  1. Long-term variabilities of meridional geostrophic volume transport in the North Pacific Ocean

    Science.gov (United States)

    Zhou, H.; Yuan, D.; Dewar, W. K.

    2016-02-01

    The meridional geostrophic volume transport (MGVT) of the ocean plays a very important role in the climatic water mass and heat balance because of the ocean's large heat capacity, which enables it to store the large amount of radiation received in summer and to release it in winter. A better understanding of the role of the oceans in climate variability is essential to assess the likely range of future climate fluctuations. In the last century the North Pacific Ocean experienced considerable climate variability, especially on the decadal time scale. Some studies have shown that the North Pacific Ocean is the origin of North Pacific multidecadal variability (Latif and Barnett, 1994; Barnett et al., 1999). These fluctuations were associated with large anomalies in sea level, temperature, storminess and rainfall; the heat transport and other extremes are changing as well. If the MGVT of the ocean is well determined, it can be used as a test of the validity of numerical global climate models. In this paper, we investigate the long-term variability of the MGVT in the North Pacific Ocean based on 55 years of global ocean heat and salt content data (Levitus et al., 2012). Very clear inter-decadal variations can be seen in the tropical, subtropical and subpolar regions of the North Pacific Ocean. There are very consistent variations between the MGVT anomalies and the Interdecadal Pacific Oscillation (IPO) index in the tropical gyre, with the cold phase of the IPO corresponding to negative MGVT anomalies and the warm phase to positive MGVT anomalies. The subtropical gyre shows more complex variations, and the subpolar gyre shows a negative MGVT anomaly before the late 1970s and a positive anomaly after that time. The geostrophic velocities of the North Pacific Ocean show significantly different anomalies during the two IPO cold phases of 1955-1976 and 1999 to present, which suggests different mechanisms for the two cold phases. The long-term variations of the Sverdrup transport compare well
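
    The dynamical core of such an inversion is the thermal-wind relation, which yields the geostrophic shear from the hydrographic density field; a reference velocity (classically a "level of no motion", or the P-vector determination used in related datasets) is still needed for absolute transports. A minimal sketch of the shear step only, with assumed grid conventions and names:

```python
import numpy as np

def thermal_wind_shear(rho, lat_deg, dy, g=9.81, rho0=1025.0):
    """Vertical shear of the zonal geostrophic current from a gridded
    density field rho(z, y) [kg/m^3] via thermal wind:
        du/dz = (g / (f * rho0)) * d(rho)/dy,
    with dy the meridional grid spacing [m]. Integrating in z from a
    reference level turns the shear into velocities."""
    f = 2.0 * 7.2921e-5 * np.sin(np.deg2rad(lat_deg))  # Coriolis parameter
    drho_dy = np.gradient(rho, dy, axis=1)             # meridional gradient
    return g * drho_dy / (f * rho0)
```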

  2. Mirror System for Collecting Thomson-Scattered Light in a Tangential Direction

    NARCIS (Netherlands)

    Barth, C. J.; Grobben, B. J. J.; Verhaag, G. C. H. M.

    1994-01-01

    We describe an optical system for collecting Thomson-scattered light in the tangential direction of a tokamak. The key part of the optics is a set of mirrors arranged as a Venetian blind. This system makes it possible to look around the corner of the tokamak vessel. Design considerations and test

  3. Automated Planning of Tangential Breast Intensity-Modulated Radiotherapy Using Heuristic Optimization

    International Nuclear Information System (INIS)

    Purdie, Thomas G.; Dinniwell, Robert E.; Letourneau, Daniel; Hill, Christine; Sharpe, Michael B.

    2011-01-01

    Purpose: To present an automated technique for two-field tangential breast intensity-modulated radiotherapy (IMRT) treatment planning. Method and Materials: A total of 158 patients with Stage 0, I, and II breast cancer treated using whole-breast IMRT were retrospectively replanned using automated treatment planning tools. The tools developed are integrated into the existing clinical treatment planning system (Pinnacle³) and are designed to perform the manual volume delineation, beam placement, and IMRT treatment planning steps carried out by the treatment planning radiation therapist. The automated algorithm, using only the radio-opaque markers placed at CT simulation as inputs, optimizes the tangential beam parameters to geometrically minimize the amount of lung and heart treated while covering the whole-breast volume. The IMRT parameters are optimized according to the automatically delineated whole-breast volume. Results: The mean time to generate a complete treatment plan was 6 min 50 s ± 1 min 12 s. For the automated plans, 157 of 158 plans (99%) were deemed clinically acceptable, and 138 of 158 plans (87%) were deemed clinically improved or equal to the corresponding clinical plan when reviewed in a randomized, double-blinded study by one experienced breast radiation oncologist. In addition, the automated plans were overall dosimetrically equivalent to the clinical plans when scored for target coverage and lung and heart doses. Conclusion: We have developed robust and efficient automated tools for fully inverse-planned tangential breast IMRT that can be readily integrated into clinical practice. The tools produce clinically acceptable plans using only the common anatomic landmarks from the CT simulation process as input. We anticipate the tools will improve patient access to high-quality IMRT treatment by simplifying the planning process and will reduce the effort and cost of incorporating more advanced planning into clinical practice.

  4. Tandem collimators for the JET tangential gamma-ray spectrometer

    International Nuclear Information System (INIS)

    Soare, Sorin; Balshaw, Nick; Blanchard, Patrick; Craciunescu, Teddy; Croft, David; Curuia, Marian; Edlington, Trevor; Kiptily, Vasily; Murari, Andrea; Prior, Phil; Sanders, Steven; Syme, Brian; Zoita, Vasile

    2011-01-01

    The tangential gamma-ray spectrometer (TGRS) of the JET tokamak fusion facility is an important diagnostic for investigating the fast-particle evolution. A well-defined field of view is essential for the proper operation of the TGRS diagnostic, and it is determined by a rather complex system of collimators and shields for both neutron and gamma radiation. A conceptual design for this system has been carried out, with the main design target set to maximize the signal-to-background ratio at the spectrometer detector, the ratio being defined in terms of the plasma-emitted gamma radiation and the gamma-ray background. As a first phase of the TGRS diagnostics upgrade, a set of two tandem collimators has been designed with the aim of providing a quasi-tangential field of view through the JET tokamak plasmas. A modular design of the tandem system has been developed in order to allow the construction of different configurations for deuterium and deuterium-tritium discharges. The internal structure of the collimators consists of nuclear-grade lead and high-density polyethylene slabs arranged in an optimized pattern. The performance of a simplified geometry of the tandem collimator configuration has been evaluated by neutron and photon transport calculations, and the numerical results show that the design parameters can be attained.

  5. Spiral Galaxy Central Bulge Tangential Speed of Revolution Curves

    Science.gov (United States)

    Taff, Laurence

    2013-03-01

    The objective was, for the first time in a century, to scientifically analyze the ``rotation curves'' (sic) of the central bulges of scores of spiral galaxies. I commenced with a methodological, rational, geometrical, arithmetic, and statistical examination (none of them carried through before) of the radial velocity data. The requirement for such a thorough treatment is the paucity of data typically available for the central bulge: fewer than 10 observations and frequently only five. The most must be made of these. A consequence of this logical handling is the discovery of a unique model for the central bulge volume mass density resting on the positive-slope, linear rise of its tangential speed of revolution curve, and hence, for the first time, a reliable mass estimate. The deduction comes from a known physics-based, mathematically valid derivation (not assertion). It rests on the full (not partial) equations of motion plus Poisson's equation. Following that is a prediction for the gravitational potential energy and thence the gravitational force. From this comes a forecast for the tangential speed of revolution curve. It was analyzed in a fashion identical to that of the data, thereby closing the circle and demonstrating internal self-consistency. This is a hallmark of a scientific-method-informed approach to an experimental problem. Multiple plots of the relevant quantities and measures of goodness of fit will be shown.
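
    The key dynamical implication is standard: a linearly rising tangential speed curve, v(r) = ωr, is solid-body rotation, which under Poisson's equation corresponds to a uniform-density sphere, so the enclosed mass follows from M(R) = v(R)²R/G and the density from ρ = 3ω²/(4πG). A worked example with assumed numbers (not values from the paper):

```python
import numpy as np

G = 4.301e-6        # gravitational constant [kpc * (km/s)^2 / M_sun]
v_edge = 200.0      # assumed tangential speed at the bulge edge [km/s]
R_bulge = 1.0       # assumed bulge radius [kpc]

M = v_edge**2 * R_bulge / G                   # enclosed mass
omega = v_edge / R_bulge                      # solid-body angular rate [km/s/kpc]
rho = 3.0 * omega**2 / (4.0 * np.pi * G)      # uniform density [M_sun/kpc^3]
print(f"bulge mass ~ {M:.2e} M_sun, mean density ~ {rho:.2e} M_sun/kpc^3")
# -> roughly 9.3e9 M_sun for these illustrative inputs
```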

  6. Meningeal defects alter the tangential migration of cortical interneurons in Foxc1hith/hith mice

    Directory of Open Access Journals (Sweden)

    Zarbalis Konstantinos

    2012-01-01

    Background: Tangential migration presents the primary mode of migration of cortical interneurons translocating into the cerebral cortex from subpallial domains. This migration takes place in multiple streams, with the most superficial one located in the cortical marginal zone. While a number of forebrain-expressed molecules regulating this process have emerged, it remains unclear to what extent structures outside the brain, like the forebrain meninges, are involved. Results: We studied a unique Foxc1 hypomorph mouse model (Foxc1hith/hith) with meningeal defects and impaired tangential migration of cortical interneurons. We identified a territorial correlation between meningeal defects and disruption of interneuron migration along the adjacent marginal zone in these animals, suggesting that impaired meningeal integrity might be the primary cause of the observed migration defects. Moreover, we postulate that the meningeal factor regulating tangential migration that is affected in homozygous mutants is the chemokine Cxcl12. In addition, by using chromatin immunoprecipitation analysis, we provide evidence that the Cxcl12 gene is a direct transcriptional target of Foxc1 in the meninges. Further, we observe migration defects of a lesser degree in Cajal-Retzius cells migrating within the cortical marginal zone, indicating a less important role for Cxcl12 in their migration. Finally, the developmental migration defects observed in Foxc1hith/hith mutants do not lead to obvious differences in interneuron distribution in the adult when compared to control animals. Conclusions: Our results suggest a critical role for the forebrain meninges in promoting the tangential migration of cortical interneurons along the cortical marginal zone during development, and for Cxcl12 as the factor responsible for this property.

  7. Shielding of the contralateral breast during tangential irradiation.

    Science.gov (United States)

    Goffman, Thomas E; Miller, Michael; Laronga, Christine; Oliver, Shelly; Wong, Ping

    2004-08-01

    The purpose of this study was to investigate both optimal and practical contralateral breast shielding during tangential irradiation in young patients. A shaped sheet of lead of variable thickness was tested on a phantom with rubber breasts, and an optimized shield was created. Testing on 18 consecutive patients aged 50 years or younger showed that shielding consistently reduced the contralateral breast dose by at least half, with a small additional reduction after removal of the medial wedge. For younger patients, in whom radiation exposure is of considerable concern, a simple shield of 2 mm lead thickness proved practical and effective.

  8. A Single Mode Study of a Quasi-Geostrophic Convection-Driven Dynamo Model

    Science.gov (United States)

    Plumley, M.; Calkins, M. A.; Julien, K. A.; Tobias, S.

    2017-12-01

    Planetary magnetic fields are thought to be the product of hydromagnetic dynamo action. For Earth, this process occurs within the convecting, turbulent and rapidly rotating outer core, where the dynamics are characterized by low Rossby, low magnetic Prandtl and high Rayleigh numbers. Progress in studying dynamos has been limited by current computing capabilities and the difficulties in replicating the extreme values that define this setting. Asymptotic models that embrace these extreme parameter values and enforce the dominant balance of geostrophy provide an option for the study of convective flows with actual relevance to geophysics. The quasi-geostrophic dynamo model (QGDM) is a multiscale, fully-nonlinear Cartesian dynamo model that is valid in the asymptotic limit of low Rossby number. We investigate the QGDM using a simplified class of solutions that consist of a single horizontal wavenumber which enforces a horizontal structure on the solutions. This single mode study is used to explore multiscale time stepping techniques and analyze the influence of the magnetic field on convection.

  9. Rationale and Application of Tangential Scanning to Industrial Inspection of Hardwood Logs

    Science.gov (United States)

    Nand K. Gupta; Daniel L. Schmoldt; Bruce Isaacson

    1998-01-01

    Industrial computed tomography (CT) inspection of hardwood logs has some unique requirements not found in other CT applications. Sawmill operations demand that large volumes of wood be scanned quickly at high spatial resolution for extended duty cycles. Current CT scanning geometries and commercial systems have both technical and economic limitations. Tangential...

  10. Vastus Medialis advancement: clinical results and correlation with tangential X-rays of the patellofemoral joint

    International Nuclear Information System (INIS)

    O'Beirne, J.; O'Connell, R.J.; White, M.

    1986-01-01

    Thirteen patients who had recurrent dislocation of the patella treated by vastus medialis advancement were reviewed, and tangential X-rays of the patellofemoral joint were taken at the time of review. Clinically the results were excellent or good in ten (77%). However, the X-ray appearances were similar to what would be expected in a group of patients with untreated recurrent dislocation, probably because the corrective action of the vastus medialis did not apply with the quadriceps relaxed for X-ray. We conclude that vastus medialis advancement is a successful operation for recurrent patellar dislocation but that tangential X-rays of the patellofemoral joint are not an indicator of the outcome of surgery. (author)

  11. Radial and tangential gravity rates from GRACE in areas of glacial isostatic adjustment

    Science.gov (United States)

    van der Wal, Wouter; Kurtenbach, Enrico; Kusche, Jürgen; Vermeersen, Bert

    2011-11-01

    In areas dominated by Glacial Isostatic Adjustment (GIA), the free-air gravity anomaly rate can be converted to uplift rate to good approximation by using a simple spectral relation. We provide quantitative comparisons between gravity rates derived from monthly gravity field solutions (GFZ Potsdam, CSR Texas, IGG Bonn) from the Gravity Recovery and Climate Experiment (GRACE) satellite mission with uplift rates measured by GPS in these areas. The band-limited gravity data from the GRACE satellite mission can be brought to very good agreement with the point data from GPS by using scaling factors derived from a GIA model (the root-mean-square of differences is 0.55 mm yr^-1 for a maximum uplift rate signal of 10 mm yr^-1). The root-mean-square of the differences between GRACE derived uplift rates and GPS derived uplift rates decreases with increasing GRACE time period to a level below the uncertainty that is expected from GRACE observations, GPS measurements and the conversion from gravity rate to uplift rate. With the current length of time-series (more than 8 yr), applying filters and a hydrology correction to the GRACE data does not reduce the root-mean-square of differences significantly. The smallest root-mean-square was obtained with the GFZ solution in Fennoscandia and with the CSR solution in North America. With radial gravity rates in excellent agreement with GPS uplift rates, more information on the GIA process can be extracted from GRACE gravity field solutions in the form of tangential gravity rates, which are equivalent to a rate of change in the deflection of the vertical scaled by the magnitude of the gravity rate vector. Tangential gravity rates derived from GRACE point towards the centre of the previously glaciated area, and are largest in a location close to the centre of the former ice sheet. Forward modelling showed that present day tangential gravity rates have maximum sensitivity between the centre and edge of the former ice sheet, while radial gravity...
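
    The per-point conversion mentioned above is often approximated by a constant ratio between gravity-anomaly rate and uplift rate. A minimal sketch under that assumption follows; the ratio and all input numbers are hypothetical placeholders (a value near -0.15 microGal per mm of uplift is commonly quoted for GIA), not the paper's spectral conversion or data.

        import numpy as np

        # Assumed placeholder: gravity-rate-to-uplift-rate ratio for GIA
        # (microGal per mm); not the paper's exact spectral relation.
        RATIO_UGAL_PER_MM = -0.15

        def uplift_rate_from_gravity_rate(gdot_ugal_per_yr):
            """Convert free-air gravity-anomaly rate (uGal/yr) to uplift rate (mm/yr)."""
            return np.asarray(gdot_ugal_per_yr) / RATIO_UGAL_PER_MM

        grace_gdot = np.array([-1.2, -0.9, -0.3])  # hypothetical gravity rates (uGal/yr)
        gps_updot = np.array([8.3, 6.1, 1.8])      # hypothetical GPS uplift rates (mm/yr)
        diff = uplift_rate_from_gravity_rate(grace_gdot) - gps_updot
        print(f"RMS of differences: {np.sqrt(np.mean(diff ** 2)):.2f} mm/yr")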

  12. Simulation study on vertically distributed multi-channel tangential interferometry for KSTAR

    International Nuclear Information System (INIS)

    Nam, Y U; Juhn, J W

    2012-01-01

    Interferometry is a powerful and reliable diagnostic which measures line-integrated electron density. Since this technique only measures a value averaged over the whole probing line, a multi-channel scheme is used to analyze the spatial distribution and variation of the electron density. Typical setups for multi-channel measurement are schemes of radially distributed vertical lines, vertically distributed horizontal lines, and horizontally distributed tangential lines. In the Korea Superconducting Tokamak Advanced Research device, a vertically distributed multi-channel tangential interferometry is planned instead of the above typical schemes, due to the limitations of the complex in-vessel geometry and the narrow diagnostic port through the cryostat. A total of five channels will be placed vertically, symmetric about the mid-plane. One of the characteristic features of the vertically distributed channels is that each channel views a different poloidal angle, while horizontally distributed channels view different toroidal angles. This scheme can also be used to investigate the up-down asymmetry and the vertical oscillation of the plasma. A simulation has been performed, and the results are discussed in this paper to verify the feasibility and estimated effectiveness of the scheme.
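
    Each channel of such a system records a line integral of electron density along its chord. The sketch below implements that forward model for vertically stacked tangential chords; the circular flux-surface geometry, parabolic profile and all dimensions are hypothetical illustrations, not KSTAR design values.

        import numpy as np

        R0, a, n0 = 1.8, 0.5, 3e19   # hypothetical major/minor radius (m), core density (m^-3)

        def ne(R, Z):
            """Assumed parabolic density profile on circular flux surfaces."""
            rho2 = ((R - R0) ** 2 + Z ** 2) / a ** 2
            return np.where(rho2 < 1.0, n0 * (1.0 - rho2), 0.0)

        def line_integrated_density(z_chord, r_tan=1.6, n_steps=2000):
            """Integrate ne along a horizontal chord tangent to radius r_tan at height z_chord."""
            s = np.linspace(-2.0, 2.0, n_steps)   # path coordinate along the chord (m)
            R = np.sqrt(r_tan ** 2 + s ** 2)      # distance from the machine axis
            return np.trapz(ne(R, z_chord), s)    # line-integrated density (m^-2)

        for z in (-0.3, -0.15, 0.0, 0.15, 0.3):   # five vertically stacked chords
            print(f"z = {z:+.2f} m : nL = {line_integrated_density(z):.3e} m^-2")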

  13. Validation and Application of Computed Radiography (CR) Tangential Technique for Wall Thickness Measurement of 10 Inch Carbon Steel Pipe

    International Nuclear Information System (INIS)

    Norhazleena Azaman; Khairul Anuar Mohd Salleh; Amry Amin Abas; Arshad Yassin; Sukhri Ahmad

    2016-01-01

    The oil and gas industry requires Non Destructive Testing (NDT) to ensure that each component, in-service and critical, is fit-for-purpose. Pipes that are used to transfer oil or gas are amongst the critical components that need to be well maintained and inspected. Typical pipe discontinuities that may lead to unintended incidents are erosion, corrosion, dents, welding defects, etc. Wall thickness assessment with Radiography Testing (RT) is normally used to inspect such discontinuities and can be performed with two approaches: (a) the centre-line beam tangential technique, and (b) the offset-from-centre pipe tangential technique. The latter was the method of choice for this work because of the pipe dimension and the limited radiation safe distance at site. Two successful validation approaches (simulation and experimental) were performed to determine the likelihood of success before the actual RT work with the tangential technique was carried out. The pipe was a 10 inch diameter, in-service, wrapped carbon steel pipe. A 9 Ci Ir-192 source and a white Imaging Plate (IP) were used as the gamma radiation source and to record the radiographic image. The results of this work suggest that RT with the tangential technique for a 10 inch wrapped in-service carbon steel pipe can be successfully performed. (author)
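
    In the offset tangential technique the wall image on the plate is magnified by the projection geometry, so the thickness read off the image must be scaled back by the magnification factor M = SFD/SOD. A minimal sketch of that correction follows; the distances below are illustrative placeholders, not the setup used in this validation.

        # Hedged sketch of the magnification correction in tangential radiography.
        def true_wall_thickness(measured_mm, sfd_mm, sod_mm):
            """Scale the thickness measured on the imaging plate back to the object plane.

            sfd_mm: source-to-film (imaging plate) distance
            sod_mm: source-to-object distance at the tangent point
            """
            return measured_mm / (sfd_mm / sod_mm)

        # Illustrative placeholder values:
        print(f"{true_wall_thickness(measured_mm=11.0, sfd_mm=900.0, sod_mm=773.0):.1f} mm")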

  14. Investigation of the noise effect on tomographic reconstructions for a tangentially viewing vacuum ultraviolet imaging diagnostic

    International Nuclear Information System (INIS)

    Ming, Tingfeng; Ohdachi, Satoshi; Suzuki, Yasuhiro

    2011-01-01

    Tomographic reconstruction for a tangentially viewing two-dimensional (2D) imaging system is studied. A method to calculate the geometry matrix in 2D tomography is introduced. An algorithm based on a Phillips-Tikhonov (P-T) type regularization method is investigated, and numerical tests using the P-T method are conducted with both tokamak and Heliotron configurations. The numerical tests show that the P-T method is not sensitive to the added noise levels and the emission profiles with higher mode numbers can be reconstructed with adequate resolution. The results indicate that this method is suitable for 2D tomographic reconstruction for a tangentially viewing vacuum ultraviolet telescope system. (author)
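
    Phillips-Tikhonov regularization stabilizes the ill-posed inversion of line-integrated data y = G x by penalizing roughness of the reconstructed emission profile x. A minimal numpy sketch follows, assuming a generic geometry matrix and a second-difference roughness operator (a common choice; the paper's exact operator and choice of regularization parameter may differ).

        import numpy as np

        def phillips_tikhonov(G, y, alpha):
            """Solve min ||G x - y||^2 + alpha ||L x||^2, L = 2nd-difference operator."""
            n = G.shape[1]
            L = np.zeros((n - 2, n))
            for i in range(n - 2):            # discrete second derivative
                L[i, i:i + 3] = (1.0, -2.0, 1.0)
            A = G.T @ G + alpha * (L.T @ L)   # regularized normal equations
            return np.linalg.solve(A, G.T @ y)

        rng = np.random.default_rng(0)
        n, m = 50, 30
        x_true = np.exp(-np.linspace(-2, 2, n) ** 2)     # smooth emission profile
        G = rng.uniform(0, 1, (m, n))                    # hypothetical geometry matrix
        y = G @ x_true + 0.01 * rng.standard_normal(m)   # noisy line integrals
        x_rec = phillips_tikhonov(G, y, alpha=1e-2)
        print("max |error| =", np.abs(x_rec - x_true).max())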

  15. Impact of setup variability on incidental lung irradiation during tangential breast treatment

    International Nuclear Information System (INIS)

    Carter, Dennis L.; Marks, Lawrence B.; Bentel, Gunilla C.

    1997-01-01

    Purpose: This study aimed to determine the variability in treatment setup during a 5-week course of tangential breast treatment for patients immobilized in a customized hemibody cradle, to assess the relationship between the height of the lung shadow on the tangential port film and the percentage of lung volume irradiated, and to estimate the impact of setup variabilities on irradiated lung volume. Methods: One hundred seventy-two port films were reviewed from 20 patients who received tangential beam treatment for breast cancer. The height of the lung shadow at the central axis (CLD) on each port film was compared to the corresponding simulator film as an assessment of setup variability. A three-dimensional dose calculation was performed, and the percentage of total lung volume within the field was correlated with the CLD. The three-dimensional dose calculation was repeated for selected patients with the location of the treatment beams modified to reflect typical setup variations. Results: The CLD measured on the port films was within 3 mm of that prescribed on the simulator film in 43% (74 of 172) of the port films. The variation was 3-5 mm in 26%, 5-10 mm in 25%, and >10 mm in 6%. The height of the lung shadow correlated with the percentage of lung volume included in the radiation field (r^2 = 0.6). Typical variations in treatment setup resulted in ≤5% fluctuation in the absolute volume of ipsilateral lung irradiated. Conclusion: The current immobilization system used in our clinic provides clinically acceptable reproducibility of patient setup. The height of the lung shadow is reasonably well correlated with the percentage of irradiated lung volume. During a typical 5-week course of radiotherapy, the ipsilateral irradiated lung volume fluctuates by <5%.

  16. Contralateral breast doses measured by film dosimetry: tangential techniques and an optimized IMRT technique

    International Nuclear Information System (INIS)

    Saur, S; Frengen, J; Fjellsboe, L M B; Lindmo, T

    2009-01-01

    The contralateral breast (CLB) doses for three tangential techniques were characterized by using a female thorax phantom and GafChromic EBT film. Dose calculations by the pencil beam and collapsed cone algorithms were included for comparison. The film dosimetry reveals a highly inhomogeneous dose distribution within the CLB, and skin doses due to the medial fields that are several times higher than the interior dose. These phenomena are not correctly reproduced by the calculation algorithms. All tangential techniques were found to give a mean CLB dose of approximately 0.5 Gy. All wedged fields resulted in higher CLB doses than the corresponding open fields, and the lateral open fields resulted in higher CLB doses than the medial open fields. More than a twofold increase in the mean CLB dose from the medial open field was observed for a 90 deg. change of the collimator orientation. Replacing the physical wedge with a virtual wedge reduced the mean dose to the CLB by 35% and 16% for the medial and lateral fields, respectively. Lead shielding reduced the skin dose for a tangential technique by approximately 50%, but the mean CLB dose was only reduced by approximately 11%. Finally, a technique based on open medial fields in combination with several IMRT fields is proposed as a technique for minimizing the CLB dose. With and without lead shielding, the mean CLB dose using this technique was found to be 0.20 and 0.27 Gy, respectively.

  17. Tangential neutral-beam-driven instabilities in the princeton beta experiment

    OpenAIRE

    Heidbrink, WW; Bol, K; Buchenauer, D; Fonck, R; Gammel, G; Ida, K; Kaita, R; Kaye, S; Kugel, H; LeBlanc, B; Morris, W; Okabayashi, M; Powell, E; Sesnic, S; Takahashi, H

    1986-01-01

    During tangential neutral-beam injection into the PBX tokamak, bursts of two types of instabilities are observed. One instability occurs in the frequency range 120-210 kHz and the other oscillates predominantly near the frequency of bulk plasma rotation (20-30 kHz). Both instabilities correlate with drops in neutron emission and bursts in charge-exchange neutral flux, indicating that beam ions are removed from the center of the plasma by the instabilities. The central losses are comparable to...

  18. Assimilation of time-averaged observations in a quasi-geostrophic atmospheric jet model

    Energy Technology Data Exchange (ETDEWEB)

    Huntley, Helga S. [University of Washington, Department of Applied Mathematics, Seattle, WA (United States); University of Delaware, School of Marine Science and Policy, Newark, DE (United States); Hakim, Gregory J. [University of Washington, Department of Atmospheric Sciences, Seattle, WA (United States)

    2010-11-15

    The problem of reconstructing past climates from a sparse network of noisy time-averaged observations is considered with a novel ensemble Kalman filter approach. Results for a sparse network of 100 idealized observations for a quasi-geostrophic model of a jet interacting with a mountain reveal that, for a wide range of observation averaging times, analysis errors are reduced by about 50% relative to the control case without assimilation. Results are robust to changes to observational error, the number of observations, and an imperfect model. Specifically, analysis errors are reduced relative to the control case for observations having errors up to three times the climatological variance for a fixed 100-station network, and for networks consisting of ten or more stations when observational errors are fixed at one-third the climatological variance. In the limit of small numbers of observations, station location becomes critically important, motivating an optimally determined network. A network of fifteen optimally determined observations reduces analysis errors by 30% relative to the control, as compared to 50% for a randomly chosen network of 100 observations. (orig.)
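
    The analysis step of such experiments is an ensemble Kalman filter update in which the observation operator returns time-averaged quantities. A minimal perturbed-observation EnKF sketch follows; dimensions, operators and noise levels are illustrative placeholders, not those of the study.

        import numpy as np

        def enkf_update(X, y_obs, H, r_var, rng):
            """Perturbed-observation EnKF update.

            X: (n_state, n_ens) ensemble; H: (n_obs, n_state) linear observation
            operator (here it would pick out time-averaged state components).
            """
            n_obs, n_ens = H.shape[0], X.shape[1]
            Xp = X - X.mean(axis=1, keepdims=True)       # ensemble perturbations
            P = Xp @ Xp.T / (n_ens - 1)                  # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r_var * np.eye(n_obs))
            Y = y_obs[:, None] + np.sqrt(r_var) * rng.standard_normal((n_obs, n_ens))
            return X + K @ (Y - H @ X)                   # updated (analysis) ensemble

        rng = np.random.default_rng(1)
        n_state, n_ens, n_obs = 40, 20, 10               # illustrative sizes
        X = rng.standard_normal((n_state, n_ens))
        H = np.eye(n_obs, n_state)                       # observe first 10 components
        Xa = enkf_update(X, rng.standard_normal(n_obs), H, r_var=0.3, rng=rng)
        print("analysis spread:", Xa.std(axis=1).mean())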

  19. Asymmetry of radial and symmetry of tangential neuronal migration pathways in developing human fetal brains

    Directory of Open Access Journals (Sweden)

    Yuta eMiyazaki

    2016-01-01

    The radial and tangential neural migration pathways are two major neuronal migration streams in humans that are critical during corticogenesis. Corticogenesis is a complex process of neuronal proliferation that is followed by neuronal migration and the formation of axonal connections. Existing histological assessments of these two neuronal migration pathways have limitations inherent to microscopic studies and are confined to small anatomic regions of interest. Thus, little evidence is available about their three-dimensional fiber pathways and development throughout the entire brain. In this study, we imaged and analyzed radial and tangential migration pathways in the whole human brain using high-angular resolution diffusion MR imaging (HARDI) tractography. We imaged ten fixed, postmortem fetal (17 gestational weeks (GW), 18 GW, 19 GW, three 20 GW, three 21 GW and 22 GW) and eight in vivo newborn (two 30 GW, 34 GW, 35 GW and four 40 GW) brains with no neurological/pathological conditions. We statistically compared the volume of the left and right radial and tangential migration pathways, and the volume of the radial migration pathways of the anterior and posterior regions of the brain. In specimens 22 GW or younger, the volume of radial migration pathways of the left hemisphere was significantly larger than that of the right hemisphere. The volume of posterior radial migration pathways was also larger when compared to the anterior pathways in specimens 22 GW or younger. In contrast, no significant differences were observed in the radial migration pathways of brains older than 22 GW. Moreover, our study did not identify any significant differences in volumetric laterality in the tangential migration pathways. These results suggest that these two neuronal migration pathways develop and regress differently, and radial neuronal migration varies regionally based on hemispheric and anterior-posterior laterality, potentially explaining regional...

  20. Fuelling effect of tangential compact toroid injection in STOR-M Tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Onchi, T.; Liu, Y., E-mail: tao668@mail.usask.ca [Univ. of Saskatchewan, Dept. of Physics and Engineering Physics, Saskatoon, Saskatchewan (Canada); Dreval, M. [Univ. of Saskatchewan, Dept. of Physics and Engineering Physics, Saskatoon, Saskatchewan (Canada); Inst. of Plasma Physics NSC KIPT, Kharkov (Ukraine); McColl, D. [Univ. of Saskatchewan, Dept. of Physics and Engineering Physics, Saskatoon, Saskatchewan (Canada); Asai, T. [Inst. of Plasma Physics NSC KIPT, Kharkov (Ukraine); Wolfe, S. [Nihon Univ., Dept. of Physics, Tokyo (Japan); Xiao, C.; Hirose, A. [Univ. of Saskatchewan, Saskatoon, Saskatchewan (Canada)

    2012-07-01

    Compact torus injection (CTI) is the only known candidate for directly fuelling the core of a tokamak fusion reactor. Compact torus (CT) injection into the STOR-M tokamak has induced improved confinement accompanied by an increase in the electron density, reduction in Hα emission, and suppression of the saw-tooth oscillations. The measured change in the toroidal flow velocity following tangential CTI has demonstrated momentum injection into the STOR-M plasma. (author)

  1. Drag reduction and thrust generation by tangential surface motion in flow past a cylinder

    Science.gov (United States)

    Mao, Xuerui; Pearson, Emily

    2018-03-01

    Sensitivity of drag to tangential surface motion is calculated in flow past a circular cylinder in both two- and three-dimensional conditions at Reynolds number Re ≤ 1000. The magnitude of the sensitivity is maximal in the region slightly upstream of the separation points, where the contour lines of spanwise vorticity are normal to the cylinder surface. A control to reduce drag can be obtained by (negatively) scaling the sensitivity. The high correlation of sensitivities of controlled and uncontrolled flow indicates that the scaled sensitivity is a good approximation of the nonlinear optimal control. It is validated through direct numerical simulations that the linear range of the steady control is much higher than that of the unsteady control, which synchronises the vortex shedding and induces lock-in effects. The steady control injects angular momentum into the separating boundary layer, stabilises the flow and increases the base pressure significantly. At Re = 100, when the maximum tangential motion reaches 50% of the free-stream velocity, the vortex shedding, boundary-layer separation and recirculation bubbles are eliminated and the drag is reduced by 32%. When the maximum tangential motion reaches 2.5 times the free-stream velocity, thrust is generated and the power savings ratio, defined as the ratio of the reduced drag power to the control input power, reaches 19.6. The mechanism of drag reduction is attributed to the change of the radial gradient of spanwise vorticity (∂ζ̂/∂r) and the subsequent accelerated pressure recovery from the uncontrolled separation points to the rear stagnation point.

  2. Radial and tangential friction in heavy ion strongly damped collisions

    International Nuclear Information System (INIS)

    Jain, A.K.; Sarma, N.

    1979-01-01

    Deeply inelastic heavy ion collisions have been successfully described in terms of a nucleon exchange mechanism between two nucleon clouds. This model has also predicted the large angular momentum that is induced in the colliding nuclei. However, computations were simplified in the earlier work by assuming that the friction was a perturbation on the elastic scattering trajectory. Results of a more rigorous calculation are reported here, together with the effect of the modified trajectory on the energy transfer, the induced angular momentum, and the ratio of the radial to the tangential friction coefficients. (auth.)

  3. Tangential neutral-beam--driven instabilities in the Princeton beta experiment

    International Nuclear Information System (INIS)

    Heidbrink, W.W.; Bol, K.; Buchenauer, D.

    1986-01-01

    During tangential neutral-beam injection into the PBX tokamak, bursts of two types of instabilities are observed. One instability occurs in the frequency range 120-210 kHz and the other oscillates predominantly near the frequency of bulk plasma rotation (20-30 kHz). Both instabilities correlate with drops in neutron emission and bursts in charge-exchange neutral flux, indicating that beam ions are removed from the center of the plasma by the instabilities. The central losses are comparable to the losses induced by the fishbone instability during perpendicular injection.

  4. DEMONSTRATION OF SORBENT INJECTION TECHNOLOGY ON A TANGENTIALLY COAL-FIRED UTILITY BOILER (YORKTOWN LIMB DEMONSTRATION)

    Science.gov (United States)

    The report summarizes activities conducted and results achieved in an EPA-sponsored program to demonstrate Limestone Injection Multistage Burner (LIMB) technology on a tangentially fired coal-burning utility boiler, Virginia Power's 180-MWe Yorktown Unit No. 2. This successfully d...

  5. Ammonia-methane combustion in tangential swirl burners for gas turbine power generation

    OpenAIRE

    Valera Medina, Agustin; Marsh, Richard; Runyon, Jon; Pugh, Daniel; Beasley, Paul; Hughes, Timothy Richard; Bowen, Philip John

    2017-01-01

    Ammonia has been proposed as a potential energy storage medium in the transition towards a low-carbon economy. This paper details experimental results and numerical calculations obtained to progress towards optimisation of fuel injection and fluidic stabilisation in swirl burners with ammonia as the primary fuel. A generic tangential swirl burner has been employed to determine flame stability and emissions produced at different equivalence ratios using ammonia–methane blends. Experiments were...

  6. Coherent structure in geostrophic flow under density stratification; Mippei seisoka ni aru chikoryu no soshiki kozo

    Energy Technology Data Exchange (ETDEWEB)

    Tsujimura, S.; Iida, O.; Nagano, Y. [Nagoya Institute of Technology, Nagoya (Japan)

    1998-10-25

    The coherent structure and relevant heat transport in geostrophic flows under various density stratifications have been studied by using both direct numerical simulation and rapid distortion theory. It is found that in a neutrally stratified flow under system rotation, the temperature fluctuations become very close to two-dimensional and their variation is very small in the direction parallel to the axis of rotation. Under stable stratification, the velocity and temperature fluctuations tend to oscillate with the Brunt-Vaisala frequency. Under unstable stratification, on the other hand, vortex columns are formed in the direction parallel to the axis of rotation. However, the generation of the elongated vortex columns cannot be predicted by the rapid distortion theory; the non-linear term is required to generate these characteristic vortex columns. 11 refs., 18 figs., 1 tab.

  7. An alternative to FASTSIM for tangential solution of the wheel-rail contact

    Science.gov (United States)

    Sichani, Matin Sh.; Enblom, Roger; Berg, Mats

    2016-06-01

    In most rail vehicle dynamics simulation packages, the tangential solution of the wheel-rail contact is obtained by means of Kalker's FASTSIM algorithm. While a 5-25% error is expected for creep force estimation, the errors in the shear stress distribution, needed for wheel-rail damage analysis, may rise above 30% due to the parabolic traction bound. Therefore, a novel algorithm named FaStrip is proposed as an alternative to FASTSIM. It is based on the strip theory, which extends the two-dimensional rolling contact solution to three-dimensional contacts. To form FaStrip, the original strip theory is amended to obtain accurate estimations for any contact ellipse size and is combined with a numerical algorithm to handle spin. The comparison between the two algorithms shows that using FaStrip improves the accuracy of the estimated shear stress distribution and the creep force estimation in all studied cases. In combined lateral creepage and spin cases, for instance, the error in force estimation reduces from 18% to less than 2%. The estimation of the slip velocities in the slip zone, needed for wear analysis, is also studied. Since FaStrip is as fast as FASTSIM, it can serve as an alternative for the tangential solution of the wheel-rail contact in simulation packages.

  8. Shattering world assumptions: A prospective view of the impact of adverse events on world assumptions.

    Science.gov (United States)

    Schuler, Eric R; Boals, Adriel

    2016-05-01

    Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Adult Learning Assumptions

    Science.gov (United States)

    Baskas, Richard S.

    2011-01-01

    The purpose of this study is to examine Knowles' theory of andragogy and his six assumptions of how adults learn while providing evidence to support two of his assumptions based on the theory of andragogy. As no single theory explains how adults learn, it can best be assumed that adults learn through the accumulation of formal and informal…

  10. Quasistatic Seismic Damage Indicators for RC Structures from Dissipating Energies in Tangential Subspaces

    Directory of Open Access Journals (Sweden)

    Wilfried B. Krätzig

    2014-01-01

    This paper applies recent research on structural damage description to earthquake-resistant design concepts. Based on the primary design aim of life safety, this work adopts the necessity of additional protection aims for property, installation, and equipment. This requires the definition of damage indicators which are able to quantify the arising structural damage. As in present design, it applies nonlinear quasistatic (pushover) concepts due to code provisions as simplified dynamic design tools. Substituting nonlinear time-history analyses in this way, seismic low-cycle fatigue of RC structures is approximated in a similar manner. The treatment is embedded in a finite element environment, and the tangential stiffness matrix K_T in tangential subspaces is then identified as the most general entry for structural damage information. Its spectrum of eigenvalues λ_i, or natural frequencies ω_i of the structure, serves to derive damage indicators D_i, applicable to quasistatic evaluation of seismic damage. Because det K_T = 0 denotes structural failure, such damage indicators range from the virgin situation D_i = 0 to failure D_i = 1 and thus correspond with FEMA proposals on performance-based seismic design. Finally, the developed concept is checked by reanalyses of two experimentally investigated RC frames.
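
    The abstract does not state the explicit indicator formula, so the sketch below assumes a common simple form: damage measured by the degradation of the tangential-stiffness eigenvalues relative to the undamaged (virgin) state, which gives D_i = 0 for the virgin structure and D_i -> 1 as det K_T -> 0.

        import numpy as np

        def damage_indicators(K_T, K_0):
            """Assumed simple form: D_i = 1 - lambda_i(K_T) / lambda_i(K_0)."""
            lam_t = np.sort(np.linalg.eigvalsh(K_T))   # current stiffness eigenvalues
            lam_0 = np.sort(np.linalg.eigvalsh(K_0))   # virgin-state eigenvalues
            return 1.0 - lam_t / lam_0

        K_0 = np.diag([4.0, 9.0, 16.0])     # illustrative virgin stiffness
        K_T = np.diag([2.0, 8.5, 15.0])     # illustrative degraded stiffness
        print(damage_indicators(K_T, K_0))  # -> [0.5, 0.0556, 0.0625]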

  11. Two-dimensional versus three-dimensional treatment planning of tangential breast irradiation

    International Nuclear Information System (INIS)

    Damen, E.M.F.; Bruinvis, I.A.D.; Mijnheer, B.J.

    1995-01-01

    Purpose: Full three-dimensional (3-D) treatment planning requires 3-D patient contours and density information, derived either from CT scanning or from other 3-D contouring methods. These contouring techniques are time consuming, and are often not available or cannot be used. Two-dimensional (2-D) treatment planning can be performed using only a few patient contours, made with much simpler techniques, in combination with simulator images for estimating the lung position. In order to investigate the need for full 3-D planning, we compared the performance of both a 2-D and a 3-D planning system in calculating absolute dose values and relative dose distributions in tangential breast irradiation. Methods: Two breast-shaped phantoms were used in this study. The first phantom consists of a polyethylene mould, filled with water and cork to mimic the lung. An ionization chamber can be inserted in the phantom at fixed positions. The second phantom is made of 25 transverse slices of polystyrene and cork, made with a computerized milling machine from CT information. In this phantom, films can be inserted in three sagittal planes. Both phantoms have been irradiated with two tangential 8 MV photon beams. The measured dose distribution has been compared with the dose distribution predicted by the two planning systems. Results: In the central plane, the 3-D planning system predicts the absolute dose with an accuracy of 0.5 - 4%. The dose at the isocentre of the beams agrees within 0.5% with the measured dose. The 2-D system predicts the dose with an accuracy of 0.9 - 3%. The dose calculated at the isocentre is 2.6% higher than the measured dose, because missing lateral scatter is not taken into account in this planning system. In off-axis planes, the calculated absolute dose agrees with the measured dose within 4% for the 2-D system and within 6% for the 3-D system. However, the relative dose distribution is predicted better by the 3-D planning system. Conclusions: This study

  12. Kinetic equilibrium for an asymmetric tangential layer with rotation of the magnetic field

    Science.gov (United States)

    Belmont, Gérard; Dorville, Nicolas; Aunai, Nicolas; Rezeau, Laurence

    2015-04-01

    Finding kinetic equilibria for tangential current layers is a key issue for modeling plasma phenomena such as magnetic reconnection instabilities, for which theoretical and numerical studies have to start from steady-state current layers. Until 2012, all theoretical models (starting with the most famous, the "Harris" model) relied on distribution functions built as mono-valued functions of the trajectory invariants. For a coplanar anti-symmetric magnetic field and in the absence of an electric field, these models were only able to describe symmetric variations of the plasma, thus precluding any modeling of "magnetopause-like" layers, which separate two plasmas of different densities and temperatures. Recently, the "BAS" model was presented (Belmont et al., 2012), in which multi-valued functions were taken into account. This new tool is made necessary each time the magnetic field reversal occurs on scales larger than the particle Larmor radii, and therefore guarantees a logical transition to the MHD modeling of large scales. The BAS model thus provides a new asymmetric equilibrium. It was validated in a hybrid simulation by Aunai et al. (2013), and more recently in a fully kinetic simulation as well. For this original equilibrium to be computed, the magnetic field had to stay coplanar inside the layer. We present here an important generalization, where the magnetic field rotates inside the layer (although restricted to a 180° rotation hitherto). The tangential layers so obtained are thus closer to those encountered at the real magnetopause. This will be necessary, in the future, for comparing directly the theoretical profiles with the experimental ones for the various physical parameters. As was done previously, the equilibrium is presently tested with a hybrid simulation. Belmont, G.; Aunai, N.; Smets, R., Kinetic equilibrium for an asymmetric tangential layer, Physics of Plasmas, Volume 19, Issue 2, pp. 022108, 2012. Aunai, N.; Belmont, G.; Smets, R., First...

  13. Stochastic flow modeling : Quasi-Geostrophy, Taylor state and torsional wave excitation

    DEFF Research Database (Denmark)

    Gillet, Nicolas; Jault, D.; Finlay, Chris

We reconstruct the core flow evolution over the period 1840-2010 under the quasi-geostrophic assumption, from the stochastic magnetic field model COV-OBS and its full model error covariance matrix. We make use of prior information on the flow temporal power spectrum compatible with that of observed geomagnetic series. Large length-scale flow features are naturally dominated by their equatorially symmetric component from about 1900 onwards when the symmetry constraint is relaxed. Equipartition of the kinetic energy between both symmetries coincides with the poor prediction of decadal length-of-day changes in the XIXth century. We interpret this as evidence for quasi-geostrophic rapid flow changes, and as the consequence of a too loose data constraint during the oldest period. We manage to retrieve rapid flow changes over the past 60 yr, and in particular modulated torsional waves correctly predicting interannual length-of-day changes.

  14. Do unreal assumptions pervert behaviour?

    DEFF Research Database (Denmark)

    Petersen, Verner C.

The purpose of this paper is to take a critical look at some of the assumptions and theories found in economics and discuss their implications for the models and the practices found in the management of business. The expectation is that the unrealistic assumptions of economics have become taken for granted and tacitly included into theories and models of management, guiding business and management to behave in a fashion that apparently makes these assumptions come "true", thus in fact making theories and models become self-fulfilling prophecies. The paper elucidates some of the basic assumptions underlying the theories found in economics: assumptions relating to the primacy of self-interest, to resourceful, evaluative, maximising models of man, to incentive systems and to agency theory. The major part of the paper then discusses how these assumptions and theories may pervert behaviour.

  15. On the extension of the wind profile over homogeneous terrain beyond the surface boundary layer

    DEFF Research Database (Denmark)

    Gryning, Sven-Erik; Batchvarova, Ekaterina; Brümmer, B.

    2007-01-01

In the surface layer the first length scale is based on Monin-Obukhov similarity. Above the surface layer the second length scale (L_MBL) becomes independent of height but not of stability, and at the top of the boundary layer the third length scale is assumed to be negligible. A simple model for the combined length scale that controls the wind profile and its stability dependence is formulated by inverse summation. Based on these assumptions the wind profile for the entire boundary layer is derived. A parameterization of L_MBL is formulated using the geostrophic drag law, which relates friction velocity and geostrophic wind. The empirical parameterization of the resistance law functions A and B in the geostrophic drag law is uncertain, making it impractical. Therefore an expression for the length scale, L_MBL, for applied use is suggested, based on measurements from the two sites.
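
    A minimal sketch of the inverse-summation idea follows, written for neutral conditions: the surface-layer scale grows with height, the middle scale is the constant L_MBL, and the upper scale vanishes at the boundary-layer top. The specific component forms and all numbers are assumptions for illustration; the paper's stability functions and L_MBL parameterization are not reproduced here.

        import numpy as np

        kappa, u_star = 0.4, 0.4              # von Karman constant; friction velocity (m/s), illustrative
        z0, L_MBL, zi = 0.05, 150.0, 1000.0   # roughness length, middle scale, BL height (m), assumed

        def combined_length_scale(z):
            """Inverse summation of three component length scales (neutral case)."""
            return 1.0 / (1.0 / z + 1.0 / L_MBL + 1.0 / (zi - z))

        def wind_speed(z):
            """Integrate du/dz = u*/(kappa * l(z)) from z0 to z (assumed neutral form)."""
            zs = np.linspace(z0, z, 2000)
            return np.trapz(u_star / (kappa * combined_length_scale(zs)), zs)

        for z in (10.0, 100.0, 500.0):
            print(f"z = {z:6.1f} m : U = {wind_speed(z):5.2f} m/s")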

  16. Convergence of Extreme Value Statistics in a Two-Layer Quasi-Geostrophic Atmospheric Model

    Directory of Open Access Journals (Sweden)

    Vera Melinda Gálfi

    2017-01-01

    We search for the signature of universal properties of extreme events, theoretically predicted for Axiom A flows, in a chaotic and high-dimensional dynamical system. We study the convergence of GEV (Generalized Extreme Value) and GP (Generalized Pareto) shape parameter estimates to the theoretical value, which is expressed in terms of the partial information dimensions of the attractor. We consider a two-layer quasi-geostrophic atmospheric model of the mid-latitudes, adopt two levels of forcing, and analyse the extremes of different types of physical observables (local energy, zonally averaged energy, and globally averaged energy). We find good agreement in the shape parameter estimates with the theory only in the case of more intense forcing, corresponding to a strong chaotic behaviour, and for some observables (the local energy at every latitude). Due to the limited (though very large) data size and to the presence of serial correlations, it is difficult to obtain robust statistics of extremes in the case of the other observables. In the case of weak forcing, which leads to weaker chaotic conditions with regime behaviour, we find, unsurprisingly, worse agreement with the theory developed for Axiom A flows.
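
    The GEV shape-parameter estimation used in such convergence studies can be reproduced in outline with scipy: extract block maxima from a long series and fit the GEV. The sketch below uses synthetic exponentially distributed data (for which the theoretical shape parameter is 0) as a stand-in for the model's energy observables.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(2)
        series = rng.exponential(size=2_000_000)         # synthetic stationary "observable"
        maxima = series.reshape(-1, 10_000).max(axis=1)  # block maxima, block size 10^4

        # scipy's genextreme uses shape c = -xi, with xi the usual GEV shape parameter.
        c, loc, scale = genextreme.fit(maxima)
        print(f"estimated GEV shape xi = {-c:.3f} (theory: 0 for exponential tails)")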

  17. Can simulation measurements be used to predict the irradiated lung volume in the tangential fields in patients treated for breast cancer?

    International Nuclear Information System (INIS)

    Bornstein, B.A.; Cheng, C.W.; Rhodes, L.M.; Rashid, H.; Stomper, P.C.; Siddon, R.L.; Harris, J.R.

    1990-01-01

    A simple method of estimating the amount of lung irradiated in patients with breast cancer would be of use in minimizing lung complications. To determine whether simple measurements taken at the time of simulation can be used to predict the lung volume in the radiation field, we performed CT scans as part of treatment planning in 40 cases undergoing radiotherapy for breast cancer. Parameters measured from simulator films included: (a) the perpendicular distance from the posterior tangential field edge to the posterior part of the anterior chest wall at the center of the field (CLD); (b) the maximum perpendicular distance from the posterior tangential field edge to the posterior part of the anterior chest wall (MLD); and (c) the length of lung (L) as measured at the posterior tangential field edge on the simulator film. CT scans of the chest were performed with the patient in the treatment position with 1 cm slice intervals, covering lung apex to base. The ipsilateral total lung area and the lung area included within the treatment port were calculated for each CT scan slice, multiplied by the slice thickness, and then integrated over all CT scan slices to give the volumes. The best predictor of the percent of ipsilateral lung volume treated by the tangential fields was the CLD. Employing linear regression analysis, a coefficient of determination r^2 = 0.799 was calculated between CLD and percent treated ipsilateral lung volume on CT scan. In comparison, the coefficients for the other parameters were r^2 = 0.784 for the MLD, r^2 = 0.071 for L, and r^2 = 0.690 for CLD x L. A CLD of 1.5 cm predicted that about 6% of the ipsilateral lung would be included in the tangential field, a CLD of 2.5 cm about 16%, and a CLD of 3.5 cm about 26% of the ipsilateral lung, with a mean 90% prediction interval of +/- 7.1% of ipsilateral lung volume.
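
    The predictive rule quoted above is essentially a straight line in CLD. A minimal sketch recovering it from the three (CLD, % lung volume) pairs reported in the abstract:

        import numpy as np

        cld = np.array([1.5, 2.5, 3.5])          # CLD (cm), from the abstract
        pct_lung = np.array([6.0, 16.0, 26.0])   # % ipsilateral lung volume, from the abstract

        slope, intercept = np.polyfit(cld, pct_lung, 1)
        print(f"% lung ~= {slope:.1f} * CLD {intercept:+.1f}")             # ~10 %/cm
        print(f"predicted at CLD = 3.0 cm: {slope * 3.0 + intercept:.1f} %")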

  18. Tangential Flow Filtration of Colloidal Silver Nanoparticles: A "Green" Laboratory Experiment for Chemistry and Engineering Students

    Science.gov (United States)

    Dorney, Kevin M.; Baker, Joshua D.; Edwards, Michelle L.; Kanel, Sushil R.; O'Malley, Matthew; Pavel Sizemore, Ioana E.

    2014-01-01

    Numerous nanoparticle (NP) fabrication methodologies employ "bottom-up" syntheses, which may result in heterogeneous mixtures of NPs or may require toxic capping agents to reduce NP polydispersity. Tangential flow filtration (TFF) is an alternative "green" technique for the purification, concentration, and size-selection of…

  19. Cascade ultrafiltering of 210Pb and 210Po in freshwater using a tangential flow filtering system

    International Nuclear Information System (INIS)

    Ohtsuka, Y.; Takaku, Y.; Hisamatsu, S.; Inaba, J.; Yamamoto, M.

    2006-01-01

    A rapid method was developed using ultrafilters with a tangential flow filtering system for molecular-size separation of naturally occurring 210Pb and 210Po in a freshwater sample. Generally, ultrafiltering of a large-volume water sample for measuring these nuclides is too time consuming to be practical. The tangential flow filtering system made the filtering time short enough to adopt for in-situ ultrafiltering of the large-volume sample. In this method, a 20 liter water sample was first passed through a 0.45 μm pore size membrane filter immediately after sample collection to obtain suspended particulate matter [>0.45 μm particulate fraction (PRT)]. Two ultrafilters (Millipore Pellicon 2) were used sequentially. The nuclides in the filtrate were separated into three fractions: high molecular mass (100 kDa-0.45 μm; HMM), low molecular mass (10 kDa-100 kDa; LMM) and ionic (<10 kDa). The method was applied to study 210Pb and 210Po in an oligotrophic lake, Lake Towada, located in the northern area of Japan. (author)

  20. Use of recent geoid models to estimate mean dynamic topography and geostrophic currents in South Atlantic and Brazil Malvinas confluence

    Directory of Open Access Journals (Sweden)

    Alexandre Bernardino Lopes

    2012-03-01

    The use of geoid models to estimate the Mean Dynamic Topography (MDT) was stimulated by the launching of the GRACE satellite system, since its models present unprecedented precision and space-time resolution. In the present study, besides the DNSC08 mean sea level model, the following geoid models were used with the objective of computing the MDTs: EGM96, EIGEN-5C and EGM2008. In the method adopted, geostrophic currents for the South Atlantic were computed based on the MDTs. It was found that the degree and order of the geoid models directly affect the determination of the MDT and the derived currents. The presence of noise in the MDT requires the use of efficient filtering techniques, such as the filter based on Singular Spectrum Analysis, which presents significant advantages over conventional filters. Geostrophic currents resulting from the geoid models were compared with the HYCOM hydrodynamic numerical model. In conclusion, the results show that the MDTs and respective geostrophic currents calculated with the EIGEN-5C and EGM2008 models are similar to the results of the numerical model, especially regarding the main large-scale features such as boundary currents and the retroflection at the Brazil-Malvinas Confluence.

  1. Comparison between intensity modulated radiotherapy (IMRT) and 3D tangential beams technique used in patients with early-stage breast cancer who received breast-conserving therapy

    International Nuclear Information System (INIS)

    Sas-Korczynska, B.; Kokoszka, A.; Korzeniowski, S.; Sladowska, A.; Rozwadowska-Bogusz, B.; Lesiak, J.; Dyczek, S.

    2010-01-01

    Background: The complications most often found in patients with breast cancer who received radiotherapy are cardiac and pulmonary function disorders and the development of second malignancies. Aim: To compare intensity modulated radiotherapy with the 3D tangential beams technique with respect to the dose distributions they generate in the target volume and critical organs in patients with early-stage breast cancer who received breast-conserving therapy. Materials and methods: A dosimetric analysis was performed to assess the three radiotherapy techniques used in each of 10 consecutive patients with early-stage breast cancer treated with breast-conserving therapy. Radiotherapy was planned with the use of all three techniques: 3D tangential beams with electron boost, IMRT with electron boost, and intensity modulated radiotherapy with simultaneous integrated boost. Results: The use of the IMRT techniques enables more homogeneous dose distribution in the target volume. The mean and median doses to the heart and lung were lower with the IMRT techniques in comparison to the 3D tangential beams technique. The range of mean dose to the heart amounted to 0.3-3.5 Gy for the IMRT techniques and 0.4-4.3 Gy for the tangential beams technique. The median dose to the lung on the irradiated side amounted to 4.9-5 Gy for the IMRT techniques and 5.6 Gy for the 3D tangential beams technique. Conclusion: The application of the IMRT techniques in radiotherapy of patients with early-stage breast cancer allows a more homogeneous dose distribution in the target volume to be obtained, while permitting a reduction of the dose to critical organs. (authors)

  2. Initial boundary-value problem for the spherically symmetric Einstein equations with fluids with tangential pressure.

    Science.gov (United States)

    Brito, Irene; Mena, Filipe C

    2017-08-01

    We prove that, for a given spherically symmetric fluid distribution with tangential pressure on an initial space-like hypersurface with a time-like boundary, there exists a unique, local in time solution to the Einstein equations in a neighbourhood of the boundary. As an application, we consider a particular elastic fluid interior matched to a vacuum exterior.

  3. Slideline verification for multilayer pressure vessel and piping analysis including tangential motion

    International Nuclear Information System (INIS)

    Van Gulick, L.A.

    1984-01-01

    Nonlinear finite element method (FEM) computer codes with slideline algorithm implementations should be useful for the analysis of prestressed multilayer pressure vessels and piping. This paper presents closed-form solutions, including the effects of tangential motion, useful for verifying slideline implementations for this purpose. The solutions describe stresses and displacements of a long internally pressurized elastic-plastic cylinder initially separated from an elastic outer cylinder by a uniform gap. Comparison of closed-form and FEM results evaluates the usefulness of the closed-form solution and the validity of the slideline implementation used.
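
    The elastic building block of such closed-form verification problems is the classical Lame solution for an internally pressurized thick-walled cylinder; a minimal sketch is given below with illustrative dimensions. The paper's full elastic-plastic, two-cylinder contact solution with tangential motion is considerably more involved.

        import numpy as np

        def lame_stresses(r, p, r_i, r_o):
            """Radial and hoop stress in an elastic cylinder under internal pressure p."""
            a = p * r_i ** 2 / (r_o ** 2 - r_i ** 2)
            sigma_r = a * (1.0 - r_o ** 2 / r ** 2)   # radial stress (-p at the bore)
            sigma_t = a * (1.0 + r_o ** 2 / r ** 2)   # hoop (tangential) stress
            return sigma_r, sigma_t

        for r in np.linspace(50.0, 80.0, 4):          # radii in mm, illustrative
            sr, st = lame_stresses(r, p=100.0, r_i=50.0, r_o=80.0)  # p in MPa, illustrative
            print(f"r = {r:5.1f} mm : sigma_r = {sr:8.2f} MPa, sigma_theta = {st:8.2f} MPa")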

  4. Tangential boundary values of Laplace transforms. Applications to Muntz-Szasz type approximation

    Science.gov (United States)

    Sedletskii, A. M.

    2003-02-01

    We consider the Laplace transforms (LT) of functions in L^q(\mathbb R_+), 1 < q ≤ 2, with a slowly varying weight. We prove that if the weight satisfies certain conditions, then each LT of this class has tangential boundary values almost everywhere on the imaginary axis, and the structure of the corresponding neighbourhoods depends on the weight only. This result is applied to distinguish a wide class of weighted L^p spaces on the half-line such that the Szasz condition is not necessary for the completeness of the system \exp(-\lambda_n t) in these spaces.

  5. The estimation of tissue loss during tangential hydrosurgical debridement.

    Science.gov (United States)

    Matsumura, Hajime; Nozaki, Motohiro; Watanabe, Katsueki; Sakurai, Hiroyuki; Kawakami, Shigehiko; Nakazawa, Hiroaki; Matsumura, Izumi; Katahira, Jiro; Inokuchi, Sadaki; Ichioka, Shigeru; Ikeda, Hiroto; Mole, Trevor; Smith, Jennifer; Martin, Robin; Aikawa, Naoki

    2012-11-01

    The preservation of healthy tissue during surgical debridement is desirable as this may improve clinical outcomes. This study has estimated for the first time the amount of tissue lost during debridement using the VERSAJET system of tangential hydrosurgery. A multicenter, prospective case series was carried out on 47 patients with mixed wound types: 21 (45%) burns, 13 (28%) chronic wounds, and 13 (28%) acute wounds. Overall, 44 (94%) of 47 patients achieved appropriate debridement after a single debridement procedure as verified by an independent photographic assessment. The percentage of necrotic tissue reduced from a median of 50% to 0% (P < 0.001). Median wound area and depth increased by only 0.3 cm (6.8%) and 0.5 mm (25%), respectively. Notably, 43 (91%) of 47 wounds did not progress into a deeper compartment, indicating a high degree of tissue preservation.

  6. Status of Far Infrared Tangential Interferometry/Polarimetry (FIReTIP) on NSTX

    International Nuclear Information System (INIS)

    Park, H.K.; Edwards, S.; Guttadora, L.; Deng, B.; Domier, C.W.; Lee, K.C.; Johnson, M.; Luhmann, N.C. Jr.

    2000-01-01

    The influence of paramagnetism and diamagnetism will significantly alter the vacuum toroidal magnetic field in a spherical torus. Therefore, plasma parameters dependent upon B_T, such as the q-profile and the local β value, need an independent measurement of B_T(r,t). The multi-chord Tangential Far Infrared Interferometer/Polarimeter (FIReTIP) system [1] currently under development for the National Spherical Torus Experiment (NSTX) will provide temporally and radially resolved toroidal field profile [B_T(r,t)] and 2-D electron density profile [n_e(r,t)] data. A two-channel interferometer will be operational this year and the full system will be ready by 2002.

  7. Far-infrared tangential interferometer/polarimeter design and installation for NSTX-U

    Energy Technology Data Exchange (ETDEWEB)

    Scott, E. R., E-mail: evrscott@ucdavis.edu [Department of Mechanical and Aerospace Engineering, University of California, Davis, California 95616 (United States); Barchfeld, R. [Department of Applied Science, University of California, Davis, California 95616 (United States); Riemenschneider, P.; Domier, C. W.; Sohrabi, M.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California, Davis, California 95616 (United States); Muscatello, C. M. [General Atomics, San Diego, California 92121 (United States); Kaita, R.; Ren, Y. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08540 (United States)

    2016-11-15

    The Far-infrared Tangential Interferometer/Polarimeter (FIReTIP) system has been refurbished and is being reinstalled on the National Spherical Torus Experiment—Upgrade (NSTX-U) to supply real-time line-integrated core electron density measurements for use in the NSTX-U plasma control system (PCS) to facilitate real-time density feedback control of the NSTX-U plasma. Inclusion of a visible light heterodyne interferometer in the FIReTIP system allows for real-time vibration compensation due to movement of an internally mounted retroreflector and the FIReTIP front-end optics. Real-time signal correction is achieved through use of a National Instruments CompactRIO field-programmable gate array.

  8. Current Profile and Magnetic Structure Measurements through Tangential Soft X-Ray Imaging in Compact Tori

    International Nuclear Information System (INIS)

    Fonck, Raymond J.

    2004-01-01

    This report describes the fabrication and tests of a tangentially imaging soft X-ray (SXR) camera diagnostic for fusion energy plasma research. It can be used for the determination of the current distribution in strongly shaped toroidal magnetically confined plasmas, such as those found in spherical tori or advanced tokamaks. The work included the development of both an appropriate imaging SXR camera and the image analysis techniques necessary to deduce the plasma shape and current distribution. The basic camera concept consists of a tangentially viewing pinhole imaging system with thin-film SXR filters, a scintillator screen to provide SXR-to-visible conversion, a fast shuttering system, and a sensitive visible camera imaging device. The analysis approach consists of integrating the 2-D SXR image data into a Grad-Shafranov toroidal equilibrium solver code to provide strong constraints on the deduced plasma current and pressure profiles. Acceptable sensitivity in the deduced current profile can be obtained if the relative noise in the measured image can be kept in the range of 1% or less. Tests on the Pegasus Toroidal Experiment indicate very flat safety factor profiles in the plasma interior.

  9. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    Science.gov (United States)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model of sawing power (PFD), based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on the tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
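
    At the level of basic mechanics, sawing power is the tangential force summed over the contact zone times the peripheral blade speed; this is the quantity a force-distribution model feeds. A minimal sketch with a hypothetical triangular force distribution (the measured distributions and speed dependence are what the MPFD actually encodes):

        import numpy as np

        def sawing_power(f_t_per_rad, theta, blade_speed):
            """Power = (integral of tangential force density over contact arc) * speed."""
            total_force = np.trapz(f_t_per_rad, theta)   # total tangential force (N)
            return total_force * blade_speed             # power (W)

        theta = np.linspace(0.0, 0.3, 100)       # hypothetical 0.3 rad contact arc
        f_t = 400.0 * (1.0 - theta / 0.3)        # hypothetical triangular distribution (N/rad)
        print(f"estimated power: {sawing_power(f_t, theta, blade_speed=40.0):.0f} W")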

  10. Large Object Irradiation Facility In The Tangential Channel Of The JSI TRIGA Reactor

    CERN Document Server

    Radulovic, Vladimir; Kaiba, Tanja; Kavsek, Darko; Cindro, Vladimir; Mikuz, Marko; Snoj, Luka

    2017-01-01

    This paper presents the design and installation of a new irradiation device in the Tangential Channel of the JSI TRIGA reactor in Ljubljana, Slovenia. The purpose of the device is to enable on-line irradiation testing of electronic components considerably larger in size (of lateral dimensions of at least 12 cm) than currently possible in the irradiation channels located in the reactor core, in a relatively high neutron flux (exceeding 10^12 n cm^-2 s^-1) and to provide adequate neutron and gamma radiation shielding.

  11. Individualized Selection of Beam Angles and Treatment Isocenter in Tangential Breast Intensity Modulated Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Penninkhof, Joan, E-mail: j.penninkhof@erasmusmc.nl [Department of Radiation Oncology, Erasmus M.C. Cancer Institute, Rotterdam (Netherlands); Spadola, Sara [Department of Radiation Oncology, Erasmus M.C. Cancer Institute, Rotterdam (Netherlands); Department of Physics and Astronomy, Alma Mater Studiorum, University of Bologna, Bologna (Italy); Breedveld, Sebastiaan; Baaijens, Margreet [Department of Radiation Oncology, Erasmus M.C. Cancer Institute, Rotterdam (Netherlands); Lanconelli, Nico [Department of Physics and Astronomy, Alma Mater Studiorum, University of Bologna, Bologna (Italy); Heijmen, Ben [Department of Radiation Oncology, Erasmus M.C. Cancer Institute, Rotterdam (Netherlands)

    2017-06-01

    Purpose and Objective: To propose a novel method for individualized selection of beam angles and treatment isocenter in tangential breast intensity modulated radiation therapy (IMRT). Methods and Materials: For each patient, beam and isocenter selection starts with the fully automatic generation of a large database of IMRT plans (up to 847 in this study); each of these plans belongs to a unique combination of isocenter position, lateral beam angle, and medial beam angle. The imposed hard planning constraint on patient maximum dose may result in plans with unacceptable target dose delivery. Such plans are excluded from further analyses. Owing to differences in beam setup, database plans differ in mean doses to organs at risk (OARs). These mean doses are used to construct 2-dimensional graphs, showing relationships between: (1) contralateral breast dose and ipsilateral lung dose; and (2) contralateral breast dose and heart dose (analyzed only for left-sided tumors). The graphs can be used for selection of the isocenter and beam angles with the optimal, patient-specific tradeoffs between the mean OAR doses. For 30 previously treated patients (15 left-sided and 15 right-sided tumors), graphs were generated considering only the clinically applied isocenter with 121 tangential beam angle pairs. For 20 of the 30 patients, 6 alternative isocenters were also investigated. Results: Automatic generation of 121 IMRT plans took on average 30 minutes. The generated graphs demonstrated large variations in tradeoffs between conflicting OAR objectives, depending on beam angles and patient anatomy. For patients with isocenter optimization, 847 IMRT plans were considered. Adding isocenter position optimization to beam angle optimization had a small impact on the final plan quality. Conclusion: A method is proposed for individualized selection of beam angles in tangential breast IMRT. This may be especially important for patients with cardiac risk factors or an...
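
    Selecting a beam setup from such a plan database amounts to reading a tradeoff curve between mean OAR doses. The sketch below extracts the Pareto-optimal plans from a table of (contralateral breast dose, ipsilateral lung dose) pairs; the plan database here is synthetic placeholder data, not patient results.

        import numpy as np

        def pareto_front(doses):
            """Indices of plans not dominated in both mean OAR doses (lower is better)."""
            idx = np.argsort(doses[:, 0])        # sort by the first objective
            front, best_second = [], np.inf
            for i in idx:
                if doses[i, 1] < best_second:    # strictly improves the 2nd objective
                    front.append(i)
                    best_second = doses[i, 1]
            return front

        rng = np.random.default_rng(3)
        # Synthetic database: one row per (isocenter, beam-pair) plan; columns are
        # mean contralateral breast dose and mean ipsilateral lung dose (Gy).
        plans = rng.uniform([0.1, 2.0], [1.5, 9.0], size=(121, 2))
        for i in pareto_front(plans):
            print(f"plan {i:3d}: breast {plans[i, 0]:.2f} Gy, lung {plans[i, 1]:.2f} Gy")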

  12. Tangential boundary values of Laplace transforms. Applications to Muntz-Szasz type approximation

    Energy Technology Data Exchange (ETDEWEB)

    Sedletskii, A M [M.V. Lomonosov Moscow State University, Moscow (Russian Federation)

    2003-02-28

We consider the Laplace transforms (LT) of functions in L^q(R_+), 1 < q ≤ 2, with power weight. We prove that these LT have tangential boundary values almost everywhere on the imaginary axis, and the structure of the corresponding neighbourhoods depends on the weight only. This result is applied to distinguish a wide class of weighted L^p spaces on the half-line such that the Szasz condition is not necessary for the completeness of the system exp(-λ_n t) in these spaces.

  13. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption...

  14. A tangentially viewing VUV TV system for the DIII-D divertor

    International Nuclear Information System (INIS)

    Nilson, D.G.; Ellis, R.; Fenstermacher, M.E.; Brewis, G.; Jalufka, N.

    1998-07-01

A video camera system capable of imaging VUV emission in the 120--160 nm wavelength range, from the entire divertor region in the DIII-D tokamak, was designed. The new system has a tangential view of the divertor similar to an existing tangential camera system which has produced two-dimensional maps of visible line emission (400--800 nm) from deuterium and carbon in the divertor region. However, the overwhelming fraction of the power radiated by these elements is emitted by resonance transitions in the ultraviolet, namely the C IV line at 155.0 nm and the Ly-α line at 121.6 nm. To image the ultraviolet light with an angular view including the inner wall and outer bias ring in DIII-D, a 6-element optical system (f/8.9) was designed using a combination of reflective and refractive optics. This system will provide a spatial resolution of 1.2 cm in the object plane. An intermediate UV image formed in a secondary vacuum is converted to the visible by means of a phosphor plate and detected with a conventional CID camera (30 ms framing rate). A single MgF2 lens serves as the vacuum interface between the primary and secondary vacuums; a second lens must be inserted in the secondary vacuum to correct the focus at 155 nm. Using the same tomographic inversion method employed for the visible TV, they reconstruct the poloidal distribution of the UV divertor light. The grain size of the phosphor plate and the optical system aberrations limit the best-focus spot size to 60 μm at the CID plane. The optical system is designed to withstand 350 °C vessel bakeout, 2 T magnetic fields, and disruption-induced accelerations of the vessel.

  15. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary, and worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage; i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10) violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
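
    The coverage claim is easy to check numerically. The following minimal simulation (our own construction; sample size, error distribution, and replicate count are arbitrary) fits ordinary least squares on data with skewed, mean-zero errors and counts how often the nominal 95% interval for the slope covers the truth:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    beta0, beta1, n, reps = 1.0, 0.5, 500, 2000
    hits = 0
    for _ in range(reps):
        x = rng.uniform(0, 10, n)
        eps = rng.exponential(1.0, n) - 1.0      # skewed, mean-zero errors
        y = beta0 + beta1 * x + eps
        X = np.column_stack([np.ones(n), x])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma2 = resid @ resid / (n - 2)         # residual variance
        se1 = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        hits += (coef[1] - 1.96 * se1 <= beta1 <= coef[1] + 1.96 * se1)
    print(f"95% CI coverage under skewed errors: {hits / reps:.3f}")
    ```

    With n = 500 the empirical coverage comes out close to the nominal 0.95 despite the clearly non-normal residuals.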

  16. Radial force distribution changes associated with tangential force production in cylindrical grasping, and the importance of anatomical registration.

    Science.gov (United States)

    Pataky, Todd C; Slota, Gregory P; Latash, Mark L; Zatsiorsky, Vladimir M

    2012-01-10

Radial force (F(r)) distributions describe grip force coordination about a cylindrical object. Recent studies have employed only explicit F(r) tasks, and have not normalized for anatomical variance when considering F(r) distributions. The goals of the present study were (i) to explore F(r) during tangential force production tasks, and (ii) to examine the extent to which anatomical registration (i.e. spatial normalization of anatomically analogous structures) could improve signal detectability in F(r) data. Twelve subjects grasped a vertically oriented cylindrical handle (diameter=6 cm) and matched target upward tangential forces of 10, 20, and 30 N. F(r) data were measured using a flexible pressure mat with an angular resolution of 4.8°, and were registered using piecewise-linear interpolation between five manually identified points-of-interest. Results indicate that F(r) was primarily limited to three contact regions: the distal thumb, the distal fingers, and the fingers' metacarpal heads, and that, while increases in tangential force caused significant increases in F(r) for these regions, they did not significantly affect the F(r) distribution across the hand. Registration was found to substantially reduce between-subject variability, as indicated by both accentuated F(r) trends and amplification of the test statistic. These results imply that, while subjects focus F(r) primarily on three anatomical regions during cylindrical grasp, inter-subject anatomical differences introduce a variability that, if not corrected for via registration, may compromise one's ability to draw anatomically relevant conclusions from grasping force data. Copyright © 2011 Elsevier Ltd. All rights reserved.
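
    The registration step can be sketched as a piecewise-linear warp of each subject's angular axis onto a common template. Below is a minimal illustration (the template grid and function names are ours; the paper's five points-of-interest and 4.8° mat resolution are taken as given):

    ```python
    import numpy as np

    def register_profile(angles, forces, landmarks, template):
        """Warp a radial-force profile so subject landmarks align with
        template landmarks (piecewise-linear), then resample.
        angles, forces : measured angular positions (deg) and F_r values
        landmarks      : this subject's five points-of-interest (deg),
                         assumed strictly increasing
        template       : common landmark positions (deg)
        """
        # piecewise-linear map from subject angles to template angles
        warped = np.interp(angles, landmarks, template)
        # resample on a regular template grid (4.8 deg resolution)
        grid = np.arange(0.0, 360.0, 4.8)
        return grid, np.interp(grid, warped, forces)
    ```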

  17. Use of a tangential filtration unit for processing liquid waste from nuclear laundries

    International Nuclear Information System (INIS)

    Augustin, X.; Buzonniere, A. de; Barnier, H.

    1993-01-01

Nuclear laundries produce large quantities of weakly contaminated effluents charged with insoluble and soluble products. In collaboration with CEA, TECHNICATOME has developed an ultrafiltration process for liquid waste from nuclear laundries, associated with prior insolubilization of the radiochemical activity. This process, 'seeded ultrafiltration', is based on the use of decloggable mineral filter media and combines very high separation efficiency with long membrane life. The efficiency of the tangential filtration unit, which has been processing effluents from the nuclear laundry of the Cadarache Nuclear Research Center (CEA, France) since mid-1988, has been confirmed on several sites

  18. The fall of the Northern Unicorn: Tangential motions in the Galactic Anti-centre with SDSS and Gaia

    OpenAIRE

    de Boer, T. J. L.; Belokurov, V.; Koposov, S. E.

    2017-01-01

    We present the first detailed study of the behaviour of the stellar proper motion across the entire Galactic Anti-centre area visible in the Sloan Digital Sky Survey data. We use recalibrated SDSS astrometry in combination with positions from {\\it Gaia} DR1 to provide tangential motion measurements with a systematic uncertainty $

  19. Tangential vitreous traction: a possible mechanism of development of cystoid macular edema in retinitis pigmentosa

    Directory of Open Access Journals (Sweden)

    Mikiko Takezawa

    2011-02-01

Full Text Available Mikiko Takezawa, Soichi Tetsuka, Akihiro Kakehashi; Department of Ophthalmology, Jichi Medical University, Saitama Medical Center, Saitama, Saitama, Japan. Abstract: We report the possible mechanism of development of cystoid macular edema (CME) in retinitis pigmentosa (RP) in the case of a 68-year-old woman with RP and CME in the right eye and resolving CME in the left eye. Spectral domain optical coherence tomography showed CME and posterior vitreoschisis in the nasal quadrant of the fundus without a posterior vitreous detachment (PVD). This vitreous pathology suggested bilateral thickening and shrinkage of the posterior vitreous cortex. In the right eye, CME was evident with no vitreofoveal separation. However, in the left eye, minimal change was seen in the CME associated with a focal shallow PVD over the fovea. The best-corrected visual acuity (BCVA) in the left eye improved from 0.15 to 0.3 seven years after the first visit. Tangential vitreous traction on the macula may have caused the CME in the right eye. The shallow PVD over the fovea might have released the tangential vitreous traction from the fovea, induced spontaneous resolution of the CME, and improved the BCVA in the left eye. Keywords: retinitis pigmentosa, cystoid macular edema, posterior vitreous detachment, posterior vitreoschisis, optical coherence tomography

  20. PIV measurements of the turbulence integral length scale on cold combustion flow field of tangential firing boiler

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wen-fei; Xie, Jing-xing; Gong, Zhi-jun; Li, Bao-wei [Inner Mongolia Univ. of Science and Technology, Baotou (China). Inner Mongolia Key Lab. for Utilization of Bayan Obo Multi-Metallic Resources: Elected State Key Lab.

    2013-07-01

Pulverized coal combustion in tangential firing boilers has prominent significance for improving boiler operation efficiency and reducing NO_x emissions. To research the complex turbulent vortex coherent structure formed by the four corner jets in the burner zone, a cold experimental model of a tangential firing boiler has been built. By employing the spatial correlation analysis method and PIV (Particle Image Velocimetry), the vortex scale distribution on three typical horizontal layers of the model, based on the turbulent Integral Length Scale (ILS), has been researched. According to the correlation analysis of the ILS and the temporal average velocity, the turbulent vortex scale distribution in the burner zone of the model is affected by both the jet velocity and the position of the wind layers, and does not vary linearly with jet velocity. The vortex scale distribution of the upper primary air is significantly different from the others. Studying the turbulent vortex ILS is therefore instructive, in theory, for high-efficiency clean combustion of pulverized coal.
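
    For reference, the integral length scale is commonly obtained by integrating the two-point spatial autocorrelation of the fluctuating velocity up to its first zero crossing. A minimal sketch under that standard definition (not necessarily the authors' exact procedure):

    ```python
    import numpy as np

    def integral_length_scale(u, dx):
        """ILS along axis 0 of a PIV velocity field.
        u  : 2-D array of one velocity component (m/s)
        dx : vector spacing of the PIV grid (m)
        """
        f = u - u.mean()                          # fluctuating part
        var = (f * f).mean()
        corr = []
        for r in range(f.shape[0] // 2):
            c = (f[: f.shape[0] - r] * f[r:]).mean() / var
            if c <= 0:                            # stop at first zero crossing
                break
            corr.append(c)
        return dx * np.trapz(corr)                # integral of R(r) dr
    ```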

  1. Subcritical convection of liquid metals in a rotating sphere using a quasi-geostrophic model

    Science.gov (United States)

    Guervilly, Céline; Cardin, Philippe

    2016-12-01

    We study nonlinear convection in a rapidly rotating sphere with internal heating for values of the Prandtl number relevant for liquid metals ($Pr\\in[10^{-2},10^{-1}]$). We use a numerical model based on the quasi-geostrophic approximation, in which variations of the axial vorticity along the rotation axis are neglected, whereas the temperature field is fully three-dimensional. We identify two separate branches of convection close to onset: (i) a well-known weak branch for Ekman numbers greater than $10^{-6}$, which is continuous at the onset (supercritical bifurcation) and consists of thermal Rossby waves, and (ii) a novel strong branch at lower Ekman numbers, which is discontinuous at the onset. The strong branch becomes subcritical for Ekman numbers of the order of $10^{-8}$. On the strong branch, the Reynolds number of the flow is greater than $10^3$, and a strong zonal flow with multiple jets develops, even close to the nonlinear onset of convection. We find that the subcriticality is amplified by decreasing the Prandtl number. The two branches can co-exist for intermediate Ekman numbers, leading to hysteresis ($Ek=10^{-6}$, $Pr=10^{-2}$). Nonlinear oscillations are observed near the onset of convection for $Ek=10^{-7}$ and $Pr=10^{-1}$.

  2. Analysis of interfraction and intrafraction variation during tangential breast irradiation with an electronic portal imaging device

    International Nuclear Information System (INIS)

    Smith, Ryan P.; Bloch, Peter; Harris, Eleanor E.; McDonough, James; Sarkar, Abhirup; Kassaee, Alireza; Avery, Steven; Solin, Lawrence J.

    2005-01-01

    Purpose: To evaluate the daily setup variation and the anatomic movement of the heart and lungs during breast irradiation with tangential photon beams, as measured with an electronic portal imaging device. Methods and materials: Analysis of 1,709 portal images determined changes in the radiation field during a treatment course in 8 patients. Values obtained for every image included central lung distance (CLD) and area of lung and heart within the irradiated field. The data from these measurements were used to evaluate variation from setup between treatment days and motion due to respiration and/or patient movement during treatment delivery. Results: The effect of respiratory motion and movement during treatment was minimal: the maximum range in CLD for any patient on any day was 0.25 cm. The variation caused by day-to-day setup variation was greater, with CLD values for patients ranging from 0.59 cm to 2.94 cm. Similar findings were found for heart and lung areas. Conclusions: There is very little change in CLD and corresponding lung and heart area during individual radiation treatment fractions in breast tangential fields, compared with a relatively greater amount of variation that occurs between days

  3. Multiverse Assumptions and Philosophy

    Directory of Open Access Journals (Sweden)

    James R. Johnson

    2018-02-01

Full Text Available Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with the fundamental nature of reality, ideas that cannot be proven right or wrong): topics such as infinity, duplicate yous, hypothetical fields, more than three space dimensions, Hilbert space, advanced civilizations, and reality established by mathematical relationships. It is easy to confuse multiverse proposals because many divergent models exist. This overview defines the characteristics of eleven popular multiverse proposals. The characteristics compared are: initial conditions, values of constants, laws of nature, number of space dimensions, number of universes, and fine-tuning explanations. Future scientific experiments may validate selected assumptions; but until they do, proposals by philosophers may be as valid as theoretical scientific theories.

  4. Measurements of beam-ion confinement during tangential beam-driven instabilities in PBX [Princeton Beta Experiment

    International Nuclear Information System (INIS)

    Heidbrink, W.W.; Kaita, R.; Takahashi, H.; Gammel, G.; Hammett, G.W.; Kaye, S.

    1987-01-01

    During tangential injection of neutral beams into low density tokamak plasmas with β > 1% in the Princeton Beta Experiment (PBX), instabilities are observed that degrade the confinement of beam ions. Neutron, charge-exchange, and diamagnetic loop measurements are examined in order to identify the mechanism or mechanisms responsible for the beam-ion transport. The data suggest a resonant interaction between the instabilities and the parallel energetic beam ions. Evidence for some nonresonant transport also exists

  5. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
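
    The bounding factor itself is a one-line computation. Sketch (variable names ours): with sensitivity parameters RR_EU (exposure-confounder association) and RR_UD (confounder-outcome association), the factor is RR_EU·RR_UD/(RR_EU + RR_UD − 1), and confounding of that strength can explain away an observed risk ratio only if the factor reaches it:

    ```python
    def bounding_factor(rr_eu, rr_ud):
        """Bounding factor from the two sensitivity parameters:
        rr_eu : relative risk of the exposure on the confounder
        rr_ud : relative risk of the confounder on the outcome
        """
        return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

    def can_explain_away(rr_obs, rr_eu, rr_ud):
        """True if a confounder of this strength could move the
        observed risk ratio all the way to the null."""
        return bounding_factor(rr_eu, rr_ud) >= rr_obs

    # An observed RR of 2.0 survives a confounder with both sensitivity
    # parameters equal to 2, since the bounding factor is 4/3 < 2:
    print(can_explain_away(2.0, 2.0, 2.0))   # False
    ```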

  6. Monitoring Assumptions in Assume-Guarantee Contracts

    Directory of Open Access Journals (Sweden)

    Oleg Sokolsky

    2016-05-01

Full Text Available Pre-deployment verification of software components with respect to behavioral specifications in the assume-guarantee form does not, in general, guarantee absence of errors at run time. This is because assumptions about the environment cannot be discharged until the environment is fixed. An intuitive approach is to complement pre-deployment verification of guarantees, up to the assumptions, with post-deployment monitoring of environment behavior to check that the assumptions are satisfied at run time. Such a monitor is typically implemented by instrumenting the application code of the component. An additional challenge for the monitoring step is that environment behaviors are typically obtained through an I/O library, which may alter the component's view of the input format. This transformation requires us to introduce a second pre-deployment verification step to ensure that alarms raised by the monitor would indeed correspond to violations of the environment assumptions. In this paper, we describe an approach for constructing monitors and verifying them against the component assumption. We also discuss limitations of instrumentation-based monitoring and potential ways to overcome them.
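
    As a toy illustration of the post-deployment step (our own sketch, not the paper's instrumentation framework), a monitor can wrap the component's view of environment inputs and flag any input that violates the assumption predicate:

    ```python
    from typing import Callable

    class AssumptionMonitor:
        """Minimal run-time monitor: checks each environment input
        against the assumption A of an assume-guarantee contract (A, G)."""
        def __init__(self, assumption: Callable[[object], bool]):
            self.assumption = assumption
            self.violations = []

        def observe(self, event):
            if not self.assumption(event):
                self.violations.append(event)   # alarm: assumption violated
            return event

    # Hypothetical contract: the environment only delivers readings in [0, 100].
    monitor = AssumptionMonitor(lambda x: 0 <= x <= 100)
    for reading in [12, 57, 140, 3]:
        monitor.observe(reading)
    print(monitor.violations)                   # [140]
    ```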

  7. Wind profile modelling using WAsP and "tall" wind measurements

    DEFF Research Database (Denmark)

    Floors, Rogier Ralph; Kelly, Mark C.; Troen, Ib

    2015-01-01

    extrapolations (the wind profile) this is done using the Weibull distribution and the geostrophic drag law. Wind lidar measurements obtained during the ’Tall wind’ campaign at three different sites are used to evaluate the assumptions and equations that are used in the WAsP vertical extrapolation strategy...
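
    For context, the geostrophic drag law ties the geostrophic wind speed G to the surface friction velocity u*; a common neutral form is G = (u*/κ)·sqrt((ln(u*/(f·z0)) − A)² + B²) with A ≈ 1.8 and B ≈ 4.5. A minimal sketch (our own fixed-point solver, not WAsP code) that anchors a surface-layer log profile on the drag law:

    ```python
    import numpy as np

    KAPPA, A, B = 0.4, 1.8, 4.5              # von Karman constant; neutral A, B

    def friction_velocity(G, f, z0, iters=50):
        """Solve G = (u*/kappa) * sqrt((ln(u*/(f*z0)) - A)^2 + B^2)
        for u* by fixed-point iteration (neutral conditions)."""
        ustar = 0.05 * G                     # crude starting guess
        for _ in range(iters):
            term = np.log(ustar / (f * z0)) - A
            ustar = KAPPA * G / np.sqrt(term**2 + B**2)
        return ustar

    def wind_speed(z, G, f=1.2e-4, z0=0.03):
        """Neutral log-law wind speed at height z (valid in the surface layer)."""
        return friction_velocity(G, f, z0) / KAPPA * np.log(z / z0)

    print(wind_speed(np.array([10.0, 100.0]), G=10.0))
    ```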

  8. Contact problem on indentation of an elastic half-plane with an inhomogeneous coating by a flat punch in the presence of tangential stresses on a surface

    Science.gov (United States)

    Volkov, Sergei S.; Vasiliev, Andrey S.; Aizikovich, Sergei M.; Sadyrin, Evgeniy V.

    2018-05-01

    Indentation of an elastic half-space with functionally graded coating by a rigid flat punch is studied. The half-plane is additionally subjected to distributed tangential stresses. Tangential stresses are represented in a form of Fourier series. The problem is reduced to the solution of two dual integral equations over even and odd functions describing distribution of unknown normal contact stresses. The solutions of these dual integral equations are constructed by the bilateral asymptotic method. Approximated analytical expressions for contact normal stresses are provided.

  9. Maximum entropy state of the quasi-geostrophic bi-disperse point vortex system: bifurcation phenomena under periodic boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Funakoshi, Satoshi; Sato, Tomoyoshi; Miyazaki, Takeshi, E-mail: funakosi@miyazaki.mce.uec.ac.jp, E-mail: miyazaki@mce.uec.ac.jp [Department of Mechanical Engineering and Intelligent Systems, University of Electro-Communications, 1-5-1, Chofugaoka, Chofu, Tokyo 182-8585 (Japan)

    2012-06-01

    We investigate the statistical mechanics of quasi-geostrophic point vortices of mixed sign (bi-disperse system) numerically and theoretically. Direct numerical simulations under periodic boundary conditions are performed using a fast special-purpose computer for molecular dynamics (GRAPE-DR). Clustering of point vortices of like sign is observed and two-dimensional (2D) equilibrium states are formed. It is shown that they are the solutions of the 2D mean-field equation, i.e. the sinh-Poisson equation. The sinh-Poisson equation is generalized to study the 3D nature of the equilibrium states, and a new mean-field equation with the 3D Laplace operator is derived based on the maximum entropy theory. 3D solutions are obtained at very low energy level. These solution branches, however, cannot be traced up to the higher energy level at which the direct numerical simulations are performed, and transitions to 2D solution branches take place when the energy is increased. (paper)
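
    The 2D mean-field (sinh-Poisson) equation lends itself to a compact relaxation sketch. The code below is our construction, not the authors' solver (their 3D generalization is not attempted here), and whether the iteration settles on a nontrivial branch depends on the value of λ and the initial seed:

    ```python
    import numpy as np

    def solve_sinh_poisson(n=64, lam=1.0, iters=5000, tol=1e-8):
        """Jacobi relaxation for laplacian(psi) + lam*sinh(psi) = 0
        on a doubly periodic unit box."""
        h = 1.0 / n
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        X, Y = np.meshgrid(x, x)
        psi = 0.1 * np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)  # dipole seed
        for _ in range(iters):
            nb = (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
                  np.roll(psi, 1, 1) + np.roll(psi, -1, 1))
            new = (nb + h * h * lam * np.sinh(psi)) / 4.0          # Jacobi update
            if np.max(np.abs(new - psi)) < tol:
                return new
            psi = new
        return psi
    ```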

  10. How Symmetrical Assumptions Advance Strategic Management Research

    DEFF Research Database (Denmark)

Foss, Nicolai Juul; Hallberg, Niklas

    2014-01-01

    We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other...... application domains of the theory. We argue that assumptional symmetry leads to theoretical advancement by promoting the development of theory with greater falsifiability and stronger ontological grounding. Thus, strategic management theory may be advanced by systematically searching for asymmetrical...

  11. Tangential filtration technologies membrane and applications for the industry agribusiness

    International Nuclear Information System (INIS)

    Leone, Gian Paolo; Russo, Claudio

    2015-01-01

Membrane tangential filtration technologies are separation techniques based on the use of semipermeable filters through which, under a driving force, components in suspension or in solution can be separated according to their dimensional and/or chemico-physical characteristics. At the laboratories of the ENEA Casaccia Research Center, as part of the programme activities of the Biotechnology and Agro-industry Division, various membrane filtration processes for the food industry were studied and developed. The problems were approached from an overall sustainability perspective, always seeking to pair purification treatment with the recovery and reuse of water and high-added-value components. The ultimate goal of the research is to close the production circuit, ensuring a zero-discharge cycle and effectively turning a so-called waste into a raw material from which to obtain new products. [it]

  12. The Field Radiated by a Ring Quasi-Array of an Infinite Number of Tangential or Radial Dipoles

    DEFF Research Database (Denmark)

    Knudsen, H. L.

    1953-01-01

A homogeneous ring array of axial dipoles will radiate a vertically polarized field that concentrates to an increasing degree around the horizontal plane with increasing increment of the current phase per revolution. There is reason to believe that by using a corresponding antenna system with tangential or radial dipoles, a field may be obtained that has a similar useful structure as the above-mentioned ring array, but which in contrast to the latter is essentially horizontally polarized. In this paper a systematic investigation has been made of the field from such an antenna system with tangential or radial dipoles. Recently it was stated in the literature that it is impossible to treat the general case where the increase of the current phase per revolution is arbitrarily large by using ordinary functions. The results obtained in this paper disprove this statement. A similar investigation...

  13. Generation of a rotating liquid liner by tangential injection

    International Nuclear Information System (INIS)

    Burton, R.L.; Turchi, P.J.; Jenkins, D.J.; Lanham, R.E.; Cameron, J.; Cooper, A.L.

    1979-01-01

    Efficient compression of low mass-density payloads by the implosion of higher mass-density liquid cylinders or liners, as in the NRL LINUS concept for controlled thermonuclear fusion, requires rotation of the liner material to avoid Rayleigh--Taylor instabilities at the liner-payload interface. Experimentally, such implosions have been demonstrated with liners formed within rotating implosion chambers. The present work uses a scale-model experimental apparatus to investigate the possibility of creating liner rotation by tangential injection of the liquid liner material. Different modes of behavior are obtained depending on the fluid exhaust procedures. Right-circular, cylindrical free surfaces are achieved with axial exhaust of fluid at radii interior to the injection nozzles, for which the liner exhibits a combination of solid-body and free vortex flows in different regions. Measurements allow estimates of power losses to viscous shear, turbulence, etc. A simple model based on open-channel flow is then derived, which is in good agreement with experiment, and is used to extrapolate results to the scale of a possible LINUS fusion reactor

  14. Subcritical thermal convection of liquid metals in a rotating sphere using a quasi-geostrophic model

    Science.gov (United States)

    Cardin, P.; Guervilly, C.

    2016-12-01

We study non-linear convection in a rapidly rotating sphere with internal heating for values of the Prandtl number relevant for liquid metals (Pr ∈ [10^-2, 10^-1]). We use a numerical model based on the quasi-geostrophic approximation, in which variations of the axial vorticity along the rotation axis are neglected, whereas the temperature field is fully three-dimensional. We identify two separate branches of convection close to onset: (i) a well-known weak branch for Ekman numbers greater than 10^-6, which is continuous at the onset (supercritical bifurcation) and consists of the interaction of thermal Rossby waves, and (ii) a novel strong branch at lower Ekman numbers, which is discontinuous at the onset. The strong branch becomes subcritical for Ekman numbers of the order of 10^-8. On the strong branch, the Reynolds number of the flow is greater than 1000, and a strong zonal flow with multiple jets develops, even close to the non-linear onset of convection. We find that the subcriticality is amplified by decreasing the Prandtl number. The two branches can co-exist for intermediate Ekman numbers, leading to hysteresis (E = 10^-6, Pr = 10^-2). Non-linear oscillations are observed near the onset of convection for E = 10^-7 and Pr = 10^-1.

  15. Wrong assumptions in the financial crisis

    NARCIS (Netherlands)

    Aalbers, M.B.

    2009-01-01

    Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to

  16. PFP issues/assumptions development and management planning guide

    International Nuclear Information System (INIS)

    SINCLAIR, J.C.

    1999-01-01

The PFP Issues/Assumptions Development and Management Planning Guide presents the strategy and process used for the identification, allocation, and maintenance of an Issues/Assumptions Management List for the Plutonium Finishing Plant (PFP) integrated project baseline. Revisions to this document will include, as attachments, the most recent version of the Issues/Assumptions Management List, both open and current issues/assumptions (Appendix A), and closed or historical issues/assumptions (Appendix B). This document is intended to be a Project-owned management tool. As such, this document will periodically require revisions resulting from improvements in the information, processes, and techniques now described. Revisions that suggest improved processes will only require PFP management approval

  17. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, R.S.; Alonso, D.; McKane, A.J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the

  18. The anisotropy of fluorescence in ring units III: Tangential versus radial dipole arrangement

    Energy Technology Data Exchange (ETDEWEB)

    Herman, P. [Department of Physics, Faculty of Education, University of Hradec Kralove (Czech Republic); Zapletal, D. [Department of Physics, Faculty of Education, University of Hradec Kralove (Czech Republic); Department of Mathematics, University of Pardubice (Czech Republic)], E-mail: david.zapletal@upce.cz; Barvik, I. [Institute of Physics of Charles University, Faculty of Mathematics and Physics, Prague (Czech Republic)

    2008-05-15

The time dependence of the anisotropy of fluorescence can indicate the coherent exciton transfer regime in molecular rings. We compare the time development of this quantity after an impulsive excitation obtained for ring models of bacterial antenna complexes with tangential and radial optical transition dipole moment arrangements, as in nonameric LH2 and octameric LH4 units. We use non-correlated static Gaussian disorder in the local exciton energies. We simultaneously take into account dynamic disorder using a Markovian treatment of the interaction with the bath. We show that the influence of dynamic disorder on the difference of the anisotropy of fluorescence is more important than the influence of static disorder, as a consequence of the different band widths.

  19. The anisotropy of fluorescence in ring units III: Tangential versus radial dipole arrangement

    International Nuclear Information System (INIS)

    Herman, P.; Zapletal, D.; Barvik, I.

    2008-01-01

The time dependence of the anisotropy of fluorescence can indicate the coherent exciton transfer regime in molecular rings. We compare the time development of this quantity after an impulsive excitation obtained for ring models of bacterial antenna complexes with tangential and radial optical transition dipole moment arrangements, as in nonameric LH2 and octameric LH4 units. We use non-correlated static Gaussian disorder in the local exciton energies. We simultaneously take into account dynamic disorder using a Markovian treatment of the interaction with the bath. We show that the influence of dynamic disorder on the difference of the anisotropy of fluorescence is more important than the influence of static disorder, as a consequence of the different band widths.

  20. Tangential breast irradiation - rationale and methods for improving dosimetry

    International Nuclear Information System (INIS)

    Neal, A.J.; Mayles, W.P.M.; Yarnold, J.R.

    1994-01-01

In recent years there have been great advances and innovations in all technical aspects of radiotherapy, including three-dimensional (3D) computer planning, patient immobilization, radiation delivery and treatment verification. Despite this progress, the technique of tangential breast irradiation has changed little over this period and has not exploited these advances. There is increasing evidence that dose inhomogeneity within the breast is greater than at other anatomical sites, especially in women with large breasts. This paper is a review of the factors contributing to poor dosimetry in the breast, the clinical consequences of an inhomogeneous dose distribution, and how breast dosimetry could be improved by considering each of the stages from planning to accurate treatment delivery. It also highlights the particular problem of women with large breasts, who may be more likely to have a poorer outcome after a fractionated course of radiotherapy than women with small/medium-sized breasts, and supports the clinical impression that such women are also more likely to show greater dose inhomogeneity when 3D treatment plans are examined. Preliminary data from our current computed tomography (CT) planning study are presented to support these observations. (author)

  1. Philosophy of Technology Assumptions in Educational Technology Leadership

    Science.gov (United States)

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  2. Preliminary design of a tangentially viewing imaging bolometer for NSTX-U

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, B. J., E-mail: peterson@LHD.nifs.ac.jp; Mukai, K. [National Institute for Fusion Science, Toki 509-5292 (Japan); SOKENDAI (The Graduate University for Advance Studies), Toki 509-5292 (Japan); Sano, R. [National Institutes for Quantum and Radiological Science and Technology, Naka, Ibaraki 311-0193 (Japan); Reinke, M. L.; Canik, J. M.; Lore, J. D.; Gray, T. K. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Delgado-Aparicio, L. F.; Jaworski, M. A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Eden, G. G. van [FOM Institute DIFFER, 5612 AJ Eindhoven (Netherlands)

    2016-11-15

The infrared imaging video bolometer (IRVB) measures plasma radiated power images using a thin metal foil. Two different designs with a tangential view of NSTX-U are made assuming a 640 × 480 (1280 × 1024) pixel, 30 (105) fps, 50 (20) mK, IR camera imaging the 9 cm × 9 cm × 2 μm Pt foil. The foil is divided into 40 × 40 (64 × 64) IRVB channels. This gives a spatial resolution of 3.4 (2.2) cm on the machine mid-plane. The noise equivalent power density of the IRVB is given as 113 (46) μW/cm^2 for a time resolution of 33 (20) ms. Synthetic images derived from Scrape Off Layer Plasma Simulation data using the IRVB geometry show peak signal levels ranging from ∼0.8 to ∼80 (∼0.36 to ∼26) mW/cm^2.

  3. Integrated axial and tangential serpentine cooling circuit in a turbine airfoil

    Science.gov (United States)

    Lee, Ching-Pang; Jiang, Nan; Marra, John J; Rudolph, Ronald J; Dalton, John P

    2015-05-05

    A continuous serpentine cooling circuit forming a progression of radial passages (44, 45, 46, 47A, 48A) between pressure and suction side walls (52, 54) in a MID region of a turbine airfoil (24). The circuit progresses first axially, then tangentially, ending in a last radial passage (48A) adjacent to the suction side (54) and not adjacent to the pressure side (52). The passages of the axial progression (44, 45, 46) may be adjacent to both the pressure and suction side walls of the airfoil. The next to last radial passage (47A) may be adjacent to the pressure side wall and not adjacent to the suction side wall. The last two radial passages (47A, 48A) may be longer along the pressure and suction side walls respectively than they are in a width direction, providing increased direct cooling surface area on the interiors of these hot walls.

  4. Optimizing of verification photographs by using the so-called tangential field technique

    International Nuclear Information System (INIS)

    Proske, H.; Merte, H.; Kratz, H.

    1991-01-01

When irradiating under high-voltage conditions, verification photographs prove difficult to take if the gantry position is not aligned to 0° or 180°, since the patient is being irradiated obliquely. Under these conditions it is extremely difficult to align the X-ray cassette perpendicular to the central beam of the therapeutic radiation. This results in, among other things, misprojections, so that definite dimensions of portrayed organ structures become practically impossible to determine. This paper describes how we have solved these problems on our high-voltage units (tele-gamma cobalt unit and linear accelerator). By using simple accessories, the determination of dimensions of organ structures, as shown on the verification photographs, is made possible. We illustrate our method using the so-called tangential field technique for irradiation of mamma carcinoma. (orig.) [de]

  5. A Modified EPA Method 1623 that Uses Tangential Flow Hollow-Fiber Ultrafiltration and Heat Dissociation Steps to Detect Waterborne Cryptosporidum and Giardia spp.

    Science.gov (United States)

This protocol describes the use of a tangential flow hollow-fiber ultrafiltration sample concentration system and a heat dissociation procedure as alternative steps for the detection of waterborne Cryptosporidium and Giardia species using EPA Method 1623.

  6. Geostrophic tripolar vortices in a two-layer fluid: Linear stability and nonlinear evolution of equilibria

    Science.gov (United States)

    Reinaud, J. N.; Sokolovskiy, M. A.; Carton, X.

    2017-03-01

    We investigate equilibrium solutions for tripolar vortices in a two-layer quasi-geostrophic flow. Two of the vortices are like-signed and lie in one layer. An opposite-signed vortex lies in the other layer. The families of equilibria can be spanned by the distance (called separation) between the two like-signed vortices. Two equilibrium configurations are possible when the opposite-signed vortex lies between the two other vortices. In the first configuration (called ordinary roundabout), the opposite signed vortex is equidistant to the two other vortices. In the second configuration (eccentric roundabouts), the distances are unequal. We determine the equilibria numerically and describe their characteristics for various internal deformation radii. The two branches of equilibria can co-exist and intersect for small deformation radii. Then, the eccentric roundabouts are stable while unstable ordinary roundabouts can be found. Indeed, ordinary roundabouts exist at smaller separations than eccentric roundabouts do, thus inducing stronger vortex interactions. However, for larger deformation radii, eccentric roundabouts can also be unstable. Then, the two branches of equilibria do not cross. The branch of eccentric roundabouts only exists for large separations. Near the end of the branch of eccentric roundabouts (at the smallest separation), one of the like-signed vortices exhibits a sharp inner corner where instabilities can be triggered. Finally, we investigate the nonlinear evolution of a few selected cases of tripoles.

  7. Statistical prediction of Late Miocene climate

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A; Gupta, S.M.

by making certain simplifying assumptions; for example, in modelling ocean currents, the geostrophic approximation is made. In the case of statistical prediction no such a priori assumption need be made. Statistical prediction comprises using observed data to establish empirical relations; the number of unknowns is typically smaller than the number of equations. In this case the equations are overdetermined, and therefore one has to look for a solution that best fits the sample data in a least squares sense.
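
    A minimal illustration of that least-squares step with synthetic numbers (the predictors, response, and coefficients below are hypothetical):

    ```python
    import numpy as np

    # m samples and n predictors with n + 1 << m give an
    # overdetermined linear system X c = y.
    rng = np.random.default_rng(1)
    m, n = 40, 3
    X = np.column_stack([np.ones(m), rng.normal(size=(m, n))])
    c_true = np.array([15.0, 2.0, -1.0, 0.5])
    y = X @ c_true + rng.normal(scale=0.3, size=m)

    c_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
    print("least-squares coefficients:", np.round(c_hat, 2))
    ```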

  8. The relevance of ''theory rich'' bridge assumptions

    NARCIS (Netherlands)

    Lindenberg, S

    1996-01-01

Actor models are increasingly being used as a form of theory building in sociology because they can better represent the causal mechanisms that connect macro variables. However, actor models need additional assumptions, especially so-called bridge assumptions, for filling in the relatively empty

  9. Purification of monoclonal antibodies from clarified cell culture fluid using Protein A capture continuous countercurrent tangential chromatography.

    Science.gov (United States)

    Dutta, Amit K; Tran, Travis; Napadensky, Boris; Teella, Achyuta; Brookhart, Gary; Ropp, Philip A; Zhang, Ada W; Tustian, Andrew D; Zydney, Andrew L; Shinkazh, Oleg

    2015-11-10

Recent studies using simple model systems have demonstrated that continuous countercurrent tangential chromatography (CCTC) has the potential to overcome many of the limitations of conventional Protein A chromatography using packed columns. The objective of this work was to optimize and implement a CCTC system for monoclonal antibody purification from clarified Chinese Hamster Ovary (CHO) cell culture fluid using a commercial Protein A resin. Several improvements were introduced to the previous CCTC system including the use of retentate pumps to maintain stable resin concentrations in the flowing slurry, the elimination of a slurry holding tank to improve productivity, and the introduction of an "after binder" to the binding step to increase antibody recovery. A kinetic binding model was developed to estimate the required residence times in the multi-stage binding step to optimize yield and productivity. Data were obtained by purifying two commercial antibodies from two different manufacturers, one with low titer (∼ 0.67 g/L) and one with high titer (∼ 6.9 g/L), demonstrating the versatility of the CCTC system. Host cell protein removal, antibody yields and purities were similar to those obtained with conventional column chromatography; however, the CCTC system showed much higher productivity. These results clearly demonstrate the capabilities of continuous countercurrent tangential chromatography for the commercial purification of monoclonal antibody products. Copyright © 2015 Elsevier B.V. All rights reserved.
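
    The residence-time estimate can be caricatured with a staged capture model. This is our illustration only (the paper's kinetic binding model and parameter values are not reproduced here): assuming pseudo-first-order binding at rate k_app, the fraction of antibody escaping N well-mixed stages of residence time τ each is (1 + k_app·τ)^(−N):

    ```python
    def unbound_fraction(k_app, tau, stages):
        """Fraction of antibody escaping `stages` well-mixed binding
        stages in series, with pseudo-first-order capture rate k_app
        (1/s) and residence time tau (s) per stage."""
        return (1.0 / (1.0 + k_app * tau)) ** stages

    # Hypothetical rate constant; scan residence times for a 4-stage binder.
    for tau in (30.0, 60.0, 120.0):
        print(f"tau = {tau:5.0f} s -> unbound {unbound_fraction(0.04, tau, 4):.4f}")
    ```

    Longer residence time per stage (or more stages) drives the unbound fraction down, which is the yield/productivity tradeoff the multi-stage design is meant to optimize.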

  10. Purification of monoclonal antibodies from clarified cell culture fluid using Protein A capture continuous countercurrent tangential chromatography

    Science.gov (United States)

    Dutta, Amit K.; Tran, Travis; Napadensky, Boris; Teella, Achyuta; Brookhart, Gary; Ropp, Philip A.; Zhang, Ada W.; Tustian, Andrew D.; Zydney, Andrew L.; Shinkazh, Oleg

    2015-01-01

Recent studies using simple model systems have demonstrated that Continuous Countercurrent Tangential Chromatography (CCTC) has the potential to overcome many of the limitations of conventional Protein A chromatography using packed columns. The objective of this work was to optimize and implement a CCTC system for monoclonal antibody purification from clarified Chinese Hamster Ovary (CHO) cell culture fluid using a commercial Protein A resin. Several improvements were introduced to the previous CCTC system including the use of retentate pumps to maintain stable resin concentrations in the flowing slurry, the elimination of a slurry holding tank to improve productivity, and the introduction of an “after binder” to the binding step to increase antibody recovery. A kinetic binding model was developed to estimate the required residence times in the multi-stage binding step to optimize yield and productivity. Data were obtained by purifying two commercial antibodies from two different manufacturers, one with low titer (~0.67 g/L) and one with high titer (~6.9 g/L), demonstrating the versatility of the CCTC system. Host cell protein removal, antibody yields and purities were similar to those obtained with conventional column chromatography; however, the CCTC system showed much higher productivity. These results clearly demonstrate the capabilities of continuous countercurrent tangential chromatography for the commercial purification of monoclonal antibody products. PMID:25747172

  11. Relationship between linear velocity and tangential push force while turning to change the direction of the manual wheelchair.

    Science.gov (United States)

    Hwang, Seonhong; Lin, Yen-Sheng; Hogaboom, Nathan S; Wang, Lin-Hwa; Koontz, Alicia M

    2017-08-28

Wheelchair propulsion is a major cause of upper limb pain and injuries for manual wheelchair users with spinal cord injuries (SCIs). Few studies have investigated wheelchair turning biomechanics on natural ground surfaces. The purpose of this study was to investigate the relationship between tangential push force and linear velocity of the wheelchair during the turning portions of propulsion. Using an instrumented handrim, velocity and push force data were recorded for 25 subjects while they propelled their own wheelchairs on a concrete floor along a figure-eight-shaped course at maximum velocity. The braking force (1.03 N) of the inside wheel while turning was larger than all other push forces (p<0.05). Larger changes in squared velocity while turning were significantly correlated with higher propulsive and braking forces used at the pre-turning, turning, and post-turning phases (p<0.05). Subjects with less change of velocity while turning needed less braking force to maneuver themselves successfully and safely around the turns. Considering the magnitude and direction of the tangential force applied to the wheel, there appear to be higher risks of injury and instability for upper limb joints when braking the inside wheel to turn. The results provide insight into wheelchair setup and mobility skills training for wheelchair users.

  12. Assumptions and Policy Decisions for Vital Area Identification Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myungsu; Bae, Yeon-Kyoung; Lee, Youngseung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

U.S. Nuclear Regulatory Commission and IAEA guidance indicate that certain assumptions and policy questions should be addressed in a Vital Area Identification (VAI) process. Korea Hydro and Nuclear Power conducted a VAI based on the current Design Basis Threat and engineering judgement to identify APR1400 vital areas. Some of the assumptions were inherited from the Probabilistic Safety Assessment (PSA), as the sabotage logic model was based on the PSA logic tree and equipment location data. This paper illustrates some important assumptions and policy decisions for the APR1400 VAI analysis. Assumptions and policy decisions can be overlooked at the beginning stage of VAI; however, they should be carefully reviewed and discussed among engineers, plant operators, and regulators. Through the APR1400 VAI process, some of the policy concerns and assumptions for analysis were applied based on document research and expert panel discussions. It was also found that there are more assumptions to define in further studies for other types of nuclear power plants. One of these assumptions is mission time, which was inherited from PSA.

  13. Underlying assumptions and core beliefs in anorexia nervosa and dieting.

    Science.gov (United States)

    Cooper, M; Turner, H

    2000-06-01

To investigate assumptions and beliefs in anorexia nervosa and dieting. The Eating Disorder Belief Questionnaire (EDBQ) was administered to patients with anorexia nervosa, dieters and female controls. The patients scored more highly than the other two groups on assumptions about weight and shape, assumptions about eating and negative self-beliefs. The dieters scored more highly than the female controls on assumptions about weight and shape. The cognitive content of anorexia nervosa (both assumptions and negative self-beliefs) differs from that found in dieting. Assumptions about weight and shape may also distinguish dieters from female controls.

  14. Lung and heart dose volume analyses with CT simulator in tangential field irradiation of breast cancer

    International Nuclear Information System (INIS)

    Das, Indra J.; Cheng, Elizabeth C.; Fowble, Barbara

    1997-01-01

Objective: Radiation pneumonitis and cardiac effects are directly related to the irradiated lung and heart volumes in the treatment fields. The central lung distance (CLD) from a tangential breast radiograph is shown to be a significant indicator of ipsilateral irradiated lung volume, based on empirically derived functions whose accuracy depends on the actual volume measured in the treatment position. A simple and accurate linear relationship with CLD, together with a retrospective analysis of the pattern of lung and heart dose volume, is presented with actual volume data from a CT simulator in the treatment of breast cancer. Materials and Methods: The heart and lung volumes in the tangential treatment fields were analyzed in 45 consecutive (22 left and 23 right breast) patients referred for CT simulation of the cone-down treatment. All patients in this study were immobilized and placed on an inclined breast board in the actual treatment setup. Both arms were stretched uniformly overhead to avoid collision with the scanner aperture. Radiopaque marks were placed on the medial and lateral borders of the tangential fields. All patients were scanned in spiral mode with a slice width and thickness of 3 mm each. The lung and heart structures as well as the irradiated areas were delineated on each slice and the respective volumes were accurately measured. The treatment beam parameters were recorded and digitally reconstructed radiographs (DRRs) were generated for the CLD and analysis. Results: Table 1 shows the volume statistics of the patients in this study. There is a large variation in lung and heart volumes among patients. Due to differences in the shape of the right and left lungs, the percent irradiated volumes (PIV) differ. The PIV data have been shown to correlate with CLD with 2nd and 3rd degree polynomials; however, in this study a simple straight-line regression is used, which provides better confidence than the higher-order polynomials. The regression lines for the left and right
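
    The model-order point can be illustrated with np.polyfit on made-up (CLD, PIV) pairs (the values below are hypothetical; real ones come from the CT-simulator contours): the cubic buys little over the straight line while adding parameters:

    ```python
    import numpy as np

    # Hypothetical (CLD, percent irradiated lung volume) pairs,
    # for illustration only.
    cld = np.array([0.6, 1.0, 1.4, 1.8, 2.2, 2.6, 2.9])       # cm
    piv = np.array([4.0, 7.5, 10.9, 14.2, 18.1, 21.6, 24.3])  # %

    for deg in (1, 3):
        coef = np.polyfit(cld, piv, deg)
        rmse = np.sqrt(np.mean((piv - np.polyval(coef, cld)) ** 2))
        print(f"degree {deg}: RMSE = {rmse:.2f} %")
    ```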

  15. Vortex stability in a multi-layer quasi-geostrophic model: application to Mediterranean Water eddies

    Energy Technology Data Exchange (ETDEWEB)

    Carton, Xavier; Ménesguen, Claire; Meunier, Thomas [Laboratoire de Physique des Oceans, UBO/IFREMER/CNRS/IRD, Brest (France); Sokolovskiy, Mikhail [Institute of Water Problems of the RAS, Moscow (Russian Federation); Aguiar, Ana, E-mail: xcarton@univ-brest.fr [Instituto Dom Luiz, Universidade de Lisboa, Lisbon (Portugal)

    2014-12-01

    The stability of circular vortices to normal mode perturbations is studied in a multi-layer quasi-geostrophic model. The stratification is fitted on the Gulf of Cadiz where many Mediterranean Water (MW) eddies are generated. Observations of MW eddies are used to determine the parameters of the reference experiment; sensitivity tests are conducted around this basic case. The objective of the study is two-fold: (a) determine the growth rates and nonlinear evolutions of unstable perturbations for different three-dimensional (3D) velocity structures of the vortices, (b) check if the different structure of our idealized vortices, mimicking MW cyclones and anticyclones, can induce different stability properties in a model that conserves parity symmetry, and apply these results to observed MW eddies. The linear stability analysis reveals that, among many 3D distributions of velocity, the observed eddies are close to maximal stability, with instability time scales longer than 100 days (these time scales would be less than 10 days for vertically more sheared eddies). The elliptical deformation is most unstable for realistic eddies (the antisymmetric one dominates for small eddies and the triangular one for large eddies); the antisymmetric mode is stronger for cyclones than for anticyclones. Nonlinear evolutions of eddies with radii of about 30 km, and elliptically perturbed, lead to their re-organization into 3D tripoles; smaller eddies are stable and larger eddies break into 3D dipoles. Horizontally more sheared eddies are more unstable and sustain more asymmetric instabilities. In summary, few differences were found between cyclone and anticyclone stability, except for strong horizontal velocity shears. (paper)

  16. Vortex stability in a multi-layer quasi-geostrophic model: application to Mediterranean Water eddies

    International Nuclear Information System (INIS)

    Carton, Xavier; Ménesguen, Claire; Meunier, Thomas; Sokolovskiy, Mikhail; Aguiar, Ana

    2014-01-01

    The stability of circular vortices to normal mode perturbations is studied in a multi-layer quasi-geostrophic model. The stratification is fitted on the Gulf of Cadiz where many Mediterranean Water (MW) eddies are generated. Observations of MW eddies are used to determine the parameters of the reference experiment; sensitivity tests are conducted around this basic case. The objective of the study is two-fold: (a) determine the growth rates and nonlinear evolutions of unstable perturbations for different three-dimensional (3D) velocity structures of the vortices, (b) check if the different structure of our idealized vortices, mimicking MW cyclones and anticyclones, can induce different stability properties in a model that conserves parity symmetry, and apply these results to observed MW eddies. The linear stability analysis reveals that, among many 3D distributions of velocity, the observed eddies are close to maximal stability, with instability time scales longer than 100 days (these time scales would be less than 10 days for vertically more sheared eddies). The elliptical deformation is most unstable for realistic eddies (the antisymmetric one dominates for small eddies and the triangular one for large eddies); the antisymmetric mode is stronger for cyclones than for anticyclones. Nonlinear evolutions of eddies with radii of about 30 km, and elliptically perturbed, lead to their re-organization into 3D tripoles; smaller eddies are stable and larger eddies break into 3D dipoles. Horizontally more sheared eddies are more unstable and sustain more asymmetric instabilities. In summary, few differences were found between cyclone and anticyclone stability, except for strong horizontal velocity shears. (paper)

  17. The tangential breast match plane: Practical problems and solutions

    International Nuclear Information System (INIS)

    Norris, M.

    1989-01-01

    The three-field breast set-up, in which tangential oblique opposed fields are joined to an anterior supraclavicular field, has been the method of choice for treatment of breast cancer for many years. In the last several years many authors have suggested refinements to the technique that improve the accuracy with which fields join at a match plane. The three-field breast set-up, using a rotatable half-beam block is the technique used at our institution. In instituting this procedure, several practical problems were encountered. Due to the small collimator rotation angles used it is possible to clinically reverse the collimator angle without observing an error noticeable on fluoroscopy. A second error can occur when the table base angle is used to compensate for the incorrect collimator rotation. These potential sources of error can be avoided if a programmable calculator or computer program is used to assist the dosimetrist during the simulation. Utilization of fluoroscopy, digital table position displays and a caliper provide accurate input for the computer program. This paper will present a hybrid procedure that combines practical set-up procedures with the mathematical calculation of ideal angles to result in an accurate and practical approach to breast simulation

  18. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the ...

  19. HYPROLOG: A New Logic Programming Language with Assumptions and Abduction

    DEFF Research Database (Denmark)

    Christiansen, Henning; Dahl, Veronica

    2005-01-01

    We present HYPROLOG, a novel integration of Prolog with assumptions and abduction which is implemented in and partly borrows syntax from Constraint Handling Rules (CHR) for integrity constraints. Assumptions are a mechanism inspired by linear logic and taken over from Assumption Grammars. The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraints solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together...

  20. Microalgae fractionation using steam explosion, dynamic and tangential cross-flow membrane filtration.

    Science.gov (United States)

    Lorente, E; Hapońska, M; Clavero, E; Torras, C; Salvadó, J

    2017-08-01

    In this study, the microalga Nannochloropsis gaditana was subjected to an acid-catalysed steam explosion treatment and the resulting exploded material was subsequently fractionated to separate the different fractions (lipids, sugars and solids). Conventional and vibrational membrane setups were used with several polymeric commercial membranes. Two different routes were followed: 1) filtration + lipid solvent extraction and 2) lipid solvent extraction + filtration. Route 1 proved to be much better, since the membrane used for filtration passed the aqueous sugar phase while retaining the lipid-containing fraction; the subsequent extraction required much less solvent and gave a better recovery yield. Filtration allowed complete lipid rejection. Dynamic filtration improved permeability compared to tangential cross-flow filtration. The best membrane performance was achieved using a 5000 Da membrane with the dynamic system, obtaining a permeability of 6 L/h/m²/bar. Copyright © 2017 Elsevier Ltd. All rights reserved.
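
    To put the reported permeability in perspective, a back-of-the-envelope sizing calculation (our arithmetic; the transmembrane pressure and the target permeate flow are assumed values, not stated in the record):

        permeability = 6.0   # L/h/m^2/bar, best case reported (5000 Da membrane, dynamic system)
        tmp_bar = 2.0        # assumed transmembrane pressure
        flux = permeability * tmp_bar       # permeate flux: 12 L/h/m^2
        area = 100.0 / flux                 # membrane area for an assumed 100 L/h duty
        print(f"{flux} L/h/m^2 -> {area:.1f} m^2")  # 12.0 L/h/m^2 -> 8.3 m^2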

  1. Full-wave analysis using a tangential vector finite-element formulation of arbitrary cross-section transmission lines for millimeter and microwave applications

    Science.gov (United States)

    Helal, M.; Legier, J. F.; Pribetich, P.; Kennis, P.

    1994-06-01

    A tangential vector finite-element formulation is implemented to deal with arbitrary cross section and metallic strip shape. Classical planar transmission lines as well as nonconventional cross-section waveguides such as the new microshield line are treated. Effects on propagation characteristics for these lines are studied when the metallization shape is approximated by a lossy trapezoid area.

  2. Tangential Flow Ultrafiltration Allows Purification and Concentration of Lauric Acid-/Albumin-Coated Particles for Improved Magnetic Treatment.

    Science.gov (United States)

    Zaloga, Jan; Stapf, Marcus; Nowak, Johannes; Pöttler, Marina; Friedrich, Ralf P; Tietze, Rainer; Lyer, Stefan; Lee, Geoffrey; Odenbach, Stefan; Hilger, Ingrid; Alexiou, Christoph

    2015-08-14

    Superparamagnetic iron oxide nanoparticles (SPIONs) are frequently used for drug targeting, hyperthermia and other biomedical purposes. Recently, we reported lauric acid-/albumin-coated iron oxide nanoparticles SEON(LA-BSA), which were synthesized using excess albumin. For optimization of magnetic treatment applications, SPION suspensions need to be purified of excess surfactant and concentrated. Conventional methods for the purification and concentration of such ferrofluids often involve high shear stress and low purification rates for macromolecules like albumin. In this work, removal of albumin by low-shear-stress tangential ultrafiltration and its influence on SEON(LA-BSA) particles were studied. Hydrodynamic size, surface properties and, consequently, colloidal stability of the nanoparticles remained unchanged by filtration or concentration up to four-fold (v/v). Thereby, the saturation magnetization of the suspension can be increased from 446.5 A/m up to 1667.9 A/m. In vitro analysis revealed that cellular uptake of SEON(LA-BSA) changed only marginally. The specific absorption rate (SAR) was not greatly affected by concentration. In contrast, the maximum temperature Tmax in magnetic hyperthermia is greatly enhanced from 44.4 °C up to 64.9 °C by concentrating the particles up to 16.9 mg/mL total iron. Taken together, tangential ultrafiltration is feasible for purifying and concentrating complex hybrid-coated SPION suspensions without negatively influencing specific particle characteristics. This enhances their potential for magnetic treatment.
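
    A quick plausibility check on the reported numbers (our arithmetic, not the authors'): for a dilute ferrofluid the saturation magnetization scales roughly linearly with particle concentration, so the measured increase should track the volumetric concentration factor:

        ms_before, ms_after = 446.5, 1667.9   # A/m, values from the record
        print(ms_after / ms_before)           # ~3.7, consistent with ~4x (v/v) concentration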

  3. An experimental and numerical study of endwall heat transfer in a turbine blade cascade including tangential heat conduction analysis

    Science.gov (United States)

    Ratto, Luca; Satta, Francesca; Tanda, Giovanni

    2018-06-01

    This paper presents an experimental and numerical investigation of heat transfer in the endwall region of a large-scale turbine cascade. The steady-state liquid crystal technique has been used to obtain the map of the heat transfer coefficient for a constant heat flux boundary condition. In the presence of two- and three-dimensional flows with significant spatial variations of the heat transfer coefficient, tangential heat conduction could lead to error in the heat transfer coefficient determination, since local heat fluxes at the wall-to-fluid interface tend to differ from point to point and surface temperatures tend to be smoothed out, thus making the uniform-heat-flux boundary condition difficult to achieve perfectly. For this reason, numerical simulations of flow and heat transfer in the cascade, including the effect of tangential heat conduction inside the endwall, have been performed. The major objective of the numerical simulations was to investigate the influence of wall heat conduction on the convective heat transfer coefficient determined during a nominal iso-flux heat transfer experiment and to interpret possible differences between numerical and experimental heat transfer results. Results are presented and discussed in terms of the local Nusselt number and a convenient wall heat flux function for two values of the Reynolds number (270,000 and 960,000).
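
    The quantities at stake can be written compactly (our notation; the paper's exact correction procedure may differ). For a thin endwall of thickness s and conductivity k_w dissipating a generated flux q''_gen, the locally convected flux and the resulting heat transfer coefficient and Nusselt number are

        \[ q''_{\mathrm{conv}} \approx q''_{\mathrm{gen}} + k_w s\, \nabla_{\parallel}^{2} T_w ,
           \qquad h = \frac{q''_{\mathrm{conv}}}{T_w - T_\infty},
           \qquad \mathrm{Nu} = \frac{h\,L}{k_f}, \]

    so at local hot spots (where the Laplacian of T_w is negative) tangential conduction carries heat away and the true convective flux falls below the nominal iso-flux value, which is the bias the simulations quantify.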

  4. Quantitative assessment of irradiated lung volume and lung mass in breast cancer patients treated with tangential fields in combination with deep inspiration breath hold (DIBH)

    International Nuclear Information System (INIS)

    Kapp, Karin Sigrid; Zurl, Brigitte; Stranzl, Heidi; Winkler, Peter

    2010-01-01

    Purpose: To compare the irradiated lung tissue volume and mass in patients with breast cancer treated with an optimized tangential-field technique with and without a deep inspiration breath-hold (DIBH) technique, and to assess the impact on the normal-tissue complication probability (NTCP). Material and Methods: Computed tomography datasets of 60 patients in normal breathing (NB) and subsequently in DIBH were compared. With a Real-Time Position Management Respiratory Gating System (RPM), the anteroposterior movement of the chest wall was monitored and lower and upper thresholds were defined. The ipsilateral lung and a restricted tangential region of the lung were delineated and the mean and maximum doses calculated. Irradiated lung tissue mass was computed based on density values. NTCP for lung was calculated using a modified Lyman-Kutcher-Burman (LKB) model. Results: The mean dose to the ipsilateral lung in DIBH versus NB was significantly reduced, by 15%. The mean lung mass in the restricted area receiving ≤ 20 Gy (M20) was reduced by 17% in DIBH, but associated with an increase in volume. NTCP showed an improvement of 20% in DIBH. The NTCP improvement proved to be independent of the individual breathing amplitude. Conclusion: The delineation of a restricted area provides the lung mass calculation in patients treated with tangential fields. DIBH reduces the ipsilateral lung dose by inflation, so that less tissue remains in the irradiated region, and its efficiency is supported by a decrease of NTCP. (orig.)
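
    As an illustration of the LKB machinery (a generic sketch, not the modified model of the paper; the parameter values are the commonly cited Burman et al. lung fits, not necessarily those used in this study):

        import math

        def lkb_ntcp(bin_doses_gy, bin_volumes, n=0.87, m=0.18, td50_gy=24.5):
            """Generic Lyman-Kutcher-Burman NTCP from a differential DVH;
            bin_volumes are fractional volumes summing to 1."""
            eud = sum(v * d ** (1.0 / n) for d, v in zip(bin_doses_gy, bin_volumes)) ** n
            t = (eud - td50_gy) / (m * td50_gy)
            return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF

        # toy two-bin lung DVH: 30% of volume at 18 Gy, 70% at 4 Gy
        print(lkb_ntcp([18.0, 4.0], [0.3, 0.7]))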

  5. Occupancy estimation and the closure assumption

    Science.gov (United States)

    Rota, Christopher T.; Fletcher, Robert J.; Dorazio, Robert M.; Betts, Matthew G.

    2009-01-01

    1. Recent advances in occupancy estimation that adjust for imperfect detection have provided substantial improvements over traditional approaches and are receiving considerable use in applied ecology. To estimate and adjust for detectability, occupancy modelling requires multiple surveys at a site and requires the assumption of 'closure' between surveys, i.e. no changes in occupancy between surveys. Violations of this assumption could bias parameter estimates; however, little work has assessed model sensitivity to violations of this assumption or how commonly such violations occur in nature. 2. We apply a modelling procedure that can test for closure to two avian point-count data sets in Montana and New Hampshire, USA, that exemplify time-scales at which closure is often assumed. These data sets illustrate different sampling designs that allow testing for closure but are currently rarely employed in field investigations. Using a simulation study, we then evaluate the sensitivity of parameter estimates to changes in site occupancy and evaluate a power analysis developed for sampling designs that is aimed at limiting the likelihood of closure. 3. Application of our approach to point-count data indicates that habitats may frequently be open to changes in site occupancy at time-scales typical of many occupancy investigations, with 71% and 100% of species investigated in Montana and New Hampshire respectively, showing violation of closure across time periods of 3 weeks and 8 days respectively. 4. Simulations suggest that models assuming closure are sensitive to changes in occupancy. Power analyses further suggest that the modelling procedure we apply can effectively test for closure. 5. Synthesis and applications. Our demonstration that sites may be open to changes in site occupancy over time-scales typical of many occupancy investigations, combined with the sensitivity of models to violations of the closure assumption, highlights the importance of properly addressing
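
    The role of the closure assumption is easiest to see in the standard single-season occupancy likelihood (our notation): for a site with detection history y_1, ..., y_K over K surveys, occupancy probability ψ and detection probability p,

        \[ P(y) = \psi \prod_{k=1}^{K} p^{y_k} (1-p)^{1-y_k}
                  \;+\; (1-\psi)\, \mathbf{1}\!\left[ \textstyle\sum_k y_k = 0 \right], \]

    which is valid only if the site's occupancy state is fixed across all K surveys; if sites can be colonized or vacated between surveys, estimates of ψ and p from this likelihood are biased, which is exactly the sensitivity the simulations probe.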

  6. Estimates of gradient Richardson numbers from vertically smoothed data in the Gulf Stream region

    Directory of Open Access Journals (Sweden)

    Paul van Gastel

    2004-12-01

    We use several hydrographic and velocity sections crossing the Gulf Stream to examine how the gradient Richardson number, Ri, is modified due to both vertical smoothing of the hydrographic and/or velocity fields and the assumption of parallel or geostrophic flow. Vertical smoothing of the original (25 m interval) velocity field leads to a substantial increase in the Ri mean value, of the same order as the smoothing factor, while its standard deviation remains approximately constant. This contrasts with very minor changes in the distribution of the Ri values due to vertical smoothing of the density field over similar lengths. Mean geostrophic Ri values remain always above the actual unsmoothed Ri values, commonly one to two orders of magnitude larger, but the standard deviation is typically a factor of five larger in geostrophic than in actual Ri values. At high vertical wavenumbers (length scales below 3 m) the geostrophic shear only leads to near critical conditions in already rather mixed regions. At these scales, hence, the major contributor to shear mixing is likely to come from the interaction of the background flow with internal waves. At low vertical wavenumbers (scales above 25 m) the ageostrophic motions provide the main source for shear, with cross-stream movements having a minor but non-negligible contribution. These large-scale motions may be associated with local accelerations taking place during frontogenetic phases of meanders.
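
    A minimal sketch of the computation being discussed (ours, not the authors' code): the gradient Richardson number is the ratio of the squared buoyancy frequency to the squared vertical shear, and smoothing the velocity profile before differencing weakens the shear and inflates Ri, roughly in proportion to the smoothing factor, as the record reports.

        import numpy as np

        def gradient_ri(z, u, v, rho, rho0=1025.0, g=9.81, smooth=1):
            """Ri = N^2 / S^2 from 1-D profiles (z positive upward).
            `smooth` > 1 applies a running mean to u, v before differencing."""
            if smooth > 1:
                k = np.ones(smooth) / smooth
                u = np.convolve(u, k, mode="same")
                v = np.convolve(v, k, mode="same")
            n2 = -(g / rho0) * np.gradient(rho, z)                # buoyancy frequency squared
            s2 = np.gradient(u, z) ** 2 + np.gradient(v, z) ** 2  # squared vertical shear
            return n2 / s2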

  7. Contextuality under weak assumptions

    International Nuclear Information System (INIS)

    Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D

    2017-01-01

    The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove

  8. 40 CFR 265.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ..., STORAGE, AND DISPOSAL FACILITIES Financial Requirements § 265.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the...

  9. 40 CFR 144.66 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) UNDERGROUND INJECTION CONTROL PROGRAM Financial Responsibility: Class I Hazardous Waste Injection Wells § 144.66 State assumption of responsibility. (a) If a State either assumes legal...

  10. Dosimetric comparison of treatment planning systems in irradiation of breast with tangential fields

    International Nuclear Information System (INIS)

    Cheng, C.-W.; Das, Indra J.; Tang, Walter; Chang Sha; Tsai, J.-S.; Ceberg, Crister; Gaspie, Barbara de; Singh, Rajinder; Fein, Douglas A.; Fowble, Barbara

    1997-01-01

    Purpose: The objectives of this study are: (1) to investigate the dosimetric differences of the different treatment planning systems (TPS) in breast irradiation with tangential fields, and (2) to study the effect of beam characteristics on dose distributions in tangential breast irradiation with 6 MV linear accelerators from different manufacturers. Methods and Materials: Nine commercial and two university-based TPS were evaluated in this study. The computed tomographic scans of three representative patients, labeled 'small', 'medium' and 'large' based on their respective chest wall separations in the central axis (CAX) plane, were used. For each patient, the tangential fields were set up in each TPS. The CAX dose distribution was optimized separately for each TPS, with lung correction, based on the same set of optimization conditions. The isodose distributions in two other off-axis planes, one 6 cm cephalic and the other 6 cm caudal to the CAX plane, were also computed. To investigate the effect of beam characteristics on dose distributions, a three-dimensional TPS was used to calculate the isodose distributions for three different linear accelerators (the Varian Clinac 6/100, the Siemens MD2 and the Philips SL/7) for the three patients. In addition, dose distributions obtained with 6 MV X-rays from two different accelerators, the Varian Clinac 6/100 and the Varian 2100C, were compared. Results: For all TPS, the dose distributions in all three planes agreed qualitatively to within ± 5% for the 'small' and the 'medium' patients. For the 'large' patient, all TPS agreed to within ± 4% on the CAX plane. The isodose distributions in the caudal plane differed by ± 5% among all TPS. In the cephalic plane, in which the patient separation is much larger than that in the CAX plane, six TPS correctly calculated the dose distribution, showing a cold spot in the center of the breast contour. The other five TPS showed that the center of the breast received adequate dose. Isodose

  11. 40 CFR 264.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... FACILITIES Financial Requirements § 264.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure, post-closure care, or...

  12. 40 CFR 261.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... Excluded Hazardous Secondary Materials § 261.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure or liability...

  13. 40 CFR 267.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... STANDARDIZED PERMIT Financial Requirements § 267.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure care or liability...

  14. Measuring irradiated lung and heart area in breast tangential fields using a simulator-based computerized tomography device

    International Nuclear Information System (INIS)

    Mallik, Raj; Fowler, Allan; Hunt, Peter

    1995-01-01

    Purpose: To illustrate the use of a simulator-based computerized tomography system (SIMCT) in the simulation and planning of tangential breast fields. Methods and Materials: Forty-five consecutive patients underwent treatment planning using a radiotherapy simulator with a computerized tomography attachment. One to three scans were obtained for each patient; calculations were made on the central axis scan. Due to the wide aperture of this system all patients could be scanned in the desired treatment position with the arm abducted 90°. Using available software tools the area of lung and/or heart included within the tangential fields was calculated. The greatest perpendicular distance (GPD) from the chest wall to the posterior field edge was also measured. Results: The mean GPD for the group was 25.40 mm, with 71% of patients having GPDs of ≤ 30 mm. The mean area of irradiated lung was 1780 sq mm, which represented 18.0% of the total ipsilateral lung area seen in the central axis. Seven of the patients with left-sided tumors had an average of 1314 sq mm of heart irradiated in the central axis. This represented 11.9% of total heart area in these patients. Conclusion: Measurements of irradiated lung and heart area can be easily and accurately made using a SIMCT device. Such measurements may help identify those patients potentially at risk for lung or heart toxicity as a consequence of their treatment. A major advantage of this device is the ability to scan patients in the actual treatment position.

  15. Measuring irradiated lung and heart area in breast tangential fields using a simulator-based computerized tomography device

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, Raj; Fowler, Allan; Hunt, Peter

    1995-01-15

    Purpose: To illustrate the use of a simulator-based computerized tomography system (SIMCT) in the simulation and planning of tangential breast fields. Methods and Materials: Forty-five consecutive patients underwent treatment planning using a radiotherapy simulator with a computerized tomography attachment. One to three scans were obtained for each patient; calculations were made on the central axis scan. Due to the wide aperture of this system all patients could be scanned in the desired treatment position with the arm abducted 90°. Using available software tools the area of lung and/or heart included within the tangential fields was calculated. The greatest perpendicular distance (GPD) from the chest wall to the posterior field edge was also measured. Results: The mean GPD for the group was 25.40 mm, with 71% of patients having GPDs of ≤ 30 mm. The mean area of irradiated lung was 1780 sq mm, which represented 18.0% of the total ipsilateral lung area seen in the central axis. Seven of the patients with left-sided tumors had an average of 1314 sq mm of heart irradiated in the central axis. This represented 11.9% of total heart area in these patients. Conclusion: Measurements of irradiated lung and heart area can be easily and accurately made using a SIMCT device. Such measurements may help identify those patients potentially at risk for lung or heart toxicity as a consequence of their treatment. A major advantage of this device is the ability to scan patients in the actual treatment position.

  16. A tangential CO{sub 2} laser collective scattering system for measuring short-scale turbulent fluctuations in the EAST superconducting tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Cao, G.M., E-mail: gmcao@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, PO Box 1126, Hefei, Anhui 230031 (China); Li, Y.D. [Institute of Plasma Physics, Chinese Academy of Sciences, PO Box 1126, Hefei, Anhui 230031 (China); Li, Q. [School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou 510006 (China); Zhang, X.D.; Sun, P.J.; Wu, G.J.; Hu, L.Q. [Institute of Plasma Physics, Chinese Academy of Sciences, PO Box 1126, Hefei, Anhui 230031 (China)

    2014-12-15

    Highlights: • A tangential CO{sub 2} laser collective scattering system was installed on EAST for the first time. • It can measure short-scale fluctuations in different regions simultaneously. • It can be used to study broadband fluctuations, QC fluctuations, MHD phenomena, etc. - Abstract: A tangential CO{sub 2} laser collective scattering system has been installed for the first time on the Experimental Advanced Superconducting Tokamak (EAST) to measure short-scale turbulent fluctuations in EAST plasmas. The system can measure fluctuations at up to four distinct wavenumbers simultaneously, ranging from 10 cm{sup −1} to 26 cm{sup −1}, corresponding to k{sub ⊥}ρ{sub s}∼1.5−4.3. The system is designed based on the oblique propagation of the probe beam with respect to the magnetic field, so that enhanced spatial localization can be achieved by taking full advantage of turbulence anisotropy and magnetic field inhomogeneity. Simultaneous measurements of turbulent fluctuations in different regions can be taken with a special optical setup. Initial measurements indicate rich short-scale turbulent dynamics in both the core and outer regions of EAST plasmas. The system will be a powerful tool for investigating the features of short-scale turbulent fluctuations in EAST plasmas.
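
    The two quoted ranges are mutually consistent (our arithmetic): dividing the normalized wavenumbers by the probed wavenumbers gives the implied ion sound gyroradius,

        \[ \rho_s \simeq \frac{k_\perp \rho_s}{k_\perp}
           \approx \frac{1.5}{10\ \mathrm{cm^{-1}}}
           \approx \frac{4.3}{26\ \mathrm{cm^{-1}}}
           \approx 0.15\text{--}0.17\ \mathrm{cm}, \]

    i.e. both ends of the measured band imply the same ρ_s of roughly 1.5-1.7 mm, as they must.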

  17. Structure optimization of a grain impact piezoelectric sensor and its application for monitoring separation losses on tangential-axial combine harvesters.

    Science.gov (United States)

    Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang

    2015-01-14

    Grain separation loss is a key parameter for evaluating the performance of combine harvesters, and also a dominant factor for automatically adjusting their major working parameters. Traditional separation-loss monitoring methods rely mainly on manual effort, which requires high labor intensity. With recent advancements in sensor technology, electronics and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real time. Firstly, we developed a mathematical monitoring model based on detailed comparative data analysis of different feeding quantities. Then, we developed a grain impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in the voltage amplitude and rise time of collision signals. To improve the sensor performance, theoretical analysis was performed from a structural vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal axial combine harvester and carried out grain separation loss monitoring experiments in North China; the results showed that the monitoring method was feasible, with a maximum relative measurement error of 4.63% when harvesting rice.

  18. Structure Optimization of a Grain Impact Piezoelectric Sensor and Its Application for Monitoring Separation Losses on Tangential-Axial Combine Harvesters

    Science.gov (United States)

    Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang

    2015-01-01

    Grain separation loss is a key parameter for evaluating the performance of combine harvesters, and also a dominant factor for automatically adjusting their major working parameters. Traditional separation-loss monitoring methods rely mainly on manual effort, which requires high labor intensity. With recent advancements in sensor technology, electronics and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real time. Firstly, we developed a mathematical monitoring model based on detailed comparative data analysis of different feeding quantities. Then, we developed a grain impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in the voltage amplitude and rise time of collision signals. To improve the sensor performance, theoretical analysis was performed from a structural vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal axial combine harvester and carried out grain separation loss monitoring experiments in North China; the results showed that the monitoring method was feasible, with a maximum relative measurement error of 4.63% when harvesting rice. PMID:25594592
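
    The record states that grain impacts are discriminated from other material by differences in voltage amplitude and rise time; the decision logic can be sketched as follows (the threshold values are hypothetical placeholders of ours, since the record gives none):

        def is_grain_impact(peak_v, rise_time_us, v_lo=0.5, v_hi=3.0, rise_max_us=50.0):
            """Classify a collision signal as a grain impact when its peak voltage
            falls inside the grain band and its rise time is short enough."""
            return v_lo <= peak_v <= v_hi and rise_time_us <= rise_max_us

        signals = [(1.2, 20.0), (4.0, 80.0), (0.8, 35.0)]      # (peak V, rise time in us)
        print(sum(is_grain_impact(v, t) for v, t in signals))  # counts 2 grain impacts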

  19. Numerical Analysis of a New Pressure Sensor for Measuring Normal and Tangential Stresses in the Roll Gap

    DEFF Research Database (Denmark)

    Presz, Wojtek P.; Wanheim, Tarras

    2003-01-01

    The paper is in Polish. Orig. title: "Analiza numeryczna nowego czujnika do pomiaru nacisków i naprężeń stycznych w procesie walcowania". A new strain gauge sensor for measuring normal and tangential stresses in the contact arc of a rolling process has been designed and constructed. The complicated load history of the sensor results in complicated deformation patterns of it, and consequently the calibration procedure of the sensor should cover a wide range of loading cases, and would thus be very difficult and time-consuming to carry out. As an alternative to this, a FEM simulative experiment has...

  20. Formalization and Analysis of Reasoning by Assumption

    OpenAIRE

    Bosse, T.; Jonker, C.M.; Treur, J.

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been speci...

  1. Cardiac and pulmonary dose reduction for tangentially irradiated breast cancer, utilizing deep inspiration breath-hold with audio-visual guidance, without compromising target coverage

    International Nuclear Information System (INIS)

    Vikstroem, Johan; Hjelstuen, Mari H.B.; Mjaaland, Ingvil; Dybvik, Kjell Ivar

    2011-01-01

    Background and purpose. Cardiac disease and pulmonary complications are documented risk factors in tangential breast irradiation. Respiratory gating radiotherapy provides a possibility to substantially reduce cardiopulmonary doses. This CT planning study quantifies the reduction of radiation doses to the heart and lung, using deep inspiration breath-hold (DIBH). Patients and methods. Seventeen patients with early breast cancer, referred for adjuvant radiotherapy, were included. For each patient two CT scans were acquired; the first during free breathing (FB) and the second during DIBH. The scans were monitored by the Varian RPM respiratory gating system. Audio coaching and visual feedback (audio-visual guidance) were used. The treatment planning of the two CT studies was performed with conformal tangential fields, focusing on good coverage (V95>98%) of the planning target volume (PTV). Dose-volume histograms were calculated and compared. Doses to the heart, left anterior descending (LAD) coronary artery, ipsilateral lung and the contralateral breast were assessed. Results. Compared to FB, the DIBH-plans obtained lower cardiac and pulmonary doses, with equal coverage of PTV. The average mean heart dose was reduced from 3.7 to 1.7 Gy and the number of patients with >5% heart volume receiving 25 Gy or more was reduced from four to one of the 17 patients. With DIBH the heart was completely out of the beam portals for ten patients, with FB this could not be achieved for any of the 17 patients. The average mean dose to the LAD coronary artery was reduced from 18.1 to 6.4 Gy. The average ipsilateral lung volume receiving more than 20 Gy was reduced from 12.2 to 10.0%. Conclusion. Respiratory gating with DIBH, utilizing audio-visual guidance, reduces cardiac and pulmonary doses for tangentially treated left sided breast cancer patients without compromising the target coverage

  2. Cardiac and pulmonary dose reduction for tangentially irradiated breast cancer, utilizing deep inspiration breath-hold with audio-visual guidance, without compromising target coverage

    Energy Technology Data Exchange (ETDEWEB)

    Vikstroem, Johan; Hjelstuen, Mari H.B.; Mjaaland, Ingvil; Dybvik, Kjell Ivar (Dept. of Radiotherapy, Stavanger Univ. Hospital, Stavanger (Norway)), e-mail: vijo@sus.no

    2011-01-15

    Background and purpose. Cardiac disease and pulmonary complications are documented risk factors in tangential breast irradiation. Respiratory gating radiotherapy provides a possibility to substantially reduce cardiopulmonary doses. This CT planning study quantifies the reduction of radiation doses to the heart and lung, using deep inspiration breath-hold (DIBH). Patients and methods. Seventeen patients with early breast cancer, referred for adjuvant radiotherapy, were included. For each patient two CT scans were acquired; the first during free breathing (FB) and the second during DIBH. The scans were monitored by the Varian RPM respiratory gating system. Audio coaching and visual feedback (audio-visual guidance) were used. The treatment planning of the two CT studies was performed with conformal tangential fields, focusing on good coverage (V95>98%) of the planning target volume (PTV). Dose-volume histograms were calculated and compared. Doses to the heart, left anterior descending (LAD) coronary artery, ipsilateral lung and the contralateral breast were assessed. Results. Compared to FB, the DIBH-plans obtained lower cardiac and pulmonary doses, with equal coverage of PTV. The average mean heart dose was reduced from 3.7 to 1.7 Gy and the number of patients with >5% heart volume receiving 25 Gy or more was reduced from four to one of the 17 patients. With DIBH the heart was completely out of the beam portals for ten patients, with FB this could not be achieved for any of the 17 patients. The average mean dose to the LAD coronary artery was reduced from 18.1 to 6.4 Gy. The average ipsilateral lung volume receiving more than 20 Gy was reduced from 12.2 to 10.0%. Conclusion. Respiratory gating with DIBH, utilizing audio-visual guidance, reduces cardiac and pulmonary doses for tangentially treated left sided breast cancer patients without compromising the target coverage

  3. DDH-Like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2012-01-01

    We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of $\mathbb{F}_{q}$ "in the exponent" of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring $R_f=\mathbb{F}_{q}[X]/(f)$ ... Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption. This can be seen as an alternative to the known family of k-LIN assumptions....
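
    For orientation, the standard DDH assumption being generalized states that, in a cyclic group G = <g> of prime order q,

        \[ (g,\; g^{a},\; g^{b},\; g^{ab}) \;\approx_{c}\; (g,\; g^{a},\; g^{b},\; g^{c}),
           \qquad a, b, c \leftarrow \mathbb{F}_q \ \text{uniformly at random}, \]

    and the variants studied here instead draw the exponents from an extension ring of F_q, encoding ring elements coordinate-wise in the exponent.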

  4. A framework for the organizational assumptions underlying safety culture

    International Nuclear Information System (INIS)

    Packer, Charles

    2002-01-01

    The safety culture of the nuclear organization can be addressed at the three levels of culture proposed by Edgar Schein. The industry literature provides a great deal of insight at the artefact and espoused value levels, although as yet it remains somewhat disorganized. There is, however, an overall lack of understanding of the assumption level of safety culture. This paper describes a possible framework for conceptualizing the assumption level, suggesting that safety culture is grounded in unconscious beliefs about the nature of the safety problem, its solution and how to organize to achieve the solution. Using this framework, the organization can begin to uncover the assumptions at play in its normal operation, decisions and events and, if necessary, engage in a process to shift them towards assumptions more supportive of a strong safety culture. (author)

  5. The Immoral Assumption Effect: Moralization Drives Negative Trait Attributions.

    Science.gov (United States)

    Meindl, Peter; Johnson, Kate M; Graham, Jesse

    2016-04-01

    Jumping to negative conclusions about other people's traits is judged as morally bad by many people. Despite this, across six experiments (total N = 2,151), we find that multiple types of moral evaluations--even evaluations related to open-mindedness, tolerance, and compassion--play a causal role in these potentially pernicious trait assumptions. Our results also indicate that moralization affects negative-but not positive-trait assumptions, and that the effect of morality on negative assumptions cannot be explained merely by people's general (nonmoral) preferences or other factors that distinguish moral and nonmoral traits, such as controllability or desirability. Together, these results suggest that one of the more destructive human tendencies--making negative assumptions about others--can be caused by the better angels of our nature. © 2016 by the Society for Personality and Social Psychology, Inc.

  6. Formalization and analysis of reasoning by assumption.

    Science.gov (United States)

    Bosse, Tibor; Jonker, Catholijn M; Treur, Jan

    2006-01-02

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been specified, some of which are considered characteristic for the reasoning pattern, whereas some other properties can be used to discriminate among different approaches to the reasoning. These properties have been automatically checked for the traces acquired in experiments undertaken. The approach turned out to be beneficial from two perspectives. First, checking characteristic properties contributes to the empirical validation of a theory on reasoning by assumption. Second, checking discriminating properties allows the analyst to identify different classes of human reasoners. 2006 Lawrence Erlbaum Associates, Inc.

  7. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    International Nuclear Information System (INIS)

    R.E. Sweeney

    2001-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  8. The stable model semantics under the any-world assumption

    OpenAIRE

    Straccia, Umberto; Loyer, Yann

    2004-01-01

    The stable model semantics has become a dominating approach to complete the knowledge provided by a logic program by means of the Closed World Assumption (CWA). The CWA asserts that any atom whose truth-value cannot be inferred from the facts and rules is supposed to be false. This assumption is orthogonal to the so-called Open World Assumption (OWA), which asserts that every such atom's truth is supposed to be unknown. The topic of this paper is to be more fine-grained. Indeed, the objec...
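
    A one-line example makes the contrast concrete (our example, not the paper's): for the program

        \[ P = \{\, p \leftarrow q \,\}, \]

    neither q nor p is derivable, so the CWA assigns both the value false, whereas the OWA leaves both unknown; the any-world assumption of the title generalizes this choice of default truth value.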

  9. Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document

    International Nuclear Information System (INIS)

    Sweeney, R.

    2000-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  10. CFD analysis of temperature imbalance in superheater/reheater region of tangentially coal-fired boiler

    Science.gov (United States)

    Zainudin, A. F.; Hasini, H.; Fadhil, S. S. A.

    2017-10-01

    This paper presents a CFD analysis of the flow, velocity and temperature distributions in a 700 MW tangentially coal-fired boiler operating in Malaysia. The main objective of the analysis is to gain insight into what occurs inside the boiler, so as to understand the inherent steam temperature imbalance problem. The results show that the root cause of the problem is the residual swirl in the horizontal pass. The deflection of the residual swirl, caused by the sudden reduction and expansion of the flow cross-sectional area, produces a velocity deviation between the left and right sides of the boiler. This in turn results in a flue gas temperature imbalance, which has often caused tube leaks in the superheater/reheater region. Therefore, eliminating the residual swirl or restraining it from being diverted might help to alleviate the problem.

  11. Burning low volatile fuel in tangentially fired furnaces with fuel rich/lean burners

    International Nuclear Information System (INIS)

    Wei Xiaolin; Xu Tongmo; Hui Shien

    2004-01-01

    Pulverized coal combustion in tangentially fired furnaces with fuel rich/lean burners was investigated for three low volatile coals. The burners were operated at varied values of Nd, the ratio of the coal concentration in the fuel-rich stream to that in the fuel-lean stream. The wall temperature distributions at various positions were measured and analyzed. The carbon content in the char and the NOx emission were measured under various conditions. The new burners with fuel-rich/lean streams were utilized in a thermal power station to burn low volatile coal. The results show that the Nd value has a significant influence on the distributions of temperature and char burnout. There exists an optimal Nd value at which the carbon content in the char and the NOx emission are relatively low. Coal ignition and NOx emission in the power station were improved after retrofitting the burners.

  12. Wide-angle ITER-prototype tangential infrared and visible viewing system for DIII-D

    Energy Technology Data Exchange (ETDEWEB)

    Lasnier, C. J., E-mail: lasnier@LLNL.gov; Allen, S. L.; Ellis, R. E.; Fenstermacher, M. E.; McLean, A. G.; Meyer, W. H.; Morris, K.; Seppala, L. G. [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, California 94551-0808 (United States); Crabtree, K. [College of Optics, University of Arizona, Tucson, Arizona 85721 (United States); Van Zeeland, M. A. [General Atomics, P.O. Box 85608, San Diego, California 92186-5608 (United States)

    2014-11-15

    An imaging system with a wide-angle tangential view of the full poloidal cross-section of the tokamak in simultaneous infrared and visible light has been installed on DIII-D. The optical train includes three polished stainless steel mirrors in vacuum, which view the tokamak through an aperture in the first mirror, similar to the design concept proposed for ITER. A dichroic beam splitter outside the vacuum separates visible and infrared (IR) light. Spatial calibration is accomplished by warping a CAD-rendered image to align with landmarks in a data image. The IR camera provides scrape-off layer heat flux profile deposition features in diverted and inner-wall-limited plasmas, such as heat flux reduction in pumped radiative divertor shots. Demonstration of the system to date includes observation of fast-ion losses to the outer wall during neutral beam injection, and shows reduced peak wall heat loading with disruption mitigation by injection of a massive gas puff.

  13. Sampling Assumptions in Inductive Generalization

    Science.gov (United States)

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…
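
    One common formalization of the sampling assumption (our rendering of the strong-sampling account used in this literature): if examples are drawn uniformly at random from the true concept h, then

        \[ p(x \mid h) = \begin{cases} 1/|h|, & x \in h, \\ 0, & \text{otherwise}, \end{cases}
           \qquad\Rightarrow\qquad
           p(x_1, \dots, x_n \mid h) = |h|^{-n} \]

    for n consistent examples, so smaller hypotheses are exponentially favored (the size principle); under weak sampling the likelihood is constant over all consistent hypotheses and generalization gradients are correspondingly flatter.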

  14. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  15. SU-F-T-422: Detection of Optimal Tangential Partial Arc Span for VMAT Planning in Intact Left-Breast Treatment

    Energy Technology Data Exchange (ETDEWEB)

    Giri, U; Sarkar, B; Munshi, A; Kaur, H; Jassal, K; Rathinamuthu, S; Kumar, S; Ganesh, T; Mohanti, B [Fortis Memorial Research Institute, Gurgaon, Haryana (India)

    2016-06-15

    Purpose: This study was designed to investigate an appropriate partial-arc span for intact left-breast irradiation with VMAT planning. Methods: Four cases of carcinoma of the intact left breast were chosen randomly for this study. Both a medial tangential and a left-lateral tangential arc (G20°, G25°, G30°, G35°, G40°) were used, of the same length and bilaterally symmetric. For each patient a base plan was generated for the 30° arc, and the other arc plans were generated keeping all plan parameters the same and changing only the arc span. All plans were generated on the Monaco treatment planning system (V 5.00.02) for a dose of 50 Gy in 25 fractions. PTV contours were clipped 3 mm from the skin (patient). All plans were normalized so that 95% of the prescription dose covered 96% of the PTV volume. Results: Mean MU for 20°, 25°, 30°, 35° and 40° were 509 ± 18.8, 529.1 ± 20.2, 544.4 ± 20.8, 579.1 ± 51.8 and 607.2 ± 40.2, respectively; the mean hot spot (volume covered by 105% of the prescription dose) was 2.9 ± 1.2, 3.7 ± 3.0, 1.5 ± 1.7, 1.3 ± 0.6 and 0.4 ± 0.4; the mean contralateral breast dose (cGy) was 180.4 ± 242.3, 71.5 ± 52.7, 76.2 ± 58.8, 85.9 ± 70.5 and 90.7 ± 70.1; the mean heart dose (cGy) was 285.8 ± 87.2, 221.2 ± 62.8, 274.5 ± 95.5, 234.8 ± 73.8 and 263.2 ± 81.6; V20 for the ipsilateral lung was 15.4 ± 5.3, 14.3 ± 3.6, 15.3 ± 2.9, 14.2 ± 3.9 and 14.7 ± 3.2; and V5 for the ipsilateral lung was 33.9 ± 8.2, 31.0 ± 3.5, 42.6 ± 15.6, 36.4 ± 12.9 and 37.0 ± 7.5. Conclusion: The study concluded that the optimal arc span for tangential intact-breast treatment was 30°, since larger arc spans gave lower isodose spill in the ipsilateral lung while smaller arcs gave a heterogeneous dose distribution in the PTV.

  16. Legal assumptions for private company claim for additional (supplementary payment

    Directory of Open Access Journals (Sweden)

    Šogorov Stevan

    2011-01-01

    The subject matter of analysis in this article is the legal assumptions that must be met in order for a private company to be able to call for additional payments. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payments in the formation contract, or in a general resolution of the shareholders' meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders' meeting which creates individual obligations for additional payments. The third assumption is defined as definiteness regarding the sum of the payment and its due date. The sending of the claim by the relevant company body is set as the fourth legal assumption for the realization of the company's right to claim additional payments from a member of the private company.

  17. Tangential vs. defined radiotherapy in early breast cancer treatment without axillary lymph node dissection. A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Nitsche, Mirko [Zentrum fuer Strahlentherapie und Radioonkologie, Bremen (Germany); Universitaet Kiel, Klinik fuer Strahlentherapie, Karl-Lennert-Krebscentrum, Kiel (Germany); Temme, Nils; Foerster, Manuela; Reible, Michael [Zentrum fuer Strahlentherapie und Radioonkologie, Bremen (Germany); Hermann, Robert Michael [Zentrum fuer Strahlentherapie und Radioonkologie, Bremen (Germany); Medizinische Hochschule Hannover, Abteilung Strahlentherapie und Spezielle Onkologie, Hannover (Germany)

    2014-08-15

    Recent studies have demonstrated low regional recurrence rates in early-stage breast cancer omitting axillary lymph node dissection (ALND) in patients who have positive nodes in sentinel lymph node dissection (SLND). This finding has triggered an active discussion about the effect of radiotherapy within this approach. The purpose of this study was to analyze the dose distribution in the axilla in standard tangential radiotherapy (SRT) for breast cancer and the effects on normal tissue exposure when anatomic level I-III axillary lymph node areas are included in the tangential radiotherapy field configuration. We prospectively analyzed the dosimetric treatment plans from 51 consecutive women with early-stage breast cancer undergoing radiotherapy. We compared and analyzed the SRT and the defined radiotherapy (DRT) methods for each patient. The clinical target volume (CTV) of SRT included the breast tissue without specific contouring of lymph node areas, whereas the CTV of DRT included the level I-III lymph node areas. We evaluated the dose given in SRT covering the axillary lymph node areas of level I-III as contoured in DRT. The mean V{sub D95%} of the entire level I-III lymph node area in SRT was 50.28% (range, 37.31-63.24%), V{sub D45Gy} was 70.1% (54.8-85.4%), and V{sub D40Gy} was 83.5% (72.3-94.8%). A significant difference was observed in lung and heart exposure between SRT and DRT. The V{sub 20Gy} and V{sub 30Gy} of the right and the left lung were significantly higher in DRT than in SRT (p < 0.001). The mean heart dose was significantly lower in SRT (3.93 vs. 4.72 Gy, p = 0.005). We demonstrated a relevant dose exposure of the axilla in SRT that should substantially reduce local recurrences. Furthermore, we demonstrated a significant increase in lung and heart exposure when including the axillary lymph node regions in the tangential radiotherapy field set-up. (orig.)
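
    The V-metrics quoted above are simple dose-volume-histogram functionals; a minimal sketch of how such a value is computed from voxel doses (ours; the 50.4 Gy prescription in the example is an assumed value, not taken from the study):

        import numpy as np

        def v_at_least(voxel_doses_gy, threshold_gy):
            """Percent of a structure's voxels receiving at least `threshold_gy`
            (e.g. V20Gy; for V_D95% pass 0.95 * prescription dose)."""
            d = np.asarray(voxel_doses_gy)
            return 100.0 * np.count_nonzero(d >= threshold_gy) / d.size

        axilla = np.array([38.0, 42.5, 47.0, 51.0, 30.0])  # toy voxel doses in Gy
        print(v_at_least(axilla, 0.95 * 50.4))             # V_D95% -> 20.0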

  18. Idaho National Engineering Laboratory installation roadmap assumptions document

    International Nuclear Information System (INIS)

    1993-05-01

    This document is a composite of roadmap assumptions developed for the Idaho National Engineering Laboratory (INEL) by the US Department of Energy Idaho Field Office and subcontractor personnel as a key element in the implementation of the Roadmap Methodology for the INEL Site. The development and identification of these assumptions is an important factor in planning basis development and establishes the planning baseline for all subsequent roadmap analysis at the INEL

  19. Experimental and finite element study of the effect of temperature and moisture on the tangential tensile strength and fracture behavior in timber logs

    DEFF Research Database (Denmark)

    Larsen, Finn; Ormarsson, Sigurdur

    2014-01-01

    Timber is normally dried by kiln drying, in the course of which moisture-induced stresses and fractures can occur. Cracks occur primarily in the radial direction due to tangential tensile strength (TSt) that exceeds the strength of the material. The present article reports on experiments and numerical simulations by finite element modeling (FEM) concerning the TSt and fracture behavior of Norway spruce under various climatic conditions. Thin log disc specimens were studied to simplify the description of the moisture flow in the samples. The specimens designed for TSt tests were acclimatized to a moisture content (MC) of 18% before the tests at 20°C, 60°C, and 90°C were carried out. The maximum stress results of the disc simulations by FEM were compared with the experimental strength results at the same temperature levels. There is a rather good agreement between the results of modeling

  20. Deep Borehole Field Test Requirements and Controlled Assumptions.

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion.

  1. School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey

    Science.gov (United States)

    Sabanci, Ali

    2008-01-01

    This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…

  2. Major Assumptions of Mastery Learning.

    Science.gov (United States)

    Anderson, Lorin W.

    Mastery learning can be described as a set of group-based, individualized, teaching and learning strategies based on the premise that virtually all students can and will, in time, learn what the school has to teach. Inherent in this description are assumptions concerning the nature of schools, classroom instruction, and learners. According to the…

  3. 7 CFR 772.10 - Transfer and assumption-AMP loans.

    Science.gov (United States)

    2010-01-01

    ..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS SERVICING MINOR PROGRAM LOANS § 772.10 Transfer and assumption—AMP loans. (a) Eligibility. The Agency may approve transfers and assumptions of AMP loans when: (1) The...

  4. Flurbiprofen Axetil Enhances Analgesic Effects of Sufentanil and Attenuates Postoperative Emergence Agitation and Systemic Proinflammation in Patients Undergoing Tangential Excision Surgery

    Directory of Open Access Journals (Sweden)

    Wujun Geng

    2015-01-01

    Full Text Available Objective. Our present study tested whether flurbiprofen axetil could reduce perioperative sufentanil consumption and provide postoperative analgesia with a decrease in emergence agitation and systemic proinflammatory cytokine release. Methods. Ninety patients undergoing tangential excision surgery were randomly assigned to three groups: (1) preoperative dose of 100 mg flurbiprofen axetil and a postoperative dose of 2 μg/kg sufentanil and 10 mL placebo by patient-controlled analgesia (PCA) pump, (2) preoperative dose of 100 mg flurbiprofen axetil and a postoperative dose of 2 μg/kg sufentanil and 100 mg flurbiprofen axetil by PCA pump, and (3) 10 mL placebo and a postoperative dose of 2 μg/kg sufentanil and 10 mL placebo by PCA pump. Results. Preoperative administration of flurbiprofen axetil decreased postoperative tramadol consumption and the visual analog scale at 4, 6, 12, and 24 h after surgery, which were further decreased by postoperative administration of flurbiprofen axetil. Furthermore, flurbiprofen axetil attenuated the emergence agitation score and Ramsay score at 0, 5, and 10 min after extubation and reduced the TNF-α and interleukin-6 (IL-6) levels at 24 and 48 h after the operation. Conclusion. Flurbiprofen axetil enhances the analgesic effects of sufentanil and attenuates emergence agitation and systemic proinflammation in patients undergoing tangential excision surgery.

  5. Flurbiprofen Axetil Enhances Analgesic Effects of Sufentanil and Attenuates Postoperative Emergence Agitation and Systemic Proinflammation in Patients Undergoing Tangential Excision Surgery.

    Science.gov (United States)

    Geng, Wujun; Hong, Wandong; Wang, Junlu; Dai, Qinxue; Mo, Yunchang; Shi, Kejian; Sun, Jiehao; Qin, Jinling; Li, Mei; Tang, Hongli

    2015-01-01

    Our present study tested whether flurbiprofen axetil could reduce perioperative sufentanil consumption and provide postoperative analgesia with a decrease in emergence agitation and systemic proinflammatory cytokine release. Ninety patients undergoing tangential excision surgery were randomly assigned to three groups: (1) preoperative dose of 100 mg flurbiprofen axetil and a postoperative dose of 2 μg/kg sufentanil and 10 mL placebo by patient-controlled analgesia (PCA) pump, (2) preoperative dose of 100 mg flurbiprofen axetil and a postoperative dose of 2 μg/kg sufentanil and 100 mg flurbiprofen axetil by PCA pump, and (3) 10 mL placebo and a postoperative dose of 2 μg/kg sufentanil and 10 mL placebo by PCA pump. Preoperative administration of flurbiprofen axetil decreased postoperative tramadol consumption and the visual analog scale at 4, 6, 12, and 24 h after surgery, which were further decreased by postoperative administration of flurbiprofen axetil. Furthermore, flurbiprofen axetil attenuated the emergence agitation score and Ramsay score at 0, 5, and 10 min after extubation and reduced the TNF-α and interleukin-6 (IL-6) levels at 24 and 48 h after the operation. Flurbiprofen axetil enhances the analgesic effects of sufentanil and attenuates emergence agitation and systemic proinflammation in patients undergoing tangential excision surgery.

  6. Status of the tangentially fired LIMB Demonstration Program at Yorktown Unit No. 2: An update

    International Nuclear Information System (INIS)

    Clark, J.P.; Gogineni, M.R.; Koucky, R.W.; Gootzait, E.; Lachapelle, D.G.

    1992-01-01

    Combustion Engineering, Inc., under EPA sponsorship, is conducting a program to demonstrate furnace sorbent injection on a tangentially fired, coal-burning utility boiler, Virginia Power's 180 MW(e) Yorktown Unit No. 2. The overall objective of the program is to demonstrate significant reductions in sulfur dioxide (SO2) and nitrogen oxides (NOx) while minimizing any negative impacts on boiler performance. Engineering and procurement activities and baseline testing have been completed. Construction and installation of the sorbent injection and low-NOx equipment is nearly complete. An 8-month demonstration of furnace sorbent injection plus flue gas humidification will be conducted in 1992. Details of the sorbent injection concept to be tested at Yorktown, results of baseline testing, overall demonstration program organization and schedule, and preliminary plans for the 8-month demonstration test are discussed in the paper

  7. Dosimetric improvements following 3D planning of tangential breast irradiation

    International Nuclear Information System (INIS)

    Aref, Amr; Thornton, Dale; Youssef, Emad; He, Tony; Tekyi-Mensah, Samuel; Denton, Lori; Ezzell, Gary

    2000-01-01

    Purpose: To evaluate the dosimetric difference between a simple radiation therapy plan utilizing a single contour and a more complex three-dimensional (3D) plan utilizing multiple contours, lung inhomogeneity correction, and dose-based compensators. Methods and Materials: This is a study of the radiation therapy (RT) plans of 85 patients with early breast cancer. All patients were considered for breast-conserving management and treated by conventional tangential fields technique. Two plans were generated for each patient. The first RT plan was based on a single contour taken at the central axis and utilized two wedges. The second RT plan was generated by using the 3D planning system to design dose-based compensators after lung inhomogeneity correction had been made. The endpoints of the study were the comparison between the volumes receiving greater than 105% and greater than 110% of the reference dose, as well as the magnitude of the treated volume maximum dose. Dosimetric improvement was defined to be of significant value if the volume receiving > 105% of one plan was reduced by at least 50% with the absolute difference between the volumes being 5% or greater. Results: The dosimetric improvements in 49 3D plans (58%) were considered of significant value. Patients' field separation and breast size did not predict the magnitude of improvement in dosimetry. Conclusion: Dose-based compensator plans significantly reduced the volumes receiving > 105%, >110%, and volume maximum dose.
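
    The significance criterion above combines a relative and an absolute threshold. A minimal sketch of how it could be coded is given below; the function name and percentage units are assumptions of this illustration, not taken from the paper.

```python
def is_significant_improvement(v105_simple, v105_3d):
    """Return True if the 3D plan reduces the volume receiving >105%
    of the reference dose by at least 50%, with an absolute difference
    of at least 5 percentage points (volumes given in % of breast volume)."""
    if v105_simple <= 0:
        return False
    relative_reduction = (v105_simple - v105_3d) / v105_simple
    absolute_difference = v105_simple - v105_3d
    return relative_reduction >= 0.5 and absolute_difference >= 5.0

# Example: a single-contour plan with 14% of the volume above 105%,
# reduced to 6% by the compensator plan, meets both thresholds.
print(is_significant_improvement(14.0, 6.0))  # True
```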

  8. Purification of infectious adenovirus in two hours by ultracentrifugation and tangential flow filtration

    International Nuclear Information System (INIS)

    Ugai, Hideyo; Yamasaki, Takahito; Hirose, Megumi; Inabe, Kumiko; Kujime, Yukari; Terashima, Miho; Liu, Bingbing; Tang, Hong; Zhao, Mujun; Murata, Takehide; Kimura, Makoto; Pan, Jianzhi; Obata, Yuichi; Hamada, Hirofumi; Yokoyama, Kazunari K.

    2005-01-01

    Adenoviruses are excellent vectors for gene transfer and are used extensively for high-level expression of the products of transgenes in living cells. The development of simple and rapid methods for the purification of stable infectious recombinant adenoviruses (rAds) remains a challenge. We report here a method for the purification of infectious adenovirus type 5 (Ad5) that involves ultracentrifugation on a cesium chloride gradient at 604,000g for 15 min at 4 °C and tangential flow filtration. The entire procedure requires less than two hours and infectious Ad5 can be recovered at levels higher than 64% of the number of plaque-forming units (pfu) in the initial crude preparation of viruses. We have obtained titers of infectious purified Ad5 of 1.35 × 10^10 pfu/ml and a ratio of particle titer to infectious titer of seven. The method described here allows the rapid purification of rAds for studies of gene function in vivo and in vitro, as well as the rapid purification of Ad5.

  9. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    Full Text Available FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix for dealing with the large cone angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction which is determined by CT parameters and the gantry-tilted angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction can provide more scanning flexibility in clinical CT scanning and is efficient in computation. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm can achieve better image quality for gantry-tilted CT image reconstruction.
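
    In FDK-type reconstruction, each row of projection data is ramp-filtered before weighted backprojection; the modification discussed above changes the direction along which that filtering is applied. The sketch below shows only the standard FFT-based 1D ramp filter, with names and the toy data assumed.

```python
import numpy as np

def ramp_filter_rows(projection):
    """Apply a 1D ramp filter along each detector row via the FFT.

    In FDK-type algorithms this step precedes weighted backprojection;
    the gantry-tilted variant above changes the direction along which
    the data are resampled before this filter is applied.
    """
    spectrum = np.fft.fft(projection, axis=1)
    ramp = np.abs(np.fft.fftfreq(projection.shape[1]))  # ideal |f| response
    return np.real(np.fft.ifft(spectrum * ramp, axis=1))

# Toy usage: filter a 4-row, 256-sample set of projections.
sinogram = np.random.rand(4, 256)
filtered = ramp_filter_rows(sinogram)
```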

  10. Assessment of cardiac exposure in left-tangential breast irradiation

    International Nuclear Information System (INIS)

    Vees, H.; Bigler, R.; Gruber, G.; Bieri, S.

    2011-01-01

    Purpose. - To assess the value of treatment-planning related parameters, namely the breast volume, the distance of the inferior field border to diaphragm, and the cardio-thoracic ratio, for left-tangential breast irradiation. Patients and methods. - Treatment plans of 27 consecutive left-sided breast cancer patients after breast-conserving surgery were evaluated for several parameters concerning heart irradiation. We measured the heart distance respective to the cardio-thoracic ratio and the distance of the inferior field border to diaphragm, as well as the breast volume in correlation with the irradiated heart volume. Results. - The mean heart and left breast volumes were 504 cm³ and 672.8 cm³, respectively. The mean heart diameter was 13.4 cm, the mean cardio-thoracic ratio 0.51, and the mean distance of the inferior field border to diaphragm was 1.4 cm. Cardio-thoracic ratio (p = 0.01), breast volume (p = 0.0002), distance of the inferior field border to diaphragm (p = 0.02) and central lung distance (p = 0.02) were significantly correlated with the measured heart distance. A significant correlation was also found between cardio-thoracic ratio, breast volume and distance of the inferior field border to diaphragm with the irradiated heart volume measured by V10, V20 and V40. Conclusion. - The verification of parameters like cardio-thoracic ratio, distance of the inferior field border to diaphragm and breast volume in left-sided breast cancer patients may help in determining which patients could benefit from more complex planning techniques such as intensity-modulated radiotherapy to reduce the risk of late cardiac injury. (authors)

  11. Operation of a tangential bolometer on the PBX tokamak

    International Nuclear Information System (INIS)

    Paul, S.F.; Fonck, R.J.; Schmidt, G.L.

    1987-04-01

    A compact 15-channel bolometer array that views plasma emission tangentially across the midplane has been installed on the PBX tokamak to supplement a 19-channel poloidal array which views the plasma perpendicular to the toroidal direction. By comparing measurements from these arrays, poloidal asymmetries in the emission profile can be assessed. The detector array consists of 15 discrete 2-mm x 2-mm Thinistors, a mixed semiconductor material whose temperature coefficient of resistance is relatively high. The accumulated heat incident on a detector gives rise to a change in the resistance in each active element. Operated in tandem with an identical blind detector, the resistance in each pair is compared in a Wheatstone bridge circuit. The variation in voltage resulting from the change in resistance is amplified, stored on a CAMAC transient recorder during the plasma discharge, and transferred to a VAX data acquisition computer. The instantaneous power is obtained by digitally smoothing and differentiating the signals in time, with suitable compensation for the cooling of the detector over the course of a plasma discharge. The detectors are "free standing," i.e., they are supported only by their electrical leads. Having no substrate in contact with the detector reduces the response time and increases the time it takes for the detector to dissipate its accumulated heat, reducing the compensation for cooling required in the data analysis. The detectors were absolutely calibrated with a tungsten-halogen filament lamp and were found to vary by ±3%. The irradiance profiles are inverted to reveal the radially resolved emitted power density from the plasma, which is typically in the 0.1 to 0.5 W/cm³ range.
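
    Since the detectors integrate incident heat, the instantaneous power follows from smoothing and differentiating the bridge signal while adding back a cooling term. A minimal sketch under assumed names is below; the calibration constant and cooling time constant are placeholders, not values from the paper.

```python
import numpy as np

def bolometer_power(signal, dt, tau_cool, kappa):
    """Recover incident power from an integrating bolometer trace.

    signal   : bridge voltage proportional to accumulated heat (V)
    dt       : sample spacing (s)
    tau_cool : detector cooling time constant (s), assumed known
    kappa    : calibration constant (W per V/s), from the lamp calibration
    """
    # Digital smoothing (simple moving average) before differentiating,
    # since raw differentiation amplifies noise.
    kernel = np.ones(5) / 5.0
    smooth = np.convolve(signal, kernel, mode="same")
    dsdt = np.gradient(smooth, dt)
    # Cooling compensation: heat lost through the leads appears as a
    # decay term proportional to the accumulated signal.
    return kappa * (dsdt + smooth / tau_cool)
```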

  12. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam; Shi, Yuexiang; Gao, Xin

    2014-01-01

    of the transductive semi-supervised algorithms takes all the three semi-supervised assumptions, i.e., smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue…

  13. Incorporating assumption deviation risk in quantitative risk assessments: A semi-quantitative approach

    International Nuclear Information System (INIS)

    Khorsandi, Jahon; Aven, Terje

    2017-01-01

    Quantitative risk assessments (QRAs) of complex engineering systems are based on numerous assumptions and expert judgments, as there is limited information available for supporting the analysis. In addition to sensitivity analyses, the concept of assumption deviation risk has been suggested as a means for explicitly considering the risk related to inaccuracies and deviations in the assumptions, which can significantly impact the results of the QRAs. However, challenges remain for its practical implementation, considering the number of assumptions and magnitude of deviations to be considered. This paper presents an approach for integrating an assumption deviation risk analysis as part of QRAs. The approach begins with identifying the safety objectives for which the QRA aims to support, and then identifies critical assumptions with respect to ensuring the objectives are met. Key issues addressed include the deviations required to violate the safety objectives, the uncertainties related to the occurrence of such events, and the strength of knowledge supporting the assessments. Three levels of assumptions are considered, which include assumptions related to the system's structural and operational characteristics, the effectiveness of the established barriers, as well as the consequence analysis process. The approach is illustrated for the case of an offshore installation. - Highlights: • An approach for assessing the risk of deviations in QRA assumptions is presented. • Critical deviations and uncertainties related to their occurrence are addressed. • The analysis promotes critical thinking about the foundation and results of QRAs. • The approach is illustrated for the case of an offshore installation.

  14. Emerging Assumptions About Organization Design, Knowledge And Action

    Directory of Open Access Journals (Sweden)

    Alan Meyer

    2013-12-01

    Full Text Available Participants in the Organizational Design Community’s 2013 Annual Conference faced the challenge of “making organization design knowledge actionable.”  This essay summarizes the opinions and insights participants shared during the conference.  I reflect on these ideas, connect them to recent scholarly thinking about organization design, and conclude that seeking to make design knowledge actionable is nudging the community away from an assumption set based upon linearity and equilibrium, and toward a new set of assumptions based on emergence, self-organization, and non-linearity.

  15. A tangentially viewing visible TV system for the DIII-D divertor

    International Nuclear Information System (INIS)

    Fenstermacher, M.E.; Meyer, W.H.; Wood, R.D.; Nilson, D.G.; Ellis, R.; Brooks, N.H.

    1997-01-01

    A video camera system has been installed on the DIII-D tokamak for two-dimensional spatial studies of line emission in the lower divertor region. The system views the divertor tangentially at approximately the height of the X point through an outer port. At the tangency plane, the entire divertor from the inner wall to outside the DIII-D bias ring is viewed with spatial resolution of ∼1 cm. The image contains information from ∼90 deg of toroidal angle. In a recent upgrade, remotely controllable filter changers were added which have produced images from nominally identical discharges using different spectral lines. Software was developed to calculate the response function matrix of the optical system using distributed computing techniques and assuming toroidal symmetry. Standard sparse matrix algorithms are then used to invert the three-dimensional images onto a poloidal plane. Spatial resolution of the inverted images is 2 cm; higher resolution simply increases the size of the response function matrix. Initial results from a series of experiments with multiple identical discharges show that the emission from CII and CIII, which appears along the inner scrape-off layer above and below the X point during ELMing H mode, moves outward and becomes localized near the X point in radiative divertor operation induced by deuterium injection. copyright 1997 American Institute of Physics
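
    The inversion step described above amounts to solving a large sparse linear system that relates emissivity on a poloidal grid to camera pixels. A minimal sketch with SciPy's sparse least-squares solver follows; the response matrix here is a random stand-in, since building the real one requires the camera geometry.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Stand-in response matrix R (pixels x poloidal cells): assuming toroidal
# symmetry, each pixel integrates emissivity along its tangential sight line.
n_pixels, n_cells = 5000, 900
R = sparse_random(n_pixels, n_cells, density=0.01, format="csr", random_state=0)

true_emissivity = np.random.rand(n_cells)
image = R @ true_emissivity                  # forward model: line integrals

# Invert the tangential image onto the poloidal plane
# (damped least squares as a simple regularization).
emissivity = lsqr(R, image, damp=1e-3)[0]
```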

  16. Capturing Assumptions while Designing a Verification Model for Embedded Systems

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    A formal proof of a system correctness typically holds under a number of assumptions. Leaving them implicit raises the chance of using the system in a context that violates some assumptions, which in return may invalidate the correctness proof. The goal of this paper is to show how combining

  17. Seasonal variability and geostrophic circulation in the eastern Mediterranean as revealed through a repeated XBT transect

    Directory of Open Access Journals (Sweden)

    V. Zervakis

    Full Text Available The evolution of the upper thermocline on a section across the eastern Mediterranean was recorded bi-weekly through a series of XBT transects from Piraeus, Greece to Alexandria, Egypt, extending from October 1999 to October 2000 on board Voluntary Observing Ships in the framework of the Mediterranean Forecasting System Pilot Project. The data acquired provided valuable information on the seasonal variability of the upper ocean thermal structure at three different regions of the eastern Mediterranean: the Myrtoan, Cretan and Levantine Seas. Furthermore, the horizontal distance (~12 miles) between successive profiles provides enough spatial resolution to analyze mesoscale features, while the temporal distance between successive expeditions (2–4 weeks) allows us to study their evolution. Sub-basin scale features are identified using contemporaneous sea surface temperature satellite images. The cross-transect geostrophic velocity field and corresponding volume fluxes for several sub-basin scale features of the Levantine Sea are estimated by exploiting monthly θ/S diagrams from operational runs of the Princeton Ocean Model in use at NCMR. A southwestward transport in the proximity of the southeast tip of Crete was estimated at 1–3 Sv. The transport increases after the winter formation of dense intermediate water in the Cretan Sea strengthens the pressure gradient across the Cretan Straits. The Mersah-Matruh anticyclone was identified as a closed gyre carrying about 2–6 Sv. This feature was stable throughout the stratified period and disappeared from our records in March 2000. Finally, our data reveal the existence of an eastward-flowing coastal current along the North African coast, transporting a minimum of 1–2 Sv.

    Key words. Oceanography: physical (eddies and mesoscale processes; currents; marginal and semi-closed seas)
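
    Cross-transect geostrophic velocities like those above follow from the thermal-wind relation, integrated upward from an assumed level of no motion. The sketch below reduces the procedure to two density profiles and a station separation; treating the deepest common level as motionless is an assumption of this illustration, not necessarily the authors' choice.

```python
import numpy as np

OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

def geostrophic_velocity(rho_a, rho_b, z_up, dx, lat_deg, rho0=1025.0):
    """Cross-transect geostrophic velocity (m/s) between two stations.

    rho_a, rho_b : density profiles (kg/m^3) at the two stations
    z_up         : heights (m), ascending from the deepest common level
                   (negative below the surface); the deepest level is
                   taken as an assumed level of no motion
    dx           : station separation (m); lat_deg sets the Coriolis parameter
    """
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
    g = 9.81
    shear = -(g / (f * rho0)) * (rho_b - rho_a) / dx   # thermal wind: dv/dz
    v = np.zeros_like(shear)
    dz = np.diff(z_up)
    v[1:] = np.cumsum(0.5 * (shear[1:] + shear[:-1]) * dz)  # integrate upward
    return v

# Toy usage: two 500 m profiles, 20 km apart, at 35° N.
z = np.linspace(-500.0, 0.0, 51)
rho_a = 1029.0 - 0.004 * z
rho_b = 1029.0 - 0.0045 * z
v = geostrophic_velocity(rho_a, rho_b, z, dx=20e3, lat_deg=35.0)
```

    Integrating v over the section area between station pairs then gives the volume transport; 1 Sv = 10^6 m³/s, the unit of the figures quoted above.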

  18. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    Science.gov (United States)

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes their futility. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is no possibility, by definition, for a content-full moral discourse among moral strangers. It means that there is circularity in starting the inquiry with a definition of moral strangers, which implies that they do not share enough moral background or commitment to an authority to allow for reaching a moral agreement, and concluding that content-full morality is impossible among moral strangers. I argue that assuming traditions as solid and immutable structures that insulate people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge, besides the foundationalist one. Then, I examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how the assumptions have shaped Engelhardt's critique of the alternatives for the possibility of content-full secular bioethics.

  19. Critically Challenging Some Assumptions in HRD

    Science.gov (United States)

    O'Donnell, David; McGuire, David; Cross, Christine

    2006-01-01

    This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…

  20. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Full Text Available Classical Respondent-Driven Sampling (RDS) estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.

  1. The Arundel Assumption And Revision Of Some Large-Scale Maps ...

    African Journals Online (AJOL)

    The rather common practice of stating or using the Arundel Assumption without reference to appropriate mapping standards (except mention of its use for graphical plotting) is a major cause of inaccuracies in map revision. This paper describes an investigation to ascertain the applicability of the Assumption to the revision of ...

  2. Optimum Energy Extraction from Coherent Vortex Rings Passing Tangentially Over Flexible Plates

    Science.gov (United States)

    Pirnia, Alireza; Browning, Emily A.; Peterson, Sean D.; Erath, Byron D.

    2017-11-01

    Coherent vortical structures can incite self-sustained oscillations in flexible membranes. This concept has recently gained interest for energy extraction from ambient environments. In this study the special case of a vortex ring passing tangentially over a cantilevered flexible plate is investigated. This problem is governed by the Kirchhoff-Love plate equation, which can be expressed in terms of a non-dimensional mass parameter of the plate, non-dimensional pressure loading induced by the vortex ring, and a Strouhal (St) number which expresses the duration of pressure loading relative to the period of plate oscillation. For a plate with a fixed mass parameter immersed in a fluid environment, the St number specifies the beam dynamics and the energy exchange process. The aim of this study is to identify the St number corresponding to maximum energy exchange between plates and vortex rings. The energy exchange process between the vortex ring and the plate is investigated over a range of St values starting from 0.3. The maximum energy transfer is reported in each case and an empirical correlation is provided for predictive purposes. Supported by the National Science Foundation (NSF) under Grant No. CBET-1511761, and the Natural Sciences and Engineering Research Council of Canada (NSERC), under Grant No. 05778-2015.

  3. Causal Mediation Analysis: Warning! Assumptions Ahead

    Science.gov (United States)

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…

  4. Late Pleistocene sequence architecture on the geostrophic current-dominated southwest margin of the Ulleung Basin, East Sea

    Science.gov (United States)

    Choi, Dong-Lim; Shin, Dong-Hyeok; Kum, Byung-Cheol; Jang, Seok; Cho, Jin-Hyung; Jou, Hyeong-Tae; Jang, Nam-Do

    2018-06-01

    High-resolution multichannel seismic data were collected to identify depositional sequences on the southwestern shelf of the Ulleung Basin, where a unidirectional ocean current is dominant at water depths exceeding 130 m. Four aggradational stratigraphic sequences with a 100,000-year cycle were recognized since marine isotope stage (MIS) 10. These sequences consist only of lowstand systems tracts (LSTs) and falling-stage systems tracts (FSSTs). Prograding wedge-shaped deposits are present in the LSTs near the shelf break. Oblique progradational clinoforms of forced regressive deposits are present in the FSSTs on the outer continental shelf. Each FSST has non-uniform forced regressional stratal geometries, reflecting that the origins of sediments in each depositional sequence changed when sea level was falling. Slump deposits are characteristically developed in the upper layer of the FSSTs, and this was used as evidence to distinguish the sequence boundaries. The subsidence rates around the shelf break reached as much as 0.6 mm/year since MIS 10, which contributed to the well-preserved depositional sequence. During the Quaternary sea-level change, the water depth in the Korea Strait declined and the intensity of the Tsushima Current flowing near the bottom of the inner continental shelf increased. This resulted in greater erosion of sediments that were delivered to the outer continental shelf, which was the main cause of sediment deposition on the deep, low-angled outer shelf. Therefore, a depositional sequence formation model that consists of only FSSTs and LSTs, excluding highstand systems tracts (HSTs) and transgressive systems tracts (TSTs), best explains the depositional sequence beneath this shelf margin dominated by a geostrophic current.

  5. High-resolution Tangential AXUV Arrays for Radiated Power Density Measurements on NSTX-U

    Energy Technology Data Exchange (ETDEWEB)

    Delgado-Aparicio, L [PPPL]; Bell, R E [PPPL]; Faust, I [MIT]; Tritz, K [The Johns Hopkins University, Baltimore, MD, 21209, USA]; Diallo, A [PPPL]; Gerhardt, S P [PPPL]; Kozub, T A [PPPL]; LeBlanc, B P [PPPL]; Stratton, B C [PPPL]

    2014-07-01

    Precise measurements of the local radiated power density and total radiated power are a matter of the utmost importance for understanding the onset of impurity-induced instabilities and the study of particle and heat transport. Accounting of power balance is also needed for understanding the physics of various divertor configurations for present and future high-power fusion devices. Poloidal asymmetries in the impurity density can result from high Mach numbers and can impact the assessment of their flux-surface average and hence vary the estimates of P_rad(r, t) and ⟨Z_eff⟩; the latter is used in the calculation of the neoclassical conductivity and the interpretation of non-inductive and inductive current fractions. To this end, the bolometric diagnostic in NSTX-U will be upgraded, enhancing the midplane coverage and radial resolution with two tangential views, and adding a new set of poloidally-viewing arrays to measure the 2D radiation distribution. These systems are designed to contribute to the near- and long-term highest priority research goals for NSTX-U, which will integrate non-inductive operation at reduced collisionality with high pressure, long energy-confinement times, and a divertor solution with metal walls.

  6. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    Science.gov (United States)

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking.
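
    The misconception flagged above is testing normality of the raw variables instead of the model errors. A minimal sketch of the correct check, using statsmodels and SciPy as assumed tooling (the review itself prescribes no software), is:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)                   # a deliberately non-normal predictor
y = 2.0 + 0.5 * x + rng.normal(0, 1, 200)     # errors are what should be normal

model = sm.OLS(y, sm.add_constant(x)).fit()

# The normality assumption concerns the errors, so test the residuals,
# not x or y themselves (x here is uniform and would "fail" a normality
# test without invalidating the regression at all).
print(stats.shapiro(model.resid))   # test on residuals: appropriate
print(stats.shapiro(x))             # test on a predictor: beside the point
```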

  7. Unrealistic Assumptions in Economics: an Analysis under the Logic of Socioeconomic Processes

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2014-11-01

    Full Text Available The realism of assumptions is an ongoing debate within the philosophy of economics. One of the most referenced papers in this matter belongs to Milton Friedman. He defends the use of unrealistic assumptions, not only because of a pragmatic issue, but also the intrinsic difficulties of determining the extent of realism. On the other hand, realists have criticized (and still do today) the use of unrealistic assumptions - such as the assumption of rational choice, perfect information, homogeneous goods, etc. However, they did not accompany their statements with a proper epistemological argument that supports their positions. This work aims to show that the realism of (a particular sort of) assumptions is clearly relevant when examining economic models, since the system under study (the real economies) is not compatible with the logic of invariance and of mechanisms, but with the logic of possibility trees. Because of this, models will not function as tools for predicting outcomes, but as representations of alternative scenarios, whose similarity to the real world will be examined in terms of the verisimilitude of a class of model assumptions.

  8. Advantages of the technique with segmented fields for tangential breast irradiation

    International Nuclear Information System (INIS)

    Stefanovski, Zoran; Smichkoska, Snezhana; Petrova, Deva; Lazarova, Emilija

    2013-01-01

    In the case of breast cancer, the prominent role of radiation therapy is an established fact. Depending on the stage of the disease, the breast is most often irradiated with two tangential fields and a direct supraclavicular field. The planning target volume is defined through the recommendations in ICRU Reports 50 and 62. The basic ‘dogma’ of radiotherapy requires the dose in the target volume to be homogeneous. The favorable situation would be if the dose ranged between 95% and 107%; this, however, is often not possible to fulfil. A technique for enhancing the homogeneity of the isodose distribution is to use one or more additional fields, which increase the dose in the volume where it is too low. These fields are called segmented fields (a technique also known as ‘field in field’) because they occupy only part of the primary fields. In this study we show the influence of this technique on the improvement of dose homogeneity in the PTV region. The mean dose in the target volume was increased from 49.51 Gy to 50.79 Gy in favor of the plans with segmented fields, and the dose homogeneity (measured in standard deviations) was also improved: 1.69 vs. 1.30. The increase in the target volume encompassed by the 95% isodose was chosen as a parameter to characterize overall planning improvement. Thus, in our case, the improvement in dose coverage was from 93.19% to 97.06%. (Author)

  9. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Science.gov (United States)

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking. PMID:28533971

  10. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Directory of Open Access Journals (Sweden)

    Anja F. Ernst

    2017-05-01

    Full Text Available Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongfully held for a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA-recommendations. This paper appeals for a heightened awareness for and increased transparency in the reporting of statistical assumption checking.

  11. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    Science.gov (United States)

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  12. Evolution of Requirements and Assumptions for Future Exploration Missions

    Science.gov (United States)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team internal assumptions, planning system integration for early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways, and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station, or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explanations of the driving scenarios, constraints, or other issues that drive them.

  13. The value of setup portal films as an estimate of a patient's position throughout fractionated tangential breast irradiation: an on-line study

    International Nuclear Information System (INIS)

    McGee, Kiaran P.; Fein, Douglas A.; Hanlon, Alex L.; Schultheiss, Timothy E.; Fowble, Barbara L.

    1997-01-01

    Purpose: To determine if portal setup films are an accurate representation of a patient's position throughout the course of fractionated tangential breast irradiation. Methods and Materials: Thirteen patients undergoing external beam irradiation for T1-T2 infiltrating ductal carcinoma of the breast following excisional biopsy and axillary dissection were imaged using an on-line portal imaging device attached to a 6 MV linear accelerator. Medial and lateral tangential fields were imaged and a total of 139 fractions, 225 portal fields, and 4450 images were obtained. Interfractional and intrafractional variations for anatomical parameters including the central lung distance (CLD), central flash distance (CFD), and inferior central margin (ICM) were calculated from these images. A pooled estimate of the random error associated with a given treatment was determined by adding the interfractional and intrafractional standard deviations in quadrature. A 95% confidence level was assigned a value of two standard deviations of the random error estimate. Central lung distance, CFD, and ICM distances were then measured for all portal setup films. Significant differences were defined as occurring when the simulation-setup difference was greater than the 95% confidence value. Results: Differences between setup portal and simulation films were less than their 95% confidence values in 70 instances, indicating that 90% of the time these differences are a result of random differences in daily treatment positioning. Conclusions: In 90% of cases tested, initial portal setup films are an accurate representation of a patient's daily treatment setup.
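
    The pooled random error above adds the interfractional and intrafractional standard deviations in quadrature, with the 95% confidence level taken as two pooled standard deviations. A small sketch follows; the variable names and the example values are assumptions.

```python
import numpy as np

def pooled_random_error(sd_inter, sd_intra):
    """Pooled SD from interfractional and intrafractional components,
    added in quadrature as in the study above."""
    return np.sqrt(sd_inter**2 + sd_intra**2)

def confidence_95(sd_inter, sd_intra):
    """95% level approximated as two pooled standard deviations."""
    return 2.0 * pooled_random_error(sd_inter, sd_intra)

# Example: 2.0 mm interfractional and 1.5 mm intrafractional SD for CLD.
print(confidence_95(2.0, 1.5))  # 5.0 mm
```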

  14. Changing Assumptions and Progressive Change in Theories of Strategic Organization

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Hallberg, Niklas L.

    2017-01-01

    A commonly held view is that strategic organization theories progress as a result of a Popperian process of bold conjectures and systematic refutations. However, our field also witnesses vibrant debates or disputes about the specific assumptions that our theories rely on, and although these debates are often decoupled from the results of empirical testing, changes in assumptions seem closely intertwined with theoretical progress. Using the case of the resource-based view, we suggest that progressive change in theories of strategic organization may come about as a result of scholarly debate and dispute over what constitutes proper assumptions, even in the absence of corroborating or falsifying empirical evidence. We also discuss how changing assumptions may drive future progress in the resource-based view.

  15. Frequency and Magnitude Analysis of the Macro-instability Related Component of the Tangential Force Affecting Radial Baffles in a Stirred Vessel

    Directory of Open Access Journals (Sweden)

    P. Hasal

    2002-01-01

    Full Text Available Experimental data obtained by measuring the tangential component of force affecting radial baffles in a flat-bottomed cylindrical mixing vessel stirred with pitched blade impellers is analysed. The maximum mean tangential force is detected at the vessel bottom. The mean force value increases somewhat with decreasing impeller off-bottom clearance and is noticeably affected by the number of impeller blades. Spectral analysis of the experimental data clearly demonstrated the presence of its macro-instability (MI) related low-frequency component embedded in the total force at all values of impeller Reynolds number. The dimensionless frequency of the occurrence of the MI force component is independent of stirring speed, position along the baffle, number of impeller blades and liquid viscosity. Its mean value is about 0.074. The relative magnitude (QMI) of the MI-related component of the total force is evaluated by a combination of proper orthogonal decomposition (POD) and spectral analysis. The relative magnitude QMI was analysed as a function of the frequency of the impeller revolution, the axial position of the measuring point in the vessel, the number of impeller blades, the impeller off-bottom clearance, and liquid viscosity. Higher values of QMI are observed at higher impeller off-bottom clearance height and (generally) QMI decreases slightly with increasing impeller speed. The QMI value decreases in the direction from vessel bottom to liquid level. No evident difference was observed between 4-blade and 6-blade impellers. Liquid viscosity has only a marginal impact on the QMI value.
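
    One way to approximate the relative magnitude of the MI component is to integrate the force spectrum over a narrow band around the dimensionless frequency 0.074 reported above. The sketch below is spectral-only and therefore simpler than the authors' POD-plus-spectrum procedure; the band limits and segment length are assumptions.

```python
import numpy as np
from scipy.signal import welch

def mi_relative_magnitude(force, fs, n_impeller, band=(0.05, 0.10)):
    """Fraction of force-signal power in the macro-instability band.

    force      : tangential force samples
    fs         : sampling frequency (Hz)
    n_impeller : impeller speed (rev/s), used to non-dimensionalize frequency
    """
    f, pxx = welch(force, fs=fs, nperseg=1024)
    f_dim = f / n_impeller                    # dimensionless frequency
    mask = (f_dim >= band[0]) & (f_dim <= band[1])
    return pxx[mask].sum() / pxx.sum()        # power fraction in the MI band
```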

  16. Analytical and numerical calculation of magnetic field distribution in the slotted air-gap of tangential surface permanent-magnet motors

    Directory of Open Access Journals (Sweden)

    Boughrara Kamel

    2009-01-01

    Full Text Available This paper deals with the analytical and numerical analysis of the flux density distribution in the slotted air gap of permanent magnet motors with surface-mounted tangentially magnetized permanent magnets. Two methods for magnetostatic field calculations are developed. The first one is an analytical method in which the effect of stator slots is taken into account by modulating the magnetic field distribution by the complex relative air-gap permeance. The second one is a numerical method using 2-D finite element analysis with consideration of Dirichlet and anti-periodicity (periodicity) boundary conditions and Lagrange multipliers for simulation of movement. The results obtained by the analytical method are compared to the results of finite-element analysis.
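
    The analytical method's key step is modulating the slotless field solution by a complex relative permeance. A schematic sketch of that single operation is below; the permeance function itself is left as a placeholder, since its conformal-mapping derivation does not fit in a few lines.

```python
import numpy as np

def slotted_air_gap_field(b_slotless, rel_permeance):
    """Slotted air-gap flux density from the slotless solution.

    b_slotless    : complex array B_r + 1j*B_theta around the air gap
    rel_permeance : complex relative permeance lambda(alpha); a placeholder
                    here, derived by conformal mapping in the paper
    Uses the standard modulation B_slotted = B_slotless * conj(lambda).
    """
    return b_slotless * np.conj(rel_permeance)

# Toy usage: a sinusoidal slotless field dented by a 12-slot ripple
# (amplitudes are invented for illustration).
alpha = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
b0 = np.cos(alpha) + 0.1j * np.sin(alpha)
lam = 1.0 - 0.2 * np.cos(12 * alpha) + 0.05j * np.sin(12 * alpha)
b_slotted = slotted_air_gap_field(b0, lam)
```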

  17. Investigating the Assumptions of Uses and Gratifications Research

    Science.gov (United States)

    Lometti, Guy E.; And Others

    1977-01-01

    Discusses a study designed to determine empirically the gratifications sought from communication channels and to test the assumption that individuals differentiate channels based on gratifications. (MH)

  18. Assessing framing assumptions in quantitative health impact assessments: a housing intervention example.

    Science.gov (United States)

    Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M

    2013-09-01

    Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage. Copyright © 2013 Elsevier Ltd. All rights reserved.
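
    A fuzzy cognitive map is a signed, weighted digraph iterated to a steady state. The sketch below is a generic FCM update with a toy encoding of the housing case; the concepts, weights, and signs are all assumptions of this illustration, not values from the paper.

```python
import numpy as np

def run_fcm(weights, state, n_iter=50):
    """Iterate a fuzzy cognitive map to a steady state.

    weights : NxN matrix, weights[i, j] = causal influence of concept i on j
    state   : initial activation of the N concepts in [0, 1]
    """
    squash = lambda x: 1.0 / (1.0 + np.exp(-x))   # keep activations in (0, 1)
    for _ in range(n_iter):
        state = squash(state + state @ weights)
    return state

# Toy map: tighter insulation reduces ventilation; more ventilation improves
# indoor air quality and reduces mould; air quality and mould affect health.
W = np.array([
    [0.0, -0.7,  0.0,  0.0,  0.0],   # air-tightness
    [0.0,  0.0,  0.6, -0.5,  0.0],   # ventilation
    [0.0,  0.0,  0.0,  0.0,  0.6],   # indoor air quality
    [0.0,  0.0,  0.0,  0.0, -0.6],   # mould/humidity
    [0.0,  0.0,  0.0,  0.0,  0.0],   # health
])
print(run_fcm(W, np.array([1.0, 0.5, 0.5, 0.5, 0.5])))
```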

  19. An automated portal verification system for the tangential breast portal field

    International Nuclear Information System (INIS)

    Yin, F.-F.; Lai, W.; Chen, C. W.; Nelson, D. F.

    1995-01-01

    Hough transform was used to detect and quantify the field edge. Then, the anatomical landmarks (skin line and chest wall) were extracted using both a histogram equalization method and a Canny edge detector in different subregions. The resulting parameters, such as relative shift and rotation from the matching procedure, were related to the patient setup variations and were used as the basis for the positioning correction suggestion. Results: The automated portal verification technique was tested using over 100 clinical tangential breast portal images. Both field widths and collimator angles were calculated and were compared to the machine setup parameters. The computer-identified anatomical features were evaluated by an expert oncologist by comparing computer-identified edge lines to manual drawings. Results indicated that the computerized algorithm was able to detect the setup field size with an error of less than 1.5 mm and the collimator angle with an error of less than one degree compared to the original field setup. Note that these are the tolerances of the treatment machine. The radiation oncologist rated the computer-extracted features as absolutely acceptable, except for 10% of chest walls, which were rated only acceptable. The subjective evaluation indicated that the computer-identified features were reliable enough for potential clinical applications. The Chamfer matching method provided matching results between features in two images with an accuracy of within 2 mm. Conclusions: A fully automated portal verification system was developed for the radiation therapy of breast cancer. With newly developed hierarchical region feature processing and feature-weighted Chamfer matching techniques, the treatment port for the tangential breast field can be automatically verified. The technique we developed can also be used for the development of automated portal verification systems for other treatment sites. Our preliminary results showed potential for clinical applications
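
    A compressed sketch of the edge-detection pipeline described above, written with OpenCV as assumed tooling (the original work predates it and used custom implementations; the thresholds here are arbitrary):

```python
import cv2
import numpy as np

def detect_field_edge(portal_img):
    """Find the straight radiation-field edge in a grayscale portal image.

    Histogram equalization boosts the low-contrast anatomy, Canny extracts
    edge pixels, and a Hough transform picks out the dominant straight line
    (the collimated field edge).
    """
    eq = cv2.equalizeHist(portal_img)          # enhance anatomy contrast
    edges = cv2.Canny(eq, 50, 150)             # binary edge map
    lines = cv2.HoughLines(edges, 1, np.pi / 180.0, threshold=120)
    return lines[0] if lines is not None else None  # (rho, theta) of best line

# Synthetic "portal image": a bright rectangular field on dark background.
img = np.zeros((256, 256), np.uint8)
cv2.rectangle(img, (40, 60), (200, 220), 160, -1)
print(detect_field_edge(img))
```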

  20. The Emperor's sham - wrong assumption that sham needling is sham.

    Science.gov (United States)

    Lundeberg, Thomas; Lund, Iréne; Näslund, Jan; Thomas, Moolamanil

    2008-12-01

    During the last five years a large number of randomised controlled clinical trials (RCTs) have been published on the efficacy of acupuncture in different conditions. In most of these studies verum is compared with sham acupuncture. In general both verum and sham have been found to be effective, and often with little reported difference in outcome. This has repeatedly led to the conclusion that acupuncture is no more effective than placebo treatment. However, this conclusion is based on the assumption that sham acupuncture is inert. Since sham acupuncture evidently is merely another form of acupuncture from the physiological perspective, the assumption that sham is sham is incorrect and conclusions based on this assumption are therefore invalid. Clinical guidelines based on such conclusions may therefore exclude suffering patients from valuable treatments.

  1. Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.

    Science.gov (United States)

    Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M

    2017-02-27

    RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
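
    As a concrete example of an assumption-laden method, the sketch below implements a median-of-ratios size-factor calculation in the style of DESeq; it is a simplified illustration, not the package's code, and it presumes the count matrix contains no zero-count genes.

```python
import numpy as np

def median_of_ratios_size_factors(counts):
    """DESeq-style size factors for a genes-x-samples count matrix.

    Key assumption, per the discussion above: most genes are NOT
    differentially expressed, so the median ratio to a pseudo-reference
    reflects sequencing depth rather than biology. Global expression
    shifts violate this and break the normalization. Zero-count genes
    should be excluded before calling this (none appear in the example).
    """
    log_counts = np.log(counts.astype(float))
    ref = log_counts.mean(axis=1)               # log geometric mean per gene
    ratios = log_counts - ref[:, None]          # log ratio to pseudo-reference
    return np.exp(np.median(ratios, axis=0))    # one size factor per sample

counts = np.array([[10, 20], [100, 210], [5, 9], [50, 105]])
print(median_of_ratios_size_factors(counts))    # ~[0.70, 1.43]: sample 2 ~2x deeper
```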

  2. The homogeneous marginal utility of income assumption

    NARCIS (Netherlands)

    Demuynck, T.

    2015-01-01

    We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and

  3. Recognising the Effects of Costing Assumptions in Educational Business Simulation Games

    Science.gov (United States)

    Eckardt, Gordon; Selen, Willem; Wynder, Monte

    2015-01-01

    Business simulations are a powerful way to provide experiential learning that is focussed, controlled, and concentrated. Inherent in any simulation, however, are numerous assumptions that determine feedback, and hence the lessons learnt. In this conceptual paper we describe some common cost assumptions that are implicit in simulation design and…

  4. Tangential Volumetric Modulated Radiotherapy - A New Technique for Large Scalp Lesions with a Case Study in Lentigo Maligna

    Directory of Open Access Journals (Sweden)

    E. Daniel Santos

    2015-06-01

    Full Text Available Introduction: Dose homogeneity within and dose conformity to the target volume can be a challenge to achieve when treating large area scalp lesions. Traditionally, High Dose Rate (HDR) brachytherapy (BT) scalp moulds have been considered the ultimate conformal therapy. We have developed a new technique, Tangential Volumetric Modulated Arc Therapy (TVMAT), that treats with the beam tangential to the surface of the scalp. In the TVMAT plan the collimating jaws protect dose-sensitive tissue in close proximity to the planning target volume (PTV). Not all the PTV is within the beam aperture as defined by the jaws during all the beam-on time. We report the successful treatment of one patient. Methods: A patient with biopsy proven extensive lentigo maligna on the scalp was simulated and three plans were created; one using a HDR brachytherapy surface mould, another using a conventional VMAT technique and a third using our new TVMAT technique. The patient was prescribed 55 Gy in 25 fractions. Plans were optimised so that PTV V100% = 100%. Plans were compared using Dose-Volume Histogram (DVH) analysis, and homogeneity and conformity indices. Results: BT, VMAT and TVMAT PTV median coverage was 105.51%, 103.46% and 103.62%, with homogeneity indices of 0.33, 0.07 and 0.07 and conformity indices of 0.30, 0.69 and 0.83 respectively. The median dose to the left hippocampus was 11.8 Gy, 9.0 Gy and 0.6 Gy and the median dose to the right hippocampus was 12.6 Gy, 9.4 Gy and 0.7 Gy for the BT, VMAT and TVMAT respectively. Overall TVMAT delivered the least doses to the surrounding organs, BT delivered the highest. Conclusions: TVMAT was superior to VMAT which was in turn superior to BT in PTV coverage, conformity and homogeneity and delivery of dose to the surrounding organs at risk. The patient was successfully treated to full dose with TVMAT. TVMAT was verified as being the best amongst the three techniques in a second patient.
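
    Indices like those quoted can be computed directly from the dose grid. The definitions below, HI = (D2% - D98%)/D50% and the Paddick conformity index, are common choices assumed for this sketch; the paper does not spell out its formulas.

```python
import numpy as np

def homogeneity_index(ptv_doses):
    """HI = (D2% - D98%) / D50%; 0 is perfectly homogeneous (ICRU 83 style)."""
    d2, d50, d98 = np.percentile(ptv_doses, [98, 50, 2])
    return (d2 - d98) / d50

def paddick_conformity(dose, ptv_mask, rx):
    """Paddick CI = TV_PIV^2 / (TV * PIV); 1 is perfectly conformal."""
    piv = dose >= rx                           # prescription isodose volume
    tv_piv = np.count_nonzero(piv & ptv_mask)  # target covered by prescription
    return tv_piv**2 / (np.count_nonzero(ptv_mask) * np.count_nonzero(piv))

# Toy usage on a 3D dose grid with a cubic PTV.
dose = np.random.uniform(40, 60, (20, 20, 20))
ptv = np.zeros_like(dose, dtype=bool)
ptv[5:15, 5:15, 5:15] = True
print(homogeneity_index(dose[ptv]), paddick_conformity(dose, ptv, rx=55.0))
```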

  5. Comparative Interpretation of Classical and Keynesian Fiscal Policies (Assumptions, Principles and Primary Opinions

    Directory of Open Access Journals (Sweden)

    Engin Oner

    2015-06-01

    Full Text Available In the Classical School, founded by Adam Smith, which gives prominence to supply and adopts an approach of unbiased finance, the economy is always in a state of full employment equilibrium. In this system of thought, whose main philosophy is budget balance, which asserts that there is flexibility between prices and wages and regards public debt as an extraordinary instrument, the interference of the state with economic and social life is frowned upon. In line with the views of classical thought, classical fiscal policy is based on three basic assumptions. These are the "Consumer State Assumption", the assumption that "Public Expenditures are Always Ineffectual", and the assumption concerning the "Impartiality of the Taxes and Expenditure Policies Implemented by the State". On the other hand, the Keynesian School, founded by John Maynard Keynes, gives prominence to demand, adopts the approach of functional finance, and asserts that cases of underemployment equilibrium and over-employment equilibrium exist in the economy as well as the full employment equilibrium, that problems cannot be solved through the invisible hand, that prices and wages are rigid, and that the interference of the state is essential, at which point fiscal policies have to be utilized effectively. Keynesian fiscal policy depends on three primary assumptions. These are the assumption of the "Filter State", the assumption that "public expenditures are sometimes effective and sometimes ineffective or neutral", and the assumption that "the tax, debt and expenditure policies of the state can never be impartial".

  6. Determining Bounds on Assumption Errors in Operational Analysis

    Directory of Open Access Journals (Sweden)

    Neal M. Bengtson

    2014-01-01

    Full Text Available The technique of operational analysis (OA) is used in the study of systems performance, mainly for estimating mean values of various measures of interest, such as the number of jobs at a device and response times. The basic principles of operational analysis allow errors in assumptions to be quantified over a time period. The assumptions which are used to derive the operational analysis relationships are studied. Using Karush-Kuhn-Tucker (KKT) conditions, bounds on error measures of these OA relationships are found. Examples of these bounds are used for representative performance measures to show limits on the difference between true performance values and those estimated by operational analysis relationships. A technique for finding tolerance limits on the bounds is demonstrated with a simulation example.
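
    For reference, the basic operational laws whose assumption errors the paper bounds can be stated in a few lines; the numbers in the usage example below are invented.

```python
def utilization(completions, busy_time, observation_time):
    """Utilization law: U = X * S, with throughput X = C/T and mean
    service demand S = B/C, so U reduces to B/T (all directly observed)."""
    x = completions / observation_time     # throughput
    s = busy_time / completions            # mean service demand
    return x * s                           # equals busy_time / observation_time

def mean_jobs(throughput, mean_response_time):
    """Little's law in operational form: N = X * R."""
    return throughput * mean_response_time

# Observed over T = 60 s: 120 completions, device busy 45 s, mean response 0.9 s.
print(utilization(120, 45.0, 60.0))   # 0.75
print(mean_jobs(120 / 60.0, 0.9))     # 1.8 jobs on average
```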

  7. The Importance of the Assumption of Uncorrelated Errors in Psychometric Theory

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos

    2015-01-01

    A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown's prophecy and the correction for attenuation formulas as well as…

  8. Formalization and Analysis of Reasoning by Assumption

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2006-01-01

    This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning

  9. Psychopathology, fundamental assumptions and CD-4 T lymphocyte ...

    African Journals Online (AJOL)

    In addition, we explored whether psychopathology and negative fundamental assumptions in ... Method: Self-rating questionnaires to assess depressive symptoms, ... associated with all participants scoring in the positive range of the FA scale.

  10. Dual role for DOCK7 in tangential migration of interneuron precursors in the postnatal forebrain.

    Science.gov (United States)

    Nakamuta, Shinichi; Yang, Yu-Ting; Wang, Chia-Lin; Gallo, Nicholas B; Yu, Jia-Ray; Tai, Yilin; Van Aelst, Linda

    2017-12-04

    Throughout life, stem cells in the ventricular-subventricular zone generate neuroblasts that migrate via the rostral migratory stream (RMS) to the olfactory bulb, where they differentiate into local interneurons. Although progress has been made toward identifying extracellular factors that guide the migration of these cells, little is known about the intracellular mechanisms that govern the dynamic reshaping of the neuroblasts' morphology required for their migration along the RMS. In this study, we identify DOCK7, a member of the DOCK180-family, as a molecule essential for tangential neuroblast migration in the postnatal mouse forebrain. DOCK7 regulates the migration of these cells by controlling both leading process (LP) extension and somal translocation via distinct pathways. It controls LP stability/growth via a Rac-dependent pathway, likely by modulating microtubule networks while also regulating F-actin remodeling at the cell rear to promote somal translocation via a previously unrecognized myosin phosphatase-RhoA-interacting protein-dependent pathway. The coordinated action of both pathways is required to ensure efficient neuroblast migration along the RMS. © 2017 Nakamuta et al.

  11. Development and verification of an excel program for calculation of monitor units for tangential breast irradiation with external photon beams

    International Nuclear Information System (INIS)

    Woldemariyam, M.G.

    2015-07-01

The accuracy of MU calculations performed with the Prowess Panther TPS (for Co-60) and Oncentra (for 6 MV and 15 MV x-rays) for tangential breast irradiation was evaluated with measurements made in an anthropomorphic phantom using calibrated Gafchromic EBT2 films. An Excel program was developed that takes into account the external body-surface irregularity of an intact breast or chest wall (hence the absence of full-scatter conditions) using Clarkson's sector summation technique. A single surface contour of the patient, obtained in a transverse plane containing the MU calculation point, was required for effective implementation of the program. The outputs of the Excel program were validated against the respective outputs from the 3D treatment planning systems. The variations between the measured point doses and their calculated counterparts from the TPSs were within the range of -4.74% to 4.52% (mean of -1.33% and SD of 2.69) for the Prowess Panther TPS and -4.42% to 3.14% (mean of -1.47% and SD of 3.95) for the Oncentra TPS. The observed degree of deviation may be attributed to limitations of the dose calculation algorithm within the TPSs, set-up inaccuracies of the phantom during irradiation, and inherent uncertainties associated with radiochromic film dosimetry. The percentage deviations between MUs calculated with the two TPSs and the Excel program were within the range of -3.45% to 3.82% (mean of 0.83% and SD of 2.25). The observed percentage deviations are within the 4% action level recommended by TG-114. This indicates that the Excel program can be confidently employed for the calculation of MUs for 2D-planned tangential breast irradiation, or to independently verify MUs calculated with other methods. (au)
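For illustration, a minimal sketch of Clarkson-style sector integration is given below; the scatter-air-ratio function and all numbers are hypothetical stand-ins for measured beam data, not values from the thesis:

```python
import numpy as np

# Hypothetical scatter-air-ratio lookup SAR(depth, radius); a clinical
# implementation would interpolate measured beam data instead.
def sar(depth_cm, radius_cm):
    return 0.25 * (1 - np.exp(-0.15 * radius_cm)) * np.exp(-0.05 * depth_cm)

def clarkson_scatter(depth_cm, sector_radii_cm, n_sectors=36):
    """Clarkson sector integration: average the scatter-air ratio over
    equal angular sectors whose radii follow the (possibly irregular)
    field/body outline around the calculation point."""
    assert len(sector_radii_cm) == n_sectors
    return np.mean([sar(depth_cm, r) for r in sector_radii_cm])

# 36 sectors of 10 degrees; radii would be traced from the patient
# contour, here simply uniform for a regular field:
radii = np.full(36, 8.0)
print(clarkson_scatter(10.0, radii))
```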

  12. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    Directory of Open Access Journals (Sweden)

    Hazim Adnan Hashim

    2016-09-01

The present study addresses the effects of traumatic events such as the September 11 attacks on victims' fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth; they thus ground people's sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were deeply tragic for Americans because they fundamentally changed their understanding of many aspects of life. The attacks led many individuals to build new kinds of beliefs and assumptions about themselves and the world. Many writers have written about the human ordeals that followed this incident. Don DeLillo's Falling Man reflects the traumatic repercussions of this disaster on Americans' fundamental assumptions. The objective of this study is to examine the novel from the perspective of the trauma that has afflicted the victims' fundamental understandings of the world and the self. Individuals' fundamental understandings can be changed or modified by exposure to certain types of events, like war, terrorism, political violence, or even a sense of alienation. The Assumptive World theory of Ronnie Janoff-Bulman is used as a framework to study the traumatic experience of the characters in Falling Man. The significance of the study lies in providing a new perception to the field of trauma that can help trauma victims adopt alternative assumptions or reshape their previous ones to heal from traumatic effects.

  13. Extracurricular Business Planning Competitions: Challenging the Assumptions

    Science.gov (United States)

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  14. The Role of Policy Assumptions in Validating High-stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    L. Cronbach has made the point that for validity arguments to be convincing to diverse audiences, they need to be based on assumptions that are credible to these audiences. The interpretations and uses of high stakes test scores rely on a number of policy assumptions about what should be taught in schools, and more specifically, about the content…

  15. Modeling Bottom Sediment Erosion Process by Swirling the Flow by Tangential Supply of Oil in the Tank

    Science.gov (United States)

    Nekrasov, V. O.

    2016-10-01

The article presents a statistical analysis of the number and territorial distribution of oil tanks operating in the Tyumen region that are intended for the reception, storage and distribution of commercial oil through trunk pipelines. It describes the working principle of a new device for eroding and preventing the formation of bottom sediment, based on tangential supply of the oil pumped into the reservoir. The most significant similarity criteria for modeling the rotational flows that strongly influence the structure of the circulating oil flow during operation of the device are identified. The radial dependence of the linear velocity of a point on the surface during circular motion of the oil in the tank is characterized, and on the basis of this dependence a formula is given for the total kinetic energy of rotational motion of the oil and the asphalt-resin-paraffin deposits in the reservoir.
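The abstract does not reproduce the formula itself; assuming the common solid-body idealization v(r) = ωr in a cylindrical tank, the rotational kinetic energy follows by integration (a sketch under that assumption, not necessarily the paper's result):

```latex
% Solid-body idealization (assumed): v(r) = \omega r in a cylindrical
% tank of radius R and fill height H, with fluid density \rho.
E_k = \int_0^R \tfrac{1}{2}\,\rho\,(\omega r)^2 \, 2\pi r H \, dr
    = \tfrac{\pi}{4}\,\rho\, H\, \omega^2 R^4 .
```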

  16. How to Handle Assumptions in Synthesis

    Directory of Open Access Journals (Sweden)

    Roderick Bloem

    2014-07-01

The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

  17. The incompressibility assumption in computational simulations of nasal airflow.

    Science.gov (United States)

    Cal, Ismael R; Cercos-Pita, Jose Luis; Duque, Daniel

    2017-06-01

Most computational work on nasal airflow to date has assumed incompressibility, given the low Mach number of these flows. However, for high temperature gradients, the incompressibility assumption could lead to a loss of accuracy, due to the temperature dependence of air density and viscosity. In this article we aim to shed some light on the influence of this assumption in a model of calm breathing in an Asian nasal cavity, by solving the fluid flow equations in compressible and incompressible formulations for different ambient air temperatures using the OpenFOAM package. At low flow rates and warm climatological conditions, similar results were obtained from both approaches, showing that density variations need not be taken into account to obtain a good prediction of all flow features, at least for usual breathing conditions. This agrees with most of the simulations previously reported, at least as far as the incompressibility assumption is concerned. However, parameters like nasal resistance and wall shear stress distribution differ for air temperatures below [Formula: see text]C approximately. Therefore, density variations should be considered for simulations at such low temperatures.

  18. The design of a second harmonic tangential array interferometer for C-Mod

    International Nuclear Information System (INIS)

    Bretz, N.; Jobes, F.; Irby, J.

    1997-01-01

A design for a tangential array interferometer for C-Mod operating at 1.06 and 0.53 μm is presented. This is a special type of two color interferometer in which a Nd:YAG laser is frequency doubled in a nonlinear crystal. Because the doubling efficiency is imperfect, two frequencies propagate collinearly through the plasma, after which the 1.06 μm ray is doubled again, mixing in the optical domain with the undoubled ray. The resulting interference is insensitive to path length but is affected by plasma dispersion in the usual way. A typical central fringe shift in C-Mod is expected to be 0.1–1.0, but the absolute and relative accuracy in n_e·l measurements can be as high as in a conventional interferometer. This design uses a repetitively pulsed laser which is converted to a fan beam crossing the horizontal midplane. The chordal array is defined by internal retroreflectors on the C-Mod midplane which return the beam to the second doubler and a detector array. This interferometer design has beam diameters of a few millimeters and element spacings of a few centimeters, uses a repetitively pulsed, TEM00 Nd:YAG laser, fiber optic beam transport, commercial components, and a compact optical design which minimizes port space requirements. An optical system design is presented which is based on the performance of a tabletop prototype at Princeton Plasma Physics Laboratory. copyright 1997 American Institute of Physics
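For reference, standard second-harmonic (dispersion) interferometer theory, not quoted from the paper, gives the measured two-color phase; a sketch:

```latex
% Plasma dispersion phase at wavelength \lambda, with r_e the classical
% electron radius: \phi(\lambda) = r_e \lambda \int n_e \, dl .
% Doubling the fundamental after the plasma doubles its phase, so the
% measured phase difference between the two collinear colors is
\Delta\phi = 2\,\phi(\lambda) - \phi(\lambda/2)
           = \tfrac{3}{2}\, r_e \lambda \int n_e \, dl ,
% which is insensitive to geometric path length, as the abstract notes.
```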

  19. Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions

    Science.gov (United States)

    Wang, XiaoCong; Liu, YiMin; Bao, Qing

    2016-01-01

Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provided cloud geometry as well as cloud micro and macro properties. Large-scale forcing data to drive the CRM are from the TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns, during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one recently addressed by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed by a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under maximum (MO) and maximum-random (MRO) overlap assumptions, whereas they are remarkably overestimated under the random overlap (RO) assumption, in comparison with those using the CRM's inherent cloud geometry. These biases can reach as high as 100 W m⁻² for SWCF and 60 W m⁻² for LWCF. By its very nature, the general overlap (GenO) assumption exhibits an encouraging performance on both SWCF and LWCF simulations, with the biases reduced almost 3-fold compared with traditional overlap assumptions. The superiority of the GenO assumption is also manifested in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under traditional overlap assumptions. The study also points out the deficiency of assuming a constant Lcf in the GenO assumption. Further examinations indicate that the CRM-diagnosed Lcf varies among different cloud types and tends to be stratified in the vertical. The new parameterization that takes into account variation of Lcf in the vertical well reproduces such a relationship and
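A minimal sketch of the general overlap combination rule for two layers, following the Hogan and Illingworth form described above; parameter values are illustrative:

```python
import numpy as np

def combined_cover(c1, c2, dz, L_cf):
    """Projected cloud cover of two layers under general overlap:
    a blend of maximum and random overlap weighted by
    alpha = exp(-dz / L_cf), with L_cf the decorrelation length."""
    alpha = np.exp(-dz / L_cf)
    c_max = max(c1, c2)             # maximum overlap limit
    c_rand = c1 + c2 - c1 * c2      # random overlap limit
    return alpha * c_max + (1 - alpha) * c_rand

# Layers 1 km apart with 40% and 30% cover, decorrelation length 2 km:
print(combined_cover(0.4, 0.3, dz=1.0, L_cf=2.0))  # between 0.40 and 0.58
```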

  20. Forecasting Value-at-Risk under Different Distributional Assumptions

    Directory of Open Access Journals (Sweden)

    Manuela Braione

    2016-01-01

Financial asset returns are known to be conditionally heteroskedastic and generally non-normally distributed, fat-tailed and often skewed. These features must be taken into account to produce accurate forecasts of Value-at-Risk (VaR). We provide a comprehensive look at the problem by considering the impact that different distributional assumptions have on the accuracy of both univariate and multivariate GARCH models in out-of-sample VaR prediction. The set of analyzed distributions comprises the normal, Student, Multivariate Exponential Power and their corresponding skewed counterparts. The accuracy of the VaR forecasts is assessed by implementing standard statistical backtesting procedures used to rank the different specifications. The results show the importance of allowing for heavy tails and skewness in the distributional assumption, with the skew-Student outperforming the others across all tests and confidence levels.
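As a minimal illustration of how the distributional assumption moves the VaR forecast, the sketch below computes one-period parametric VaR under normal and Student-t tails; the volatility value is hypothetical (e.g., a GARCH one-step-ahead forecast):

```python
import numpy as np
from scipy import stats

def parametric_var(sigma, level=0.01, dist="normal", nu=5.0):
    """One-period VaR (as a positive loss) for a zero-mean return with
    volatility sigma, under a normal or standardized Student-t tail."""
    if dist == "normal":
        q = stats.norm.ppf(level)
    else:  # standardized Student-t with nu degrees of freedom
        q = stats.t.ppf(level, nu) * np.sqrt((nu - 2) / nu)
    return -sigma * q

sigma = 0.012  # hypothetical one-step-ahead volatility forecast
print(parametric_var(sigma, dist="normal"))   # ~0.028
print(parametric_var(sigma, dist="t", nu=5))  # fatter tail -> larger VaR
```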

  1. False assumptions.

    Science.gov (United States)

    Swaminathan, M

    1997-01-01

    Indian women do not have to be told the benefits of breast feeding or "rescued from the clutches of wicked multinational companies" by international agencies. There is no proof that breast feeding has declined in India; in fact, a 1987 survey revealed that 98% of Indian women breast feed. Efforts to promote breast feeding among the middle classes rely on such initiatives as the "baby friendly" hospital where breast feeding is promoted immediately after birth. This ignores the 76% of Indian women who give birth at home. Blaming this unproved decline in breast feeding on multinational companies distracts attention from more far-reaching and intractable effects of social change. While the Infant Milk Substitutes Act is helpful, it also deflects attention from more pressing issues. Another false assumption is that Indian women are abandoning breast feeding to comply with the demands of employment, but research indicates that most women give up employment for breast feeding, despite the economic cost to their families. Women also seek work in the informal sector to secure the flexibility to meet their child care responsibilities. Instead of being concerned about "teaching" women what they already know about the benefits of breast feeding, efforts should be made to remove the constraints women face as a result of their multiple roles and to empower them with the support of families, governmental policies and legislation, employers, health professionals, and the media.

  2. A generalized 2-D Poincaré inequality

    Directory of Open Access Journals (Sweden)

    Crisciani Fulvio

    2000-01-01

Two 1-D Poincaré-like inequalities are proved under the mild assumption that the integrand function is zero at just one point. These results are used to derive a 2-D generalized Poincaré inequality in which the integrand function is zero on a suitable arc contained in the domain (instead of on the whole boundary). As an application, it is shown that a set of boundary conditions for the quasi-geostrophic equation of order four is compatible with general physical constraints dictated by the dissipation of kinetic energy.
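One standard 1-D inequality of this kind (a sketch; the paper's precise constants and hypotheses may differ) follows from the fundamental theorem of calculus and Cauchy-Schwarz:

```latex
% Sketch: if u \in C^1([a,b]) and u(x_0) = 0 for some x_0 \in [a,b],
% then u(x) = \int_{x_0}^{x} u'(t)\,dt, and Cauchy-Schwarz gives
u(x)^2 \le |x - x_0| \int_a^b u'(t)^2 \, dt
\quad\Longrightarrow\quad
\int_a^b u(x)^2 \, dx \le (b-a)^2 \int_a^b u'(x)^2 \, dx .
```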

  3. 7 CFR 1980.476 - Transfer and assumptions.

    Science.gov (United States)

    2010-01-01

    ...-354 449-30 to recover its pro rata share of the actual loss at that time. In completing Form FmHA or... the lender on liquidations and property management. A. The State Director may approve all transfer and... Director will notify the Finance Office of all approved transfer and assumption cases on Form FmHA or its...

  4. Total Organic Carbon Distribution and Bacterial Cycling Across A Geostrophic Front In Mediterranean Sea. Implications For The Western Basin Carbon Cycle

    Science.gov (United States)

    Sempere, R.; van Wambeke, F.; Bianchi, M.; Dafner, E.; Lefevre, D.; Bruyant, F.; Prieur, L.

We investigated the dynamics of the total organic carbon (TOC) pool and the role it played in the carbon cycle during winter 1997-1998 in the Almeria-Oran jet-front (AOF) system resulting from the spreading of Atlantic surface water through the Gibraltar Strait into the Alboran Sea (Southwestern Mediterranean Sea). We determined TOC using the high temperature combustion (HTC) technique and bacterial production (BP; via [3H] leucine incorporation) during two legs in the frontal area. We also estimated labile TOC (l-TOC) and bacterial growth efficiency (BGE) by performing TOC biodegradation experiments on board during the cruise, whereas water-column semi-labile TOC (sl-TOC) and refractory TOC were determined from TOC profile examination. These results are discussed in relation to current velocities measured using an acoustic Doppler current profiler (ADCP). The lowest TOC stocks (6330-6853 mmol C m⁻²) over 0-100 m were measured on the northern side of the geostrophic jet, which is also the most dynamic area (horizontal speed of 80 cm s⁻¹ in the first 100 m, directed eastward). Our results indicated variable turnover times of sl-TOC across the jet-front system, which might be explained by the different coupling of primary production and bacterial production observed in these areas. We also estimated TOC and sl-TOC transports within the jet core off the Alboran Sea, as well as potential CO2 production through bacterial respiration from sl-TOC assimilation by heterotrophic bacteria.

  5. Detecting tangential dislocations on planar faults from traction free surface observations

    International Nuclear Information System (INIS)

    Ionescu, Ioan R; Volkov, Darko

    2009-01-01

    We propose in this paper robust reconstruction methods for tangential dislocations on planar faults. We assume that only surface observations are available, and that a traction free condition applies at that surface. This study is an extension to the full three dimensions of Ionescu and Volkov (2006 Inverse Problems 22 2103). We also explore in this present paper the possibility of detecting slow slip events (such as silent earthquakes, or earthquake nucleation phases) from GPS observations. Our study uses extensively an asymptotic estimate for the observed surface displacement. This estimate is first used to derive what we call the moments reconstruction method. Then it is also used for finding necessary conditions for a surface displacement field to have been caused by a slip on a fault. These conditions lead to the introduction of two parameters: the activation factor and the confidence index. They can be computed from the surface observations in a robust fashion. They indicate whether a measured displacement field is due to an active fault. We also infer a second, combined, reconstruction technique blending least square minimization and the moments method. We carefully assess how our reconstruction method is affected by the sensitivity of the observation apparatus and the stepsize for the grid of surface observation points. The maximum permissible stepsize for such a grid is computed for different values of fault depth and orientation. Finally we present numerical examples of reconstruction of faults. We demonstrate that our combined method is sharp, robust and computationally inexpensive. We also note that this method performs satisfactorily for shallow faults, despite the fact that our asymptotic formula deteriorates in that case

  6. Interface Input/Output Automata: Splitting Assumptions from Guarantees

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Nyman, Ulrik; Wasowski, Andrzej

    2006-01-01

    's \\IOAs [11], relying on a context dependent notion of refinement based on relativized language inclusion. There are two main contributions of the work. First, we explicitly separate assumptions from guarantees, increasing the modeling power of the specification language and demonstrating an interesting...

  7. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    Science.gov (United States)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, you need a model that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogenous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  8. Modelling sexual transmission of HIV: testing the assumptions, validating the predictions

    Science.gov (United States)

    Baggaley, Rebecca F.; Fraser, Christophe

    2010-01-01

    Purpose of review To discuss the role of mathematical models of sexual transmission of HIV: the methods used and their impact. Recent findings We use mathematical modelling of “universal test and treat” as a case study to illustrate wider issues relevant to all modelling of sexual HIV transmission. Summary Mathematical models are used extensively in HIV epidemiology to deduce the logical conclusions arising from one or more sets of assumptions. Simple models lead to broad qualitative understanding, while complex models can encode more realistic assumptions and thus be used for predictive or operational purposes. An overreliance on model analysis where assumptions are untested and input parameters cannot be estimated should be avoided. Simple models providing bold assertions have provided compelling arguments in recent public health policy, but may not adequately reflect the uncertainty inherent in the analysis. PMID:20543600

  9. The crux of the method: assumptions in ordinary least squares and logistic regression.

    Science.gov (United States)

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
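A minimal sketch contrasting the two estimators on synthetic binary data, using statsmodels; all data are simulated for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))   # true logistic model
y = rng.binomial(1, p)

X = sm.add_constant(x)

# Linear probability model (OLS): violates homoskedasticity and can
# predict "probabilities" outside [0, 1].
lpm = sm.OLS(y, X).fit()

# Logistic regression: models log-odds, keeping predictions in (0, 1).
logit = sm.Logit(y, X).fit(disp=0)

print(lpm.predict(X).min(), lpm.predict(X).max())    # may leave [0, 1]
print(logit.predict(X).min(), logit.predict(X).max())
```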

  10. Efficient pseudorandom generators based on the DDH assumption

    NARCIS (Netherlands)

    Rezaeian Farashahi, R.; Schoenmakers, B.; Sidorenko, A.; Okamoto, T.; Wang, X.

    2007-01-01

    A family of pseudorandom generators based on the decisional Diffie-Hellman assumption is proposed. The new construction is a modified and generalized version of the Dual Elliptic Curve generator proposed by Barker and Kelsey. Although the original Dual Elliptic Curve generator is shown to be
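As a toy illustration only, a loose finite-field analogue of the Dual-EC-style split between state update and output stream; this is not the paper's construction and is not cryptographically secure as written:

```python
# Toy finite-field analogue of a Dual-EC-style generator: one group
# element drives the state update and another the output stream.
# Sketch for intuition only; NOT secure and NOT the paper's scheme.
P = 2**127 - 1        # illustrative prime modulus
g, h = 5, 7           # assumed (toy) generators

def dual_style_prg(seed, n_outputs, keep_bits=96):
    s = seed % P
    out = []
    for _ in range(n_outputs):
        s = pow(g, s, P)                        # state update
        r = pow(h, s, P)                        # output element
        out.append(r & ((1 << keep_bits) - 1))  # truncate bits of output
    return out

print(dual_style_prg(123456789, 3))
```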

  11. Contemporary assumptions on human nature and work and approach to human potential managing

    Directory of Open Access Journals (Sweden)

    Vujić Dobrila

    2006-01-01

The general problem of this research is to identify whether there is a relationship between assumptions on human nature and work (McGregor, Argyris, Schein, Steers and Porter) and the preference for a general organizational model, as well as for mechanisms of human resource management. This research was carried out in 2005/2006. The sample consisted of 317 subjects (197 managers, 105 highly educated subordinates and 15 entrepreneurs) in 7 big enterprises and in a group of small business enterprises, differing in ownership structure and type of activity. The general hypothesis, "that assumptions on human nature and work are statistically significantly connected to the preferred approach (models) of work motivation commitment", has been confirmed. The specific hypotheses have also been confirmed: · Assumptions on a human as a rational economic being are statistically significantly correlated with only two mechanisms of traditional models: the mechanism of work-method control and the working-discipline mechanism. · Assumptions on a human as a social being are statistically significantly correlated with all mechanisms of engaging employees belonging to the human relations model, except the mechanism of introducing an adequate type of reward for all employees independently of working results. · Assumptions on a human as a creative being are statistically significantly and positively correlated with the preference for two mechanisms belonging to the human resource model: investing in education and training, and creating conditions for the application of knowledge and skills. Young respondents with assumptions on a human as a creative being prefer a much broader repertoire of mechanisms belonging to the human resources model than the remaining categories of subjects in the sample. The connection between assumptions on human nature and preferred models of engagement appears especially in the sub-sample of managers, in the category of young subjects

  12. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom fails to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires use of multipole electrostatics and polarizability in molecular modeling.
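The point-charge starting assumption is easy to state in code; a minimal sketch with illustrative charges, geometry, and k = 1 units:

```python
import numpy as np

def coulomb_energy(charges, positions):
    """Pairwise Coulomb energy sum_{i<j} q_i q_j / r_ij under the
    monopole (point-charge) assumption, k = 1 units."""
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            e += charges[i] * charges[j] / r
    return e

q = np.array([0.4, -0.4])                    # crude bond-dipole charges
xyz = np.array([[0.0, 0, 0], [1.1, 0, 0]])
print(coulomb_energy(q, xyz))
# A monopole-only model is spherically symmetric about each atom;
# matching quantum electrostatic potentials generally requires adding
# higher multipoles (dipoles, quadrupoles) and polarizability.
```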

  13. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    Science.gov (United States)

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
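As an illustration of one of the five designs, a minimal difference-in-differences sketch on simulated data; the estimator and the parallel-trends caveat are standard, while the data and effect sizes are invented:

```python
import numpy as np

# Two groups observed pre/post; identification rests on parallel trends.
rng = np.random.default_rng(1)
n = 1000
treated = rng.integers(0, 2, n)          # group indicator
post = rng.integers(0, 2, n)             # period indicator
effect = 2.0                             # true treatment effect
y = (1.0 + 0.5 * treated + 1.5 * post    # group gap + common time trend
     + effect * treated * post
     + rng.normal(0, 1, n))

def did(y, g, t):
    """(treated post - treated pre) - (control post - control pre)."""
    return ((y[(g == 1) & (t == 1)].mean() - y[(g == 1) & (t == 0)].mean())
            - (y[(g == 0) & (t == 1)].mean() - y[(g == 0) & (t == 0)].mean()))

print(did(y, treated, post))   # ~2.0 when parallel trends hold
```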

  14. Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.

    Science.gov (United States)

    Susan J. Alexander

    1991-01-01

    The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...

  15. Validity of the mockwitness paradigm: testing the assumptions.

    Science.gov (United States)

    McQuiston, Dawn E; Malpass, Roy S

    2002-08-01

    Mockwitness identifications are used to provide a quantitative measure of lineup fairness. Some theoretical and practical assumptions of this paradigm have not been studied in terms of mockwitnesses' decision processes and procedural variation (e.g., instructions, lineup presentation method), and the current experiment was conducted to empirically evaluate these assumptions. Four hundred and eighty mockwitnesses were given physical information about a culprit, received 1 of 4 variations of lineup instructions, and were asked to identify the culprit from either a fair or unfair sequential lineup containing 1 of 2 targets. Lineup bias estimates varied as a result of lineup fairness and the target presented. Mockwitnesses generally reported that the target's physical description was their main source of identifying information. Our findings support the use of mockwitness identifications as a useful technique for sequential lineup evaluation, but only for mockwitnesses who selected only 1 lineup member. Recommendations for the use of this evaluation procedure are discussed.

  16. Educational Technology as a Subversive Activity: Questioning Assumptions Related to Teaching and Leading with Technology

    Science.gov (United States)

    Kruger-Ross, Matthew J.; Holcomb, Lori B.

    2012-01-01

    The use of educational technologies is grounded in the assumptions of teachers, learners, and administrators. Assumptions are choices that structure our understandings and help us make meaning. Current advances in Web 2.0 and social media technologies challenge our assumptions about teaching and learning. The intersection of technology and…

  17. Child Development Knowledge and Teacher Preparation: Confronting Assumptions.

    Science.gov (United States)

    Katz, Lilian G.

    This paper questions the widely held assumption that acquiring knowledge of child development is an essential part of teacher preparation and teaching competence, especially among teachers of young children. After discussing the influence of culture, parenting style, and teaching style on developmental expectations and outcomes, the paper asserts…

  18. Analyzing Lagrange gauge measurements of spherical, cylindrical, or plane waves

    International Nuclear Information System (INIS)

    Aidun, J.B.

    1993-01-01

    Material response characterizations that are very useful in constitutive model development can be obtained from careful analysis of in-material (embedded, Lagrangian) gauge measurements of stress and/or particle velocity histories at multiple locations. The requisite measurements and the analysis are feasible for both laboratory and field experiments. The final product of the analysis is a set of load paths (e.g., radial stress vs. radial strain, tangential vs. radial stress, tangential vs. radial strain, radial stress vs. particle velocity) and their possible variation with propagation distance. Material model development can be guided and constrained by this information, but extra information or assumptions are needed to first establish a parameterized representation of the material response

  19. The sufficiency assumption of the reasoned approach to action

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2015-12-01

The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory; once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for variance (or how much traditional variables account for variance), to see whether they are important, in general or with respect to the specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what really is at issue. Based on the variance law, I question this assumption.

  20. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    Science.gov (United States)

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  1. Making Predictions about Chemical Reactivity: Assumptions and Heuristics

    Science.gov (United States)

    Maeyer, Jenine; Talanquer, Vicente

    2013-01-01

    Diverse implicit cognitive elements seem to support but also constrain reasoning in different domains. Many of these cognitive constraints can be thought of as either implicit assumptions about the nature of things or reasoning heuristics for decision-making. In this study we applied this framework to investigate college students' understanding of…

  2. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests!

Explaining Different Arrival Times

[Figure: artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. Credit: NASA/Swift/Mary Pat Hrybyk-Keith and John Jones]

Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics:

Intrinsic delay: the photons may simply have been emitted at two different times by the astrophysical source.

Delay due to Lorentz invariance violation: perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect.

Special-relativistic delay: maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent.

Delay due to gravitational potential: perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect.

If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these

  3. Dialogic or Dialectic? The Significance of Ontological Assumptions in Research on Educational Dialogue

    Science.gov (United States)

    Wegerif, Rupert

    2008-01-01

    This article explores the relationship between ontological assumptions and studies of educational dialogue through a focus on Bakhtin's "dialogic". The term dialogic is frequently appropriated to a modernist framework of assumptions, in particular the neo-Vygotskian or sociocultural tradition. However, Vygotsky's theory of education is dialectic,…

  4. Supporting calculations and assumptions for use in WESF safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  5. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    ) model~\\cite{borges99data}. These techniques typically rely on the \\textit{Markov assumption with history depth} $n$, i.e., it is assumed that the next requested page is only dependent on the last $n$ pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our...

  6. Detecting and accounting for violations of the constancy assumption in non-inferiority clinical trials.

    Science.gov (United States)

    Koopmeiners, Joseph S; Hobbs, Brian P

    2018-05-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
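A minimal simulation sketch of the type-I error inflation described above; all effect sizes, margins, and sample sizes are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200                      # patients per arm
hist_effect = 1.0            # comparator effect seen in historical trials
margin = 0.5 * hist_effect   # NI margin: retain 50% of historical effect
effect_now = 0.4             # constancy violated: current effect smaller

def ni_trial(delta_exp):
    """One trial: experimental arm differs from the active control
    by delta_exp on the outcome scale."""
    ctrl = rng.normal(effect_now, 1.0, n)
    expt = rng.normal(effect_now + delta_exp, 1.0, n)
    diff = expt.mean() - ctrl.mean()
    se = np.sqrt(expt.var(ddof=1) / n + ctrl.var(ddof=1) / n)
    # Conclude non-inferiority if the lower 97.5% bound exceeds -margin.
    return diff - stats.norm.ppf(0.975) * se > -margin

# A truly placebo-equivalent drug loses the whole current effect:
rate = np.mean([ni_trial(-effect_now) for _ in range(2000)])
print(rate)  # far above the nominal 2.5%, since margin > current effect
```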

  7. Towards New Probabilistic Assumptions in Business Intelligence

    OpenAIRE

    Schumann Andrew; Szelc Andrzej

    2015-01-01

    One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot ...

  8. Assumption-versus data-based approaches to summarizing species' ranges.

    Science.gov (United States)

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2018-06-01

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.

  9. Halo-Independent Direct Detection Analyses Without Mass Assumptions

    CERN Document Server

    Anderson, Adam J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the $m_\\chi-\\sigma_n$ plane. Recently methods which are independent of the DM halo velocity distribution have been developed which present results in the $v_{min}-\\tilde{g}$ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from $v_{min}$ to nuclear recoil momentum ($p_R$), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call $\\tilde{h}(p_R)$. The entire family of conventional halo-independent $\\tilde{g}(v_{min})$ plots for all DM masses are directly found from the single $\\tilde{h}(p_R)$ plot through a simple re...
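For elastic scattering, the change of variables mentioned is the standard kinematic map (a sketch of textbook kinematics, not the paper's derivation):

```latex
% Elastic DM-nucleus scattering: recoil momentum p_R = \sqrt{2 m_N E_R};
% the minimum DM speed that can produce it is
v_{min} = \frac{p_R}{2\,\mu_{\chi N}}, \qquad
\mu_{\chi N} = \frac{m_\chi m_N}{m_\chi + m_N},
% so working in p_R absorbs the explicit dependence on the assumed m_\chi.
```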

  10. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Heterosexual assumptions in verbal and non-verbal communication in nursing.

    Science.gov (United States)

    Röndahl, Gerd; Innala, Sune; Carlsson, Marianne

    2006-11-01

    This paper reports a study of what lesbian women and gay men had to say, as patients and as partners, about their experiences of nursing in hospital care, and what they regarded as important to communicate about homosexuality and nursing. The social life of heterosexual cultures is based on the assumption that all people are heterosexual, thereby making homosexuality socially invisible. Nurses may assume that all patients and significant others are heterosexual, and these heteronormative assumptions may lead to poor communication that affects nursing quality by leading nurses to ask the wrong questions and make incorrect judgements. A qualitative interview study was carried out in the spring of 2004. Seventeen women and 10 men ranging in age from 23 to 65 years from different parts of Sweden participated. They described 46 experiences as patients and 31 as partners. Heteronormativity was communicated in waiting rooms, in patient documents and when registering for admission, and nursing staff sometimes showed perplexity when an informant deviated from this heteronormative assumption. Informants had often met nursing staff who showed fear of behaving incorrectly, which could lead to a sense of insecurity, thereby impeding further communication. As partners of gay patients, informants felt that they had to deal with heterosexual assumptions more than they did when they were patients, and the consequences were feelings of not being accepted as a 'true' relative, of exclusion and neglect. Almost all participants offered recommendations about how nursing staff could facilitate communication. Heterosexual norms communicated unconsciously by nursing staff contribute to ambivalent attitudes and feelings of insecurity that prevent communication and easily lead to misconceptions. Educational and management interventions, as well as increased communication, could make gay people more visible and thereby encourage openness and awareness by hospital staff of the norms that they

  12. Observing gravitational-wave transient GW150914 with minimal assumptions

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwa, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. C.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brocki, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderon Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; DeRosa, R. T.; Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. 
-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. R.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritsche, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; de Haas, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinder, I.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijhunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Laguna, P.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, R.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mende, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Page, J.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prolchorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. 
F.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shithriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlhruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, R. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-01-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be

  13. Oil price assumptions in macroeconomic forecasts: should we follow futures market expectations?

    International Nuclear Information System (INIS)

    Coimbra, C.; Esteves, P.S.

    2004-01-01

    In macroeconomic forecasting, despite their important role in price and activity developments, oil prices are usually taken as an exogenous variable for which assumptions have to be made. This paper evaluates the forecasting performance of futures market prices against the other popular technical procedure, the carry-over assumption. The results suggest that there is almost no difference between opting for futures market prices and using the carry-over assumption for short-term forecasting horizons (up to 12 months), while for longer-term horizons they favour the use of futures market prices. However, as futures market prices reflect market expectations for world economic activity, futures oil prices should be adjusted whenever market expectations for world economic growth differ from the values underlying the macroeconomic scenarios, in order to fully ensure the internal consistency of those scenarios. (Author)
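
    As a toy illustration of the comparison described above, the sketch below contrasts the carry-over (no-change) assumption with a futures-based forecast on synthetic monthly prices. All numbers are invented, including how informative the futures quote is assumed to be; this is not market data or the paper's method.

```python
# Toy comparison: carry-over (random-walk) forecast vs a futures-based
# forecast over a 12-month horizon. Prices are synthetic, not market data.
import numpy as np

rng = np.random.default_rng(0)
spot = 60 + np.cumsum(rng.normal(0, 1.5, 72))   # synthetic monthly spot path
actual = spot[12:]                               # realized price 12 months later
carry_over = spot[:-12]                          # "no change" assumption
# Assumed: the futures quote is partly informative about the realized price.
futures = 0.5 * carry_over + 0.5 * actual + rng.normal(0, 1.0, actual.size)

rmse = lambda f: np.sqrt(np.mean((f - actual) ** 2))
print(f"carry-over RMSE: {rmse(carry_over):.2f}  futures RMSE: {rmse(futures):.2f}")
```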

  14. The 'revealed preferences' theory: Assumptions and conjectures

    International Nuclear Information System (INIS)

    Green, C.H.

    1983-01-01

    As a kind of intuitive psychology, approaches based on the 'revealed preferences' theory for determining acceptable risks are a useful method for generating hypotheses. Given that reliability engineering develops faster than methods for determining reliability targets, the revealed-preferences approach is a necessary preliminary aid. Some of the assumptions on which the 'revealed preferences' theory is based are identified, analysed and then compared with experimentally obtained results. (orig./DG) [de

  15. Analysis On Political Speech Of Susilo Bambang Yudhoyono: Common Sense Assumption And Ideology

    Directory of Open Access Journals (Sweden)

    Sayit Abdul Karim

    2015-10-01

    Full Text Available This paper presents an analysis of the political speech of Susilo Bambang Yudhoyono (SBY), the former president of Indonesia, at the Indonesian conference on “Moving towards sustainability: together we must create the future we want”. Ideologies are closely linked to power and language because using language is the commonest form of social behavior, and the form of social behavior where we rely most on ‘common-sense’ assumptions. The objectives of this study are to discuss the common sense assumption and ideology by means of language use in SBY’s political speech, which is mainly grounded in Norman Fairclough’s theory of language and power in critical discourse analysis. There are two main problems of analysis, namely: first, what are the common sense assumption and ideology in Susilo Bambang Yudhoyono’s political speech; and second, how do they relate to each other in the political discourse? The data used in this study consisted of the written text of “Moving towards sustainability: together we must create the future we want”. A qualitative descriptive analysis was employed to analyze the common sense assumption and ideology in the written text of Susilo Bambang Yudhoyono’s political speech, which was delivered at the Riocentro Convention Center, Rio de Janeiro, on June 20, 2012. One dimension of ‘common sense’ is the meaning of words. The results showed that the common sense assumption and ideology conveyed through SBY’s specific words or expressions can significantly explain how political discourse is constructed and affected by SBY’s rule and position, life experience, and power relations. He used language as a powerful social tool to present his common sense assumption and ideology to convince his audiences and fellow citizens that the future of sustainability is an important agenda for all people.

  16. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    In recent years, increasing interest has been devoted to IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant...

  17. Superficial dose distribution in breast for tangential radiation treatment, Monte Carlo evaluation of Eclipse algorithms in case of phantom and patient geometries

    International Nuclear Information System (INIS)

    Chakarova, Roumiana; Gustafsson, Magnus; Bäck, Anna; Drugge, Ninni; Palm, Åsa; Lindberg, Andreas; Berglund, Mattias

    2012-01-01

    Purpose: The aim of this study is to examine experimentally and by the Monte Carlo method the accuracy of the Eclipse Pencil Beam Convolution (PBC) and Analytical Anisotropic Algorithm (AAA) algorithms in the superficial region (0–2 cm) of the breast for tangential photon beams in a phantom case as well as in a number of patient geometries. The aim is also to identify differences in how the patient computed tomography data are handled by the treatment planning system and in the Monte Carlo simulations in order to reduce the influence of these effects on the evaluation. Materials and methods: Measurements by thermoluminescent dosimeters and Gafchromic film are performed for 6 MV tangential irradiation of a cylindrical solid-water phantom. Tangential treatment of seven patients is investigated considering open beams. Dose distributions are obtained by the Eclipse PBC and AAA algorithms. Monte Carlo calculations are carried out with the BEAMnrc/DOSXYZnrc code package. Calculations are performed with a calculation grid of 1.25 × 1.25 × 5 mm³ for PBC and 2 × 2 × 5 mm³ for AAA and Monte Carlo, respectively. Dose comparison is performed in both dose and spatial domains by the normalized dose difference method. Results: Experimental profiles from the surface toward the geometrical center of the cylindrical phantom are obtained at the beam entrance and exit as well as laterally. Full dose is received beyond 2 mm in the lateral superficial region and beyond 7 mm at the beam entrance. Good agreement between experimental, Monte Carlo and AAA data is obtained, whereas PBC is seen to underestimate the entrance dose over the first 3–4 mm and the lateral dose by more than 5% up to 8 mm depth. In the patient cases considered, AAA and Monte Carlo show agreement within 3% dose and 4 mm spatial tolerance. PBC systematically underestimates the dose at the breast apex. The dimensions of the region out of tolerance vary with the local breast shape. Different interpretations of patient
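
    The record evaluates agreement with the normalized dose difference method at a 3%/4 mm tolerance. As a hedged stand-in, the sketch below implements a generic 1D gamma-index-style check (dose difference combined with distance-to-agreement), which is a related but not identical criterion; the profiles and tolerances are purely illustrative.

```python
# Gamma-index-style 1D agreement check: a generic stand-in for the
# normalized dose difference method, not that exact algorithm.
import numpy as np

def gamma_1d(x, d_eval, d_ref, dd=0.03, dta=4.0):
    """Fraction of points passing; x in mm, doses normalized to 1.0."""
    gam = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        # combined dose-difference / distance-to-agreement metric
        g2 = ((d_eval - di) / dd) ** 2 + ((x - xi) / dta) ** 2
        gam[i] = np.sqrt(g2.min())
    return np.mean(gam <= 1.0)

x = np.arange(0.0, 100.0, 1.0)            # positions [mm]
ref = np.exp(-x / 60.0)                    # toy reference depth-dose
ev = np.exp(-(x - 1.0) / 60.0) * 0.99      # shifted/scaled comparison profile
print(f"pass rate at 3%/4 mm: {gamma_1d(x, ev, ref):.2%}")
```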

  18. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    OpenAIRE

    Hazim Adnan Hashim; Rosli Bin Talif; Lina Hameed Ali

    2016-01-01

    The present study addresses effects of traumatic events such as the September 11 attacks on victims’ fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth. Thus, they ground people’s sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were deeply traumatic for Americans because they fundamentally changed Americans’ understanding of many aspects of life. The attacks led man...

  19. Models for waste life cycle assessment: Review of technical assumptions

    DEFF Research Database (Denmark)

    Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky

    2010-01-01

    A number of waste life cycle assessment (LCA) models have been gradually developed since the early 1990s, in a number of countries, usually independently from each other. Large discrepancies in results have been observed among different waste LCA models, although it has also been shown that results...... from different LCA studies can be consistent. This paper is an attempt to identify, review and analyse methodologies and technical assumptions used in various parts of selected waste LCA models. Several criteria were identified, which could have significant impacts on the results......, such as the functional unit, system boundaries, waste composition and energy modelling. The modelling assumptions of waste management processes, ranging from collection, transportation, intermediate facilities, recycling, thermal treatment, biological treatment, and landfilling, are obviously critical when comparing...

  20. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    Rose, Jeremy; Aaen, Ivan; Nielsen, Peter Axel

    2008-01-01

    Thinking about improving the management of software development in software firms is dominated by one approach: the capability maturity model devised and administered at the Software Engineering Institute at Carnegie Mellon University. Though CMM and its replacement, CMMI, are widely known and used...... thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps...

  1. Commentary: Considering Assumptions in Associations Between Music Preferences and Empathy-Related Responding

    Directory of Open Access Journals (Sweden)

    Susan A O'Neill

    2015-09-01

    Full Text Available This commentary considers some of the assumptions underpinning the study by Clark and Giacomantonio (2015). Their exploratory study examined relationships between young people's music preferences and their cognitive and affective empathy-related responses. First, the prescriptive assumption that music preferences can be measured according to how often an individual listens to a particular music genre is considered within axiology or value theory as a multidimensional construct (general, specific, and functional values). This is followed by a consideration of the causal assumption that if we increase young people's empathy through exposure to prosocial song lyrics, this will increase their prosocial behavior. It is suggested that the predictive power of musical preferences on empathy-related responding might benefit from a consideration of the larger pattern of psychological and subjective wellbeing within the context of developmental regulation across ontogeny that involves mutually influential individual—context relations.

  2. New Assumptions to Guide SETI Research

    Science.gov (United States)

    Colombano, S. P.

    2018-01-01

    The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.

  3. Questionable assumptions hampered interpretation of a network meta-analysis of primary care depression treatments.

    Science.gov (United States)

    Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente

    2016-03-01

    We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were important clinical and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). Overall network meta-analysis produced results well comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.
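
    To make the phrase "decomposing the Q statistic" concrete, the sketch below shows the standard split of Cochran's Q into within-group heterogeneity and a between-group remainder for a toy set of studies grouped by design. The effect sizes, variances, and grouping are invented, and this is a simplified illustration, not the full design-by-treatment decomposition used in network meta-analysis.

```python
# Simplified Q decomposition: Q_total = Q_within + Q_between for
# inverse-variance weights. All numbers are invented for illustration.
import numpy as np

y = np.array([0.30, 0.25, 0.40, 0.10, 0.05])   # study effects (toy log odds ratios)
v = np.array([0.02, 0.03, 0.02, 0.04, 0.03])   # within-study variances (toy)
g = np.array([0, 0, 0, 1, 1])                  # design/comparison membership

w = 1.0 / v
theta = (w * y).sum() / w.sum()                # fixed-effect pooled estimate
Q_total = (w * (y - theta) ** 2).sum()

Q_within = 0.0
for k in np.unique(g):
    wk, yk = w[g == k], y[g == k]
    theta_k = (wk * yk).sum() / wk.sum()
    Q_within += (wk * (yk - theta_k) ** 2).sum()   # heterogeneity inside each design

Q_between = Q_total - Q_within                      # inconsistency between designs
print(f"Q_total={Q_total:.2f}  Q_within={Q_within:.2f}  Q_between={Q_between:.2f}")
```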

  4. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    To shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, and then show by a generalized derivation that, if a model involves multiple stationary restrictions, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion.

  5. Measuring Productivity Change without Neoclassical Assumptions: A Conceptual Analysis

    NARCIS (Netherlands)

    B.M. Balk (Bert)

    2008-01-01

    The measurement of productivity change (or difference) is usually based on models that make use of strong assumptions such as competitive behaviour and constant returns to scale. This survey discusses the basics of productivity measurement and shows that one can dispense with most if not

  6. Exploring five common assumptions on Attention Deficit Hyperactivity Disorder

    NARCIS (Netherlands)

    Batstra, Laura; Nieweg, Edo H.; Hadders-Algra, Mijna

    The number of children diagnosed with attention deficit hyperactivity disorder (ADHD) and treated with medication is steadily increasing. The aim of this paper was to critically discuss five debatable assumptions on ADHD that may explain these trends to some extent. These are that ADHD (i) causes

  7. Implicit Assumptions in Special Education Policy: Promoting Full Inclusion for Students with Learning Disabilities

    Science.gov (United States)

    Kirby, Moira

    2017-01-01

    Introduction: Everyday millions of students in the United States receive special education services. Special education is an institution shaped by societal norms. Inherent in these norms are implicit assumptions regarding disability and the nature of special education services. The two dominant implicit assumptions evident in the American…

  8. Assumptions behind size-based ecosystem models are realistic

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.

    2016-01-01

    A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Fro...... that there is indeed a constructive role for a wide suite of ecosystem models to evaluate fishing strategies in an ecosystem context...

  9. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    Science.gov (United States)

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to the previous work.

  10. Does Artificial Neural Network Support Connectivism's Assumptions?

    Science.gov (United States)

    AlDahdouh, Alaa A.

    2017-01-01

    Connectivism was presented as a learning theory for the digital age and connectivists claim that recent developments in Artificial Intelligence (AI) and, more specifically, Artificial Neural Network (ANN) support their assumptions of knowledge connectivity. Yet, very little has been done to investigate this brave allegation. Does the advancement…

  11. Anti-Atheist Bias in the United States: Testing Two Critical Assumptions

    Directory of Open Access Journals (Sweden)

    Lawton K Swan

    2012-02-01

    Full Text Available Decades of opinion polling and empirical investigations have clearly demonstrated a pervasive anti-atheist prejudice in the United States. However, much of this scholarship relies on two critical and largely unaddressed assumptions: (a) that when people report negative attitudes toward atheists, they do so because they are reacting specifically to their lack of belief in God; and (b) that survey questions asking about attitudes toward atheists as a group yield reliable information about biases against individual atheist targets. To test these assumptions, an online survey asked a probability-based random sample of American adults (N = 618) to evaluate a fellow research participant (“Jordan”). Jordan garnered significantly more negative evaluations when identified as an atheist than when described as religious or when religiosity was not mentioned. This effect did not differ as a function of labeling (“atheist” versus “no belief in God”), or the amount of individuating information provided about Jordan. These data suggest that both assumptions are tenable: nonbelief—rather than extraneous connotations of the word “atheist”—seems to underlie the effect, and participants exhibited a marked bias even when confronted with an otherwise attractive individual.

  12. Sensitivity of the OMI ozone profile retrieval (OMO3PR) to a priori assumptions

    NARCIS (Netherlands)

    Mielonen, T.; De Haan, J.F.; Veefkind, J.P.

    2014-01-01

    We have assessed the sensitivity of the operational OMI ozone profile retrieval (OMO3PR) algorithm to a number of a priori assumptions. We studied the effect of stray light correction, surface albedo assumptions and a priori ozone profiles on the retrieved ozone profile. Then, we studied how to

  13. THE COMPLEX OF ASSUMPTION CATHEDRAL OF THE ASTRAKHAN KREMLIN

    Directory of Open Access Journals (Sweden)

    Savenkova Aleksandra Igorevna

    2016-08-01

    Full Text Available This article is devoted to an architectural and historical analysis of the constructions forming the complex of Assumption Cathedral of the Astrakhan Kremlin, which has not previously been considered as a subject of special research. Based on archival sources, photographic materials, publications and on-site investigations of the monuments, the creation history of the complete architectural complex, sustained in the single style of the Muscovite baroque and unique in its composite construction, is considered, and its interpretation in the all-Russian architectural context is offered. Typological features of the individual constructions come to light. The typology of the Prechistinsky bell tower has an untypical architectural solution: “hexagonal structure on octagonal and quadrangular structures”. The way of connecting the building of the Cathedral and the chambers by a passage was characteristic of monastic constructions and was exceedingly rare in kremlins, farmsteads and ensembles of city cathedrals. The composite scheme of the Assumption Cathedral includes the Lobnoye Mesto (“the Place of Execution”) located on an axis from the West; it is connected with the main building by a quarter-turn stair with a landing. The only prototype of the structure is the Lobnoye Mesto on Red Square in Moscow. The article also considers the version that the Place of Execution emerged on the basis of an earlier construction, a tower (“the Peal”) repeatedly mentioned in written sources in connection with S. Razin’s revolt. Metropolitan Sampson, trying to preserve the standing of the Astrakhan metropolitanate, built the Assumption Cathedral and the Place of Execution in direct appeal to the capital prototype, to emphasize continuity and close connection with Moscow.

  14. I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work

    Science.gov (United States)

    Horodyskyj, L.; Mead, C.; Anbar, A. D.

    2016-12-01

    Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks is similarly inconsistent and excessively loquacious. With such confusion both from the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.

  15. Fast ion confinement during high power tangential neutral beam injection into low plasma current discharges on the ISX-B tokamak

    International Nuclear Information System (INIS)

    Carnevali, A.; Scott, S.D.; Neilson, H.; Galloway, M.; Stevens, P.; Thomas, C.E.

    1988-01-01

    The beam ion thermalization process during tangential neutral beam injection in the ISX-B tokamak is investigated. The classical model is tested in co- and counter-injected discharges at low plasma current, a regime where large orbit width excursions enhance the importance of the loss regions. To test the model, experimental charge exchange spectra are compared with the predictions of an orbit following Monte Carlo code. Measurements of beam-plasma neutron emission and measured decay rates of the emission following beam turnoff provide additional information. Good agreement is found between theory and experiment. Furthermore, beam additivity experiments show that, globally, the confinement of beam ions remains classical, independently of the injected beam power. However, some experimental evidence suggests that the fast ion density in the plasma core did not increase with beam power in a way consistent with classical processes. (author). 35 refs, 17 figs, 3 tabs

  16. Leakage-Resilient Circuits without Computational Assumptions

    DEFF Research Database (Denmark)

    Dziembowski, Stefan; Faust, Sebastian

    2012-01-01

    Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models to incorporate leakage...... on computational assumptions, our results are purely information-theoretic. In particular, we do not make use of public key encryption, which was required in all previous works...... into the model and to construct schemes that are provably secure within them. We design a general compiler that transforms any cryptographic scheme, e.g., a block-cipher, into a functionally equivalent scheme which is resilient to any continual leakage provided that the following three requirements are satisfied...

  17. Being Explicit about Underlying Values, Assumptions and Views when Designing for Children in the IDC Community

    DEFF Research Database (Denmark)

    Skovbjerg, Helle Marie; Bekker, Tilde; Barendregt, Wolmet

    2016-01-01

    In this full-day workshop we want to discuss how the IDC community can make underlying assumptions, values and views regarding children and childhood in making design decisions more explicit. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting......, and intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design....

  18. Influence of chemical treatment on dimensional stability of narrow-leaved ash - part one: Tangential swelling

    Directory of Open Access Journals (Sweden)

    Popović Jasmina

    2012-01-01

    Full Text Available Dimensional change in wood occurs with the change in hygroscopic moisture content, as a consequence of available hydroxyl groups in wood constituents, which allow hydrogen bonding with water molecules. Various pretreatments of wood material are frequently applied in the wood processing industry. One of the main effects of such processes is the hydrolysis of hemicelluloses, which are the main carriers of free hydroxyl groups in wood material. Hence, the influence of water treatment and acetic acid treatment on the dimensional stability of narrow-leaved ash (Fraxinus angustifolia Vahl. ssp. Pannonica Soó & Simon) was examined in this paper. The duration of the treatments was 1 h, 2 h, 3 h and 4 h for both solvents; in addition, the acetic acid was used in concentrations of 3% and 6%. The dimensional stability of the control (reference) and treated sample groups was tested on oven-dried samples which were subsequently submerged in distilled water for 32 days. An increase in the dimensional stability of narrow-leaved ash was achieved with all three treatments (one with water and two with acetic acid solutions). At the same time, it was noticed that the results of water uptake and tangential swelling were not significantly affected by the duration of the treatments. [Project of the Ministry of Science of the Republic of Serbia, No. TP-031041]

  19. VITRECTOMY FOR INTERMEDIATE AGE-RELATED MACULAR DEGENERATION ASSOCIATED WITH TANGENTIAL VITREOMACULAR TRACTION: A CLINICOPATHOLOGIC CORRELATION.

    Science.gov (United States)

    Ziada, Jean; Hagenau, Felix; Compera, Denise; Wolf, Armin; Scheler, Renate; Schaumberger, Markus M; Priglinger, Siegfried G; Schumann, Ricarda G

    2018-03-01

    To describe the morphologic characteristics of the vitreomacular interface in intermediate age-related macular degeneration associated with tangential traction due to premacular membrane formation and to correlate with optical coherence tomography (OCT) findings and clinical data. Premacular membrane specimens were removed sequentially with the internal limiting membrane from 27 eyes of 26 patients with intermediate age-related macular degeneration during standard vitrectomy. Specimens were processed for immunocytochemical staining of epiretinal cells and extracellular matrix components. Ultrastructural analysis was performed using transmission electron microscopy. Spectral domain optical coherence tomography images and patient charts were evaluated in retrospect. Immunocytochemistry revealed hyalocytes and myofibroblasts as predominant cell types. Ultrastructural analysis demonstrated evidence of vitreoschisis in all eyes. Myofibroblasts with contractile properties were observed to span between folds of the internal limiting membrane and vitreous cortex collagen. Retinal pigment epithelial cells or inflammatory cells were not detected. Mean visual acuity (Snellen) showed significant improvement from 20/72 ± 20/36 to 20/41 ± 20/32 (P age-related macular degeneration predominantly consists of vitreous collagen, hyalocytes, and myofibroblasts with contractile properties. Vitreoschisis and vitreous-derived cells appear to play an important role in traction formation of this subgroup of eyes. In patients with intermediate age-related macular degeneration and contractile premacular membrane, release of traction by vitrectomy with internal limiting membrane peeling results in significantly functional and anatomical improvement.

  20. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    Science.gov (United States)

    Ryff, Luiz Carlos

    1996-01-01

    A feasible experiment is discussed which allows us to prove Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without introducing additional assumptions related to hidden-variable states. Only assumptions based on direct experimental observation are needed.

  1. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
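
    A minimal Monte Carlo power sketch in the spirit of this record, assuming the Python lifelines package: two groups are given different Weibull shapes so the proportional hazards assumption is violated, and the rejection rate of lifelines' Schoenfeld-residual-based test estimates power. Sample size, shapes, censoring, and replicate count are illustrative, not the paper's settings.

```python
# Estimate power to detect a PH violation via Monte Carlo simulation.
# Assumes the `lifelines` package; all parameters are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(1)

def simulate_trial(n=400):
    """Two groups with different Weibull shapes, so hazards are non-proportional."""
    x = rng.integers(0, 2, n)
    t = rng.weibull(np.where(x == 0, 1.0, 1.6))   # event times
    c = rng.uniform(0.5, 2.0, n)                  # random censoring times
    return pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "x": x})

pvals = []
for _ in range(200):                              # Monte Carlo replicates
    df = simulate_trial()
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    res = proportional_hazard_test(cph, df, time_transform="rank")
    pvals.append(float(np.asarray(res.p_value).ravel()[0]))

print("estimated power at alpha = 0.05:", np.mean(np.array(pvals) < 0.05))
```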

  2. The Impact of Beam Deposition on Bootstrap Current of Fast Ion Produced by Neutral Beam Tangential Injection

    International Nuclear Information System (INIS)

    Huang Qian-Hong; Gong Xue-Yu; Lu Xing-Qiang; Yu Jun; Cao Jin-Jia

    2015-01-01

    The density profile of fast ions arising from a tangentially injected diffuse neutral beam in tokamak plasma is calculated. The effects of mean free path and beam tangency radius on the density profile are discussed under typical HL-2A plasma parameters. The results show that the profile of fast ions is strongly peaked at the center of the plasma when the mean free path at the maximum deuteron density is larger than the minor radius, while the peak value decreases when the mean free path at the maximum deuteron density is larger than twice the minor radius, due to beam transmission loss. Moreover, the bootstrap current of fast ions for various mean free paths at the maximum deuteron density is calculated, and its density is shown to be closely related to the deposition of the neutral beam. With the electron return current considered, the net current density obviously decreases. Meanwhile, the peak central fast ion density increases when the beam tangency radius approaches the major radius, and the net bootstrap current increases rapidly with increasing beam tangency radius. (paper)
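
    The dependence of deposition on mean free path and tangency radius can be sketched with a simple exponential attenuation of the neutral beam along its chord. The toy model below assumes a circular plasma with a parabolic density profile and HL-2A-like radii; all parameters are invented for illustration, and the geometry is a midplane approximation, not the paper's calculation.

```python
# Toy tangential-beam deposition: attenuate a neutral beam along a chord
# with tangency radius R_tan through a parabolic-density plasma.
import numpy as np

R0, a = 1.65, 0.40            # major and minor radius [m] (HL-2A-like, assumed)
R_tan = 1.40                  # beam tangency radius [m], assumed
lam0 = 0.60                   # mean free path at peak density [m], assumed

s_max = np.sqrt((R0 + a) ** 2 - R_tan ** 2)      # half-length of the chord
s, ds = np.linspace(-s_max, s_max, 2001, retstep=True)
R = np.sqrt(R_tan**2 + s**2)                     # major radius along the chord
r = np.abs(R - R0)                               # midplane distance from axis
n_rel = np.clip(1 - (r / a) ** 2, 0, None)       # parabolic density, normalized

dtau = n_rel / lam0                              # local inverse mean free path
tau = np.cumsum(dtau) * ds                       # optical depth along the beam
H = dtau * np.exp(-tau)                          # fast-ion birth (deposition) profile
print("fraction ionized:", np.trapz(H, s))       # the remainder is shine-through
```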

  3. Origins and Traditions in Comparative Education: Challenging Some Assumptions

    Science.gov (United States)

    Manzon, Maria

    2018-01-01

    This article questions some of our assumptions about the history of comparative education. It explores new scholarship on key actors and ways of knowing in the field. Building on the theory of the social constructedness of the field of comparative education, the paper elucidates how power shapes our scholarly histories and identities.

  4. Cavitation control on a 2D hydrofoil through a continuous tangential injection of liquid: Experimental study

    Science.gov (United States)

    Timoshevskiy, M. V.; Zapryagaev, I. I.; Pervunin, K. S.; Markovich, D. M.

    2016-10-01

    In the paper, the possibility of active control of a cavitating flow over a 2D hydrofoil that replicates a scaled-down model of a high-pressure hydroturbine guide vane (GV) was tested. The flow manipulation was implemented by continuous tangential liquid injection at different flow rates through a spanwise slot in the foil surface. In the experiments, the hydrofoil was placed in the test channel at an attack angle of 9°. Different cavitation conditions were reached by varying the cavitation number and injection velocity. In order to study the time dynamics and spatial patterns of partial cavities, high-speed imaging was employed. A PIV method was used to measure the mean and fluctuating velocity fields over the hydrofoil. Hydroacoustic measurements were carried out by means of a pressure transducer to identify spectral characteristics of the cavitating flow. It was found that the present control technique is able to modify the partial cavity pattern (or even totally suppress cavitation) in the case of stable sheet cavitation and to change the amplitude of pressure pulsations at unsteady regimes. The injection technique also makes it possible to significantly influence the spatial distributions of the mean velocity and its turbulent fluctuations over the GV section for non-cavitating flow and sheet cavitation.
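
    For reference, the cavitation number that the experiment varies is the standard sigma = (p_inf − p_v) / (0.5 ρ U²). The snippet below evaluates it for water at roughly 20 °C; the inlet pressure and velocity are invented values, not the operating points of the experiment.

```python
# Cavitation number sigma = (p_inf - p_v) / (0.5 * rho * U**2).
# Fluid properties are for water near 20 °C; inputs are illustrative.
rho = 998.0        # water density [kg/m^3]
p_v = 2340.0       # vapor pressure of water [Pa] at ~20 °C

def cavitation_number(p_inf: float, U: float) -> float:
    return (p_inf - p_v) / (0.5 * rho * U ** 2)

print(cavitation_number(p_inf=101_325.0, U=10.0))   # ~1.98
```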

  5. High-pressure size exclusion chromatography analysis of dissolved organic matter isolated by tangential-flow ultra filtration

    Science.gov (United States)

    Everett, C.R.; Chin, Y.-P.; Aiken, G.R.

    1999-01-01

    A 1,000-Dalton tangential-flow ultrafiltration (TFUF) membrane was used to isolate dissolved organic matter (DOM) from several freshwater environments. The TFUF unit used in this study was able to completely retain a polystyrene sulfonate 1,800-Dalton standard. Unaltered and TFUF-fractionated DOM molecular weights were assayed by high-pressure size exclusion chromatography (HPSEC). The weight-averaged molecular weights of the retentates were larger than those of the raw water samples, whereas the filtrates were all significantly smaller and approximately the same size or smaller than the manufacturer-specified pore size of the membrane. Moreover, at 280 nm the molar absorptivity of the DOM retained by the ultrafilter is significantly larger than the material in the filtrate. This observation suggests that most of the chromophoric components are associated with the higher molecular weight fraction of the DOM pool. Multivalent metals in the aqueous matrix also affected the molecular weights of the DOM molecules. Typically, proton-exchanged DOM retentates were smaller than untreated samples. This TFUF system appears to be an effective means of isolating aquatic DOM by size, but the ultimate size of the retentates may be affected by the presence of metals and by configurational properties unique to the DOM phase.
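
    The weight-averaged molecular weight quoted in this record follows from the standard chromatogram moments: Mw = Σ(h_i·M_i)/Σh_i and Mn = Σh_i/Σ(h_i/M_i), with h_i the detector response and M_i the calibrated molecular weight of each elution slice. The sketch below applies these formulas to made-up data, not the study's chromatograms.

```python
# Weight- and number-averaged molecular weights from an HPSEC chromatogram.
# h is detector response, M the calibrated MW per slice; data are invented.
import numpy as np

h = np.array([0.2, 0.9, 1.0, 0.6, 0.2])      # absorbance at 280 nm (arbitrary)
M = np.array([3000, 2000, 1200, 800, 500])   # MW from calibration [Da]

Mw = (h * M).sum() / h.sum()                 # weight-averaged MW
Mn = h.sum() / (h / M).sum()                 # number-averaged MW
print(f"Mw={Mw:.0f} Da, Mn={Mn:.0f} Da, polydispersity={Mw/Mn:.2f}")
```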

  6. Validity of the isotropic thermal conductivity assumption in supercell lattice dynamics

    Science.gov (United States)

    Ma, Ruiyuan; Lukes, Jennifer R.

    2018-02-01

    Superlattices and nano phononic crystals have attracted significant attention due to their low thermal conductivities and their potential application as thermoelectric materials. A widely used expression to calculate thermal conductivity, presented by Klemens and expressed in terms of the relaxation time by Callaway and Holland, originates from the Boltzmann transport equation. In its most general form, this expression involves a direct summation of the heat current contributions from individual phonons of all wavevectors and polarizations in the first Brillouin zone. In common practice, the expression is simplified by making an isotropic assumption that converts the summation over wavevector to an integral over wavevector magnitude. The isotropic expression has been applied to superlattices and phononic crystals, but its validity for different supercell sizes has not been studied. In this work, the isotropic and direct summation methods are used to calculate the thermal conductivities of bulk Si, and Si/Ge quantum dot superlattices. The results show that the differences between the two methods increase substantially with the supercell size. These differences arise because the vibrational modes neglected in the isotropic assumption provide an increasingly important contribution to the thermal conductivity for larger supercells. To avoid the significant errors that can result from the isotropic assumption, direct summation is recommended for thermal conductivity calculations in superstructures.
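
    The contrast between the two methods can be shown in a few lines: the direct summation keeps each mode's directional velocity component, while the isotropic form averages |v|² over three directions. The sketch below uses synthetic mode data with deliberately anisotropic group velocities so the two estimates disagree; units and distributions are arbitrary, not from any real dispersion calculation.

```python
# Direct summation k_xx = (1/V) sum(C * v_x**2 * tau) vs the isotropic
# approximation k_iso = (1/3V) sum(C * |v|**2 * tau). Mode data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
C = rng.uniform(0.5, 1.0, n)                    # mode heat capacities (arb. units)
tau = rng.uniform(1.0, 5.0, n)                  # mode relaxation times
v = rng.normal(size=(n, 3)) * np.array([1.0, 1.0, 0.3])  # anisotropic velocities
V = 1.0                                          # normalizing volume

k_direct_xx = (C * v[:, 0] ** 2 * tau).sum() / V
k_iso = (C * (v ** 2).sum(axis=1) * tau).sum() / (3 * V)
print(k_direct_xx, k_iso)   # the two differ when mode velocities are anisotropic
```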

  7. The Solar Neighborhood. XLII. Parallax Results from the CTIOPI 0.9 m Program—Identifying New Nearby Subdwarfs Using Tangential Velocities and Locations on the H–R Diagram

    Science.gov (United States)

    Jao, Wei-Chun; Henry, Todd J.; Winters, Jennifer G.; Subasavage, John P.; Riedel, Adric R.; Silverstein, Michele L.; Ianna, Philip A.

    2017-11-01

    Parallaxes, proper motions, and optical photometry are presented for 51 systems consisting of 37 cool subdwarf and 14 additional high proper motion systems. Thirty-seven systems have parallaxes reported for the first time, 15 of which have proper motions of at least 1″ yr⁻¹. The sample includes 22 newly identified cool subdwarfs within 100 pc, of which three are within 25 pc, and an additional five subdwarfs from 100 to 160 pc. Two systems—LSR 1610-0040 AB and LHS 440 AB—are close binaries exhibiting clear astrometric perturbations that will ultimately provide important masses for cool subdwarfs. We use the accurate parallaxes and proper motions provided here, combined with additional data from our program and others, to determine that effectively all nearby stars with tangential velocities greater than 200 km s⁻¹ are subdwarfs. We compare a sample of 167 confirmed cool subdwarfs to nearby main sequence dwarfs and Pleiades members on an observational Hertzsprung–Russell diagram using M_V versus (V − K_s) to map trends of age and metallicity. We find that subdwarfs are clearly separated for spectral types K5–M5, indicating that the low metallicities of subdwarfs set them apart in the H–R diagram for (V − K_s) = 3–6. We then apply the tangential velocity cutoff and the subdwarf region of the H–R diagram to stars with parallaxes from Gaia Data Release 1 and the MEarth Project to identify a total of 29 new nearby subdwarf candidates that fall clearly below the main sequence.
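
    The tangential velocity cutoff used here follows from the standard relation v_t = 4.74 μ/π km s⁻¹, with proper motion μ in arcsec yr⁻¹ and parallax π in arcsec. The snippet below evaluates it for an invented star; the numbers are illustrative, not from the paper's sample.

```python
# Tangential velocity from proper motion and parallax: v_t = 4.74 * mu / pi,
# giving km/s for mu in arcsec/yr and parallax in arcsec. Inputs are invented.
def v_tan(mu_arcsec_per_yr: float, parallax_arcsec: float) -> float:
    return 4.74 * mu_arcsec_per_yr / parallax_arcsec

# 1.2 arcsec/yr at 40 pc (parallax 0.025 arcsec): above the ~200 km/s cutoff
print(v_tan(1.2, 0.025))   # 227.5 km/s -> subdwarf candidate by this criterion
```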

  8. South Atlantic Ocean circulation: Simulation experiments with a quasi-geostrophic model and assimilation of TOPEX/POSEIDON and ERS 1 altimeter data

    Science.gov (United States)

    Florenchie, P.; Verron, J.

    1998-10-01

    Simulation experiments of South Atlantic Ocean circulations are conducted with a 1/6°, four-layered, quasi-geostrophic model. By means of a simple nudging data assimilation procedure along satellite tracks, TOPEX/POSEIDON and ERS 1 altimeter measurements are introduced into the model to control the simulation of the basin-scale circulation for the period from October 1992 to September 1994. The model circulation appears to be strongly influenced by the introduction of altimeter data, offering a consistent picture of South Atlantic Ocean circulations. Comparisons with observations show that the assimilating model successfully simulates the kinematic behavior of a large number of surface circulation components. The assimilation procedure enables us to produce schematic diagrams of South Atlantic circulation in which patterns ranging from basin-scale currents to mesoscale eddies are portrayed in a realistic way, with respect to their complexity. The major features of the South Atlantic circulation are described and analyzed, with special emphasis on the Brazil-Malvinas Confluence region, the Subtropical Gyre with the formation of frontal structures, and the Agulhas Retroflection. The Agulhas eddy-shedding process has been studied extensively. Fourteen eddies appear to be shed during the 2-year experiment. Because of their strong surface topographic signature, Agulhas eddies have been tracked continuously during the assimilation experiment as they cross the South Atlantic basin westward. Other effects of the assimilation procedure are shown, such as the intensification of the Subtropical Gyre, the appearance of a strong seasonal cycle in the Brazil Current transport, and the increase of the mean Brazil Current transport. This last result, combined with the westward orientation of the Agulhas eddies' trajectories, leads to a southward transport of mean eddy kinetic energy across 30°S.
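
    The "simple nudging" used here is Newtonian relaxation: a term −γ(ψ − ψ_obs) added to the model tendency wherever the satellite tracks provide data. The sketch below is a minimal schematic of that update on a toy grid; the relaxation time, track pattern, and placeholder dynamics are all assumptions for illustration, not the actual QG model.

```python
# Minimal nudging (Newtonian relaxation) sketch: relax the model state
# toward "observations" only on the satellite ground tracks. Toy setup.
import numpy as np

ny, nx = 64, 64
dt = 3600.0                       # time step [s], assumed
gamma = 1.0 / (5 * 86400.0)       # relaxation rate: 5-day e-folding, assumed

psi = np.zeros((ny, nx))          # model streamfunction (toy state)
psi_obs = np.ones((ny, nx))       # altimeter-derived field mapped to the grid
on_track = np.zeros((ny, nx), dtype=bool)
on_track[::8, :] = True           # idealized ground-track sampling pattern

def dynamics(psi):
    """Placeholder for the QG tendency (advection, beta effect, friction)."""
    return np.zeros_like(psi)

tend = dynamics(psi)
tend[on_track] += -gamma * (psi - psi_obs)[on_track]   # nudging term on tracks
psi += dt * tend                                       # one forward step
```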

  9. Behavioural assumptions in labour economics: Analysing social security reforms and labour market transitions

    OpenAIRE

    van Huizen, T.M.

    2012-01-01

    The aim of this dissertation is to test behavioural assumptions in labour economics models and thereby improve our understanding of labour market behaviour. The assumptions under scrutiny in this study are derived from an analysis of recent influential policy proposals: the introduction of savings schemes in the system of social security. A central question is how this reform will affect labour market incentives and behaviour. Part I (Chapter 2 and 3) evaluates savings schemes. Chapter 2 exam...

  10. Evaluating growth assumptions using diameter or radial increments in natural even-aged longleaf pine

    Science.gov (United States)

    John C. Gilbert; Ralph S. Meldahl; Jyoti N. Rayamajhi; John S. Kush

    2010-01-01

    When using increment cores to predict future growth, one often assumes future growth is identical to past growth for individual trees. Once this assumption is accepted, a decision has to be made between which growth estimate should be used, constant diameter growth or constant basal area growth. Often, the assumption of constant diameter growth is used due to the ease...
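
    The two growth assumptions named above diverge because basal area grows with the square of diameter, so a constant basal-area increment implies a shrinking diameter increment. The sketch below projects a tree ten years ahead under both assumptions; the starting diameter and past increment are invented for illustration.

```python
# Project future diameter under (1) constant diameter increment and
# (2) constant basal-area increment, BA = pi * D**2 / 4. Inputs are invented.
import math

D0, dD = 30.0, 0.5            # current diameter [cm], past annual increment [cm]
BA0 = math.pi * D0 ** 2 / 4
# basal-area increment implied by the last ring (from D0 - dD to D0)
dBA = math.pi * (D0 ** 2 - (D0 - dD) ** 2) / 4

D_diam = D0 + 10 * dD                                  # 10 yr, constant dD
D_ba = math.sqrt(4 * (BA0 + 10 * dBA) / math.pi)       # 10 yr, constant dBA
print(f"constant dD: {D_diam:.1f} cm, constant dBA: {D_ba:.1f} cm")
```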

  11. Fair-sampling assumption is not necessary for testing local realism

    International Nuclear Information System (INIS)

    Berry, Dominic W.; Jeong, Hyunseok; Stobinska, Magdalena; Ralph, Timothy C.

    2010-01-01

    Almost all Bell inequality experiments to date have used postselection and therefore relied on the fair sampling assumption for their interpretation. The standard form of the fair sampling assumption is that the loss is independent of the measurement settings, so the ensemble of detected systems provides a fair statistical sample of the total ensemble. This is often assumed to be needed to interpret Bell inequality experiments as ruling out hidden-variable theories. Here we show that it is not necessary; the loss can depend on measurement settings, provided the detection efficiency factorizes as a function of the measurement settings and any hidden variable. This condition implies that Tsirelson's bound must be satisfied for entangled states. On the other hand, we show that it is possible for Tsirelson's bound to be violated while the Clauser-Horne-Shimony-Holt (CHSH)-Bell inequality still holds for unentangled states, and present an experimentally feasible example.

  12. Robust optimization methods for cardiac sparing in tangential breast IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8 (Canada); Lee, Jenny [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Chan, Timothy C. Y. [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada); Purdie, Thomas G. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada)

    2015-05-15

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the
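
    A toy version of the robust formulation described above can be written as a linear program: choose nonnegative beamlet weights minimizing mean heart dose while the PMF-weighted target dose meets prescription for every breathing distribution in a finite uncertainty set. The sketch below uses random stand-in dose-influence matrices and invented PMFs, not clinical data or the authors' exact model.

```python
# Toy robust LP: minimize mean heart dose subject to target coverage for
# every breathing PMF in a finite uncertainty set. All data are stand-ins.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n_b, n_t, n_h, n_phase = 20, 15, 10, 4
Dt = rng.uniform(0.5, 1.5, (n_phase, n_t, n_b))   # target dose-influence per phase
Dh = rng.uniform(0.0, 0.4, (n_h, n_b))            # heart dose-influence
pmfs = [np.array([0.4, 0.3, 0.2, 0.1]),           # uncertainty set of breathing PMFs
        np.array([0.25, 0.25, 0.25, 0.25])]
presc = 1.0                                       # prescription dose (normalized)

c = Dh.mean(axis=0)                               # mean heart dose per unit weight
# coverage constraints: -(sum_p pmf[p] * Dt[p]) @ w <= -presc, for each PMF
A_ub = np.vstack([-np.tensordot(p, Dt, axes=1) for p in pmfs])
b_ub = -presc * np.ones(A_ub.shape[0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_b)
print(res.status, res.x.round(3))                 # 0 = optimal beamlet weights
```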

  13. Validation/Uncertainty Quantification for Large Eddy Simulations of the heat flux in the Tangentially Fired Oxy-Coal Alstom Boiler Simulation Facility

    Energy Technology Data Exchange (ETDEWEB)

    Smith, P.J.; Eddings, E.G.; Ring, T.; Thornock, J.; Draper, T.; Isaac, B.; Rezeai, D.; Toth, P.; Wu, Y.; Kelly, K.

    2014-08-01

    The objective of this task is to produce predictive capability with quantified uncertainty bounds for the heat flux in commercial-scale, tangentially fired, oxy-coal boilers. Validation data came from the Alstom Boiler Simulation Facility (BSF) for tangentially fired, oxy-coal operation. This task brings together experimental data collected under Alstom’s DOE project for measuring oxy-firing performance parameters in the BSF with this University of Utah project for large eddy simulation (LES) and validation/uncertainty quantification (V/UQ). The Utah work includes V/UQ with measurements in the single-burner facility where advanced strategies for O2 injection can be more easily controlled and data more easily obtained. Highlights of the work include: • Simulations of Alstom’s 15 megawatt (MW) BSF, exploring the uncertainty in thermal boundary conditions. A V/UQ analysis showed consistency between experimental results and simulation results, identifying uncertainty bounds on the quantities of interest for this system (Subtask 9.1) • A simulation study of the University of Utah’s oxy-fuel combustor (OFC) focused on heat flux (Subtask 9.2). A V/UQ analysis was used to show consistency between experimental and simulation results. • Measurement of heat flux and temperature with new optical diagnostic techniques and comparison with conventional measurements (Subtask 9.3). Various optical diagnostics systems were created to provide experimental data to the simulation team. The final configuration utilized a mid-wave infrared (MWIR) camera to measure heat flux and temperature, which was synchronized with a high-speed, visible camera to utilize two-color pyrometry to measure temperature and soot concentration. • Collection of heat flux and temperature measurements in the University of Utah’s OFC for use in subtasks 9.2 and 9.3 (Subtask 9.4). Several replicates were carried out to better assess the experimental error. Experiments were specifically designed for the

  14. Bank stress testing under different balance sheet assumptions

    OpenAIRE

    Busch, Ramona; Drescher, Christian; Memmel, Christoph

    2017-01-01

    Using unique supervisory survey data on the impact of a hypothetical interest rate shock on German banks, we analyse price and quantity effects on banks' net interest margin components under different balance sheet assumptions. In the first year, the cross-sectional variation of banks' simulated price effect is nearly eight times as large as the one of the simulated quantity effect. After five years, however, the importance of both effects converges. Large banks adjust their balance sheets mo...

  15. Are Prescription Opioids Driving the Opioid Crisis? Assumptions vs Facts.

    Science.gov (United States)

    Rose, Mark Edmund

    2018-04-01

    Sharp increases in opioid prescriptions, and associated increases in overdose deaths in the 2000s, evoked widespread calls to change perceptions of opioid analgesics. Medical literature discussions of opioid analgesics began emphasizing patient and public health hazards. Repetitive exposure to this information may influence physician assumptions. While highly consequential to patients with pain whose function and quality of life may benefit from opioid analgesics, current assumptions about prescription opioid analgesics, including their role in the ongoing opioid overdose epidemic, have not been scrutinized. Information was obtained by searching PubMed, governmental agency websites, and conference proceedings. Opioid analgesic prescribing and associated overdose deaths both peaked around 2011 and are in long-term decline; the sharp overdose increase recorded in 2014 was driven by illicit fentanyl and heroin. Nonmethadone prescription opioid analgesic deaths, in the absence of co-ingested benzodiazepines, alcohol, or other central nervous system/respiratory depressants, are infrequent. Within five years of initial prescription opioid misuse, 3.6% initiate heroin use. The United States consumes 80% of the world opioid supply, but opioid access is nonexistent for 80% and severely restricted for 4.1% of the global population. Many current assumptions about opioid analgesics are ill-founded. Illicit fentanyl and heroin, not opioid prescribing, now fuel the current opioid overdose epidemic. National discussion has often neglected the potentially devastating effects of uncontrolled chronic pain. Opioid analgesic prescribing and related overdoses are in decline, at great cost to patients with pain who have benefited or may benefit from, but cannot access, opioid analgesic therapy.

  16. Moving from assumption to observation: Implications for energy and emissions impacts of plug-in hybrid electric vehicles

    International Nuclear Information System (INIS)

    Davies, Jamie; Kurani, Kenneth S.

    2013-01-01

    Plug-in hybrid electric vehicles (PHEVs) are currently for sale in most parts of the United States, Canada, Europe, and Japan. These vehicles are promoted as providing distinct consumer and public benefits at the expense of grid electricity. However, the specific benefits or impacts of PHEVs ultimately rely on consumers’ purchase and vehicle-use patterns. While considerable effort has been dedicated to understanding PHEV impacts on a per-mile basis, few studies have assessed the impacts of PHEVs given actual consumer use patterns or operating conditions. Instead, simplifying assumptions have been made about the types of cars individual consumers will choose to purchase and how they will drive and charge them. Here, we highlight some of these consumer purchase and use assumptions and the studies that have employed them, and compare these assumptions to actual consumer data recorded in a PHEV demonstration project. Using simulation and hypothetical scenarios, we discuss the implications for PHEV impact analyses and policy if assumptions about key PHEV consumer use variables such as vehicle choice, home charging frequency, distribution of driving distances, and access to workplace charging were to change. -- Highlights:
    • The specific benefits or impacts of PHEVs ultimately rely on consumers’ purchase and vehicle-use patterns.
    • Simplifying, untested assumptions have been made by prior studies about PHEV consumer driving, charging, and vehicle purchase behaviors.
    • Some simplifying assumptions do not match observed data from a PHEV demonstration project.
    • Changing the assumptions about PHEV consumer driving, charging, and vehicle purchase behaviors affects estimates of PHEV impacts.
    • Premature simplification may have lasting consequences for standard setting and performance-based incentive programs which rely on these estimates.
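
    The sensitivity described in the record above can be made concrete by computing the electrified share of miles (the utility factor) under different assumed daily-distance distributions and charging frequencies. A hedged sketch (the lognormal parameters, 35-mile electric range, and charging probabilities are invented for illustration, not values from the cited demonstration project):

        import numpy as np

        rng = np.random.default_rng(0)

        def utility_factor(electric_range_mi, p_charge_per_night, n_days=100_000):
            """Fraction of miles driven electrically, by simple Monte Carlo.

            Daily driving distance is drawn from a lognormal distribution
            (hypothetical parameters); the battery is refilled only on nights
            when the driver charges, and each day uses charge first, then gasoline.
            """
            distances = rng.lognormal(mean=3.2, sigma=0.8, size=n_days)  # median ~24.5 mi/day
            charged = rng.random(n_days) < p_charge_per_night
            soc = electric_range_mi  # state of charge in miles, start full
            electric_miles = total_miles = 0.0
            for d, c in zip(distances, charged):
                electric_miles += min(d, soc)
                total_miles += d
                soc = max(soc - d, 0.0)
                if c:
                    soc = electric_range_mi
            return electric_miles / total_miles

        for p in (1.0, 0.75, 0.5):
            print(f"nightly charging prob {p:.2f}: UF = {utility_factor(35.0, p):.2f}")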

  17. Data-driven smooth tests of the proportional hazards assumption

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2007-01-01

    Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007

  18. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, holds that the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.

  19. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, holds that the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576
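
    The contrast described in the two records above can be illustrated with the "size principle": under strong sampling, each observed sentence multiplies the likelihood of a hypothesis language H by 1/|H|, so a smaller language consistent with the data gains posterior mass exponentially, while under weak sampling the likelihood only checks membership. A toy sketch (the two-hypothesis setup and language sizes are invented for illustration):

        # Toy comparison of strong vs. weak sampling for two nested hypotheses:
        # H_small has 10 grammatical sentences, H_large has 100, and H_small is
        # a subset of H_large. All observed sentences belong to both languages.

        def posterior_small(n_obs, size_small=10, size_large=100, prior_small=0.5):
            """P(H_small | data) under strong sampling (uniform sampling from H)."""
            like_small = (1.0 / size_small) ** n_obs
            like_large = (1.0 / size_large) ** n_obs
            num = prior_small * like_small
            return num / (num + (1.0 - prior_small) * like_large)

        for n in (0, 1, 3, 10):
            print(f"strong sampling, {n:2d} observations: P(H_small) = {posterior_small(n):.4f}")

        # Under weak sampling the likelihood is 1 for any consistent hypothesis,
        # so the posterior never moves from the prior: absence of a construction
        # is not evidence of ungrammaticality.
        print(f"weak sampling, any n: P(H_small) = {0.5:.4f}")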

  20. Investigating Teachers' and Students' Beliefs and Assumptions about CALL Programme at Caledonian College of Engineering

    Science.gov (United States)

    Ali, Holi Ibrahim Holi

    2012-01-01

    This study is set to investigate students' and teachers' perceptions and assumptions about the newly implemented CALL programme at the School of Foundation Studies, Caledonian College of Engineering, Oman. Two versions of a questionnaire were administered to 24 teachers and 90 students to collect their beliefs and assumptions about the CALL programme. The…

  1. The Metatheoretical Assumptions of Literacy Engagement: A Preliminary Centennial History

    Science.gov (United States)

    Hruby, George G.; Burns, Leslie D.; Botzakis, Stergios; Groenke, Susan L.; Hall, Leigh A.; Laughter, Judson; Allington, Richard L.

    2016-01-01

    In this review of literacy education research in North America over the past century, the authors examined the historical succession of theoretical frameworks on students' active participation in their own literacy learning, and in particular the metatheoretical assumptions that justify those frameworks. The authors used "motivation" and…

  2. On the effect of a tangential discontinuity on ions specularly reflected at an oblique shock

    International Nuclear Information System (INIS)

    Burgess, D.

    1989-01-01

    In seeking to explain the events observed close to the Earth's bow shock known as hot, diamagnetic cavities (HDC), or active current sheets (ACS), attention has focused on the microphysics of the interaction of a magnetic field directional discontinuity and a collisionless, supercritical shock. Here the author investigates the case of a tangential discontinuity (TD) convecting into a shock at some arbitrary angle. As a first stage he adopted an approach in which test particles represent ions specularly reflected at the shock front. Widely different behavior is possible depending on the sense of ion gyration relative to the TD. Particles can be injected into the plane of the TD so that they travel upstream trapped close to the TD. This implies that ACS events, presumed to be the result of the interaction of the solar wind with a large density reflected component, are detached from the bow shock. For other geometries, ions interact with the TD but stay close to the shock, implying that ACS events are modifications of the shock. The TD can deprive a limited spatial region of a downstream reflected gyrating ion population (necessary for the quasi-perpendicular supercritical shock to be steady), and so he could anticipate where the shock will not be in equilibrium, and consequently where strong reflection may occur. The detailed behavior of the shock in such a situation must be investigated with self-consistent simulations

  3. LONGITUDINAL RESIDUAL AND TANGENTIAL STRAIN (LRS and LRT IN SIX Eucalyptus spp. CLONES

    Directory of Open Access Journals (Sweden)

    Paulo Fernando Trugilho

    2006-09-01

    The species of the Eucalyptus genus present high levels of growth stress. These stresses are mechanical efforts generated during tree growth that help maintain the balance of the crown in response to environmental agents (light, wind, and inclination of the land) and silvicultural agents (pruning, thinning, and planting density). Growth stresses are responsible for end splits in logs and boards and for warping after sawing. This research evaluated the level of growth stress, measured by the longitudinal residual and tangential strain (LRS and LRT), around the circumference of the trunks of living trees of six clones of Eucalyptus spp. at the age of 10.5 years, and verified the effect of the planting parcel. The clones belong to VMM-AGRO and come from a clonal test area implanted on the Bom Sucesso farm, located in Vazante-MG. The experiment followed a completely randomized design in a factorial arrangement with two factors (clone and parcel) and three repetitions. The results indicated that the average LRS was 0.093 mm and the average LRT was 0.025 mm. For LRS, the clone and planting parcel effects were significant, while the interaction effect was not; for LRT, the parcel and interaction effects were significant, while the clone effect was not. Clones 44, 58 and 47 presented the smallest levels and better distributions of LRS, while clones 27, 44 and 58 presented the highest LRS levels. Clones 44 and 58 presented the best distribution and the smallest level of growth stress and may be considered potentially apt for producing sawn wood or solid wood.

  4. Testing the rationality assumption using a design difference in the TV game show 'Jeopardy'

    OpenAIRE

    Sjögren Lindquist, Gabriella; Säve-Söderbergh, Jenny

    2006-01-01

    This paper empirically investigates the rationality assumption commonly applied in economic modeling by exploiting a design difference in the game show Jeopardy between the US and Sweden. In particular, we address the assumption of individuals’ capabilities to process complex mathematical problems to find optimal strategies. The vital difference is that US contestants are given explicit information before they act, while Swedish contestants individually need to calculate the same info...

  5. Assumptions for the Annual Energy Outlook 1992

    International Nuclear Information System (INIS)

    1992-01-01

    This report serves as an auxiliary document to the Energy Information Administration (EIA) publication Annual Energy Outlook 1992 (AEO) (DOE/EIA-0383(92)), released in January 1992. The AEO forecasts were developed for five alternative cases and consist of energy supply, consumption, and price projections by major fuel and end-use sector, which are published at a national level of aggregation. The purpose of this report is to present important quantitative assumptions, including world oil prices and macroeconomic growth, underlying the AEO forecasts. The report has been prepared in response to external requests, as well as analyst requirements for background information on the AEO and studies based on the AEO forecasts.

  6. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI) via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval, and crown archetypes are typically assumed to contain a turbid medium.

  7. The fall of the Northern Unicorn: tangential motions in the Galactic anticentre with SDSS and Gaia

    Science.gov (United States)

    de Boer, T. J. L.; Belokurov, V.; Koposov, S. E.

    2018-01-01

    We present the first detailed study of the behaviour of the stellar proper motion across the entire Galactic anticentre area visible in the Sloan Digital Sky Survey (SDSS) data. We use recalibrated SDSS astrometry in combination with positions from Gaia DR1 to provide tangential motion measurements with a systematic uncertainty <5 km s-1 for the Main Sequence stars at the distance of the Monoceros Ring. We demonstrate that Monoceros members rotate around the Galaxy with azimuthal speeds of ∼230 km s-1, only slightly lower than that of the Sun. Additionally, both vertical and azimuthal components of their motion are shown to vary considerably but gradually as a function of Galactic longitude and latitude. The stellar overdensity in the anti-centre region can be split into two components, the narrow, stream-like ACS and the smooth Ring. According to our analysis, these two structures show very similar but clearly distinct kinematic trends, which can be summarized as follows: the amplitude of the velocity variation in vϕ and vz in the ACS is higher compared to the Ring, whose velocity gradients appear to be flatter. Currently, no model available can explain the entirety of the data in this area of the sky. However, the new accurate kinematic map introduced here should provide strong constraints on the genesis of the Monoceros Ring and the associated substructure.
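
    The tangential velocities quoted in proper-motion work of this kind follow from the standard conversion v_t [km/s] = 4.74 × μ [mas/yr] × d [kpc], where 4.74 is one astronomical unit in km divided by the seconds in a Julian year. A minimal sketch (the proper motion and distance below are illustrative round numbers, not the paper's measurements):

        K = 4.74047  # km/s per (arcsec/yr * pc); equivalently per (mas/yr * kpc)

        def tangential_velocity(mu_mas_per_yr: float, distance_kpc: float) -> float:
            """Transverse velocity in km/s from proper motion (mas/yr) and distance (kpc)."""
            return K * mu_mas_per_yr * distance_kpc

        # Illustrative numbers: a Monoceros-like star at ~10 kpc with mu ~ 5 mas/yr.
        print(f"v_t = {tangential_velocity(5.0, 10.0):.0f} km/s")  # ~237 km/s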

  8. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-05-23

    Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy, and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there is a need for accurate and reliable computational hot spot prediction methods. Compared to supervised hot spot prediction algorithms, semi-supervised prediction methods can take into consideration both the labeled and unlabeled residues in the dataset during the prediction procedure. The transductive support vector machine has been utilized for this task and demonstrated better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction that considers all three semi-supervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones, along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
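
    The propagation step at the heart of such iterative schemes can be sketched generically. The following is a standard normalized-affinity label-propagation update (F ← αSF + (1−α)Y, in the style of Zhou et al.), not the authors' IterPropMCS, and the residues, features, and affinities are synthetic:

        import numpy as np

        def propagate_labels(W, y, alpha=0.9, n_iter=200):
            """Generic graph label propagation (not the paper's IterPropMCS).

            W : (n, n) symmetric non-negative affinity matrix
            y : (n,) labels in {+1, -1} for labeled residues, 0 for unlabeled
            Iterates F <- alpha * S @ F + (1 - alpha) * y, with S the
            symmetrically normalized affinity matrix D^-1/2 W D^-1/2.
            """
            d = W.sum(axis=1)
            S = W / np.sqrt(np.outer(d, d))
            F = y.astype(float).copy()
            for _ in range(n_iter):
                F = alpha * S @ F + (1 - alpha) * y
            return np.sign(F)

        # Synthetic example: 6 residues, affinities from a Gaussian kernel on 1-D features.
        x = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
        W = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.05)
        np.fill_diagonal(W, 0.0)
        y = np.array([+1, 0, 0, 0, 0, -1])  # one hot spot, one non-hot-spot labeled
        print(propagate_labels(W, y))  # expected: first cluster +1, second cluster -1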

  9. Automatic ethics: the effects of implicit assumptions and contextual cues on moral behavior.

    Science.gov (United States)

    Reynolds, Scott J; Leavitt, Keith; DeCelles, Katherine A

    2010-07-01

    We empirically examine the reflexive or automatic aspects of moral decision making. To begin, we develop and validate a measure of an individual's implicit assumption regarding the inherent morality of business. Then, using an in-basket exercise, we demonstrate that an implicit assumption that business is inherently moral impacts day-to-day business decisions and interacts with contextual cues to shape moral behavior. Ultimately, we offer evidence supporting a characterization of employees as reflexive interactionists: moral agents whose automatic decision-making processes interact with the environment to shape their moral behavior.

  10. A critical assessment of the ecological assumptions underpinning compensatory mitigation of salmon-derived nutrients

    Science.gov (United States)

    Collins, Scott F.; Marcarelli, Amy M.; Baxter, Colden V.; Wipfli, Mark S.

    2015-01-01

    We critically evaluate some of the key ecological assumptions underpinning the use of nutrient replacement as a means of recovering salmon populations and a range of other organisms thought to be linked to productive salmon runs. These assumptions include: (1) nutrient mitigation mimics the ecological roles of salmon, (2) mitigation is needed to replace salmon-derived nutrients and stimulate primary and invertebrate production in streams, and (3) food resources in rearing habitats limit populations of salmon and resident fishes. First, we call into question assumption one because an array of evidence points to the multi-faceted role played by spawning salmon, including disturbance via redd-building, nutrient recycling by live fish, and consumption by terrestrial consumers. Second, we show that assumption two may require qualification based upon a more complete understanding of nutrient cycling and productivity in streams. Third, we evaluate the empirical evidence supporting food limitation of fish populations and conclude it has been only weakly tested. On the basis of this assessment, we urge caution in the application of nutrient mitigation as a management tool. Although applications of nutrients and other materials intended to mitigate for lost or diminished runs of Pacific salmon may trigger ecological responses within treated ecosystems, contributions of these activities toward actual mitigation may be limited.

  11. ψ -ontology result without the Cartesian product assumption

    Science.gov (United States)

    Myrvold, Wayne C.

    2018-05-01

    We introduce a weakening of the preparation independence postulate of Pusey et al. [Nat. Phys. 8, 475 (2012), 10.1038/nphys2309] that does not presuppose that the space of ontic states resulting from a product-state preparation can be represented by the Cartesian product of subsystem state spaces. On the basis of this weakened assumption, it is shown that, in any model that reproduces the quantum probabilities, any pair of pure quantum states |ψ⟩, |ϕ⟩ with |⟨ψ|ϕ⟩| ≤ 1/√2 must be ontologically distinct.

  12. Using Contemporary Art to Challenge Cultural Values, Beliefs, and Assumptions

    Science.gov (United States)

    Knight, Wanda B.

    2006-01-01

    Art educators, like many other educators born or socialized within the main-stream culture of a society, seldom have an opportunity to identify, question, and challenge their cultural values, beliefs, assumptions, and perspectives because school culture typically reinforces those they learn at home and in their communities (Bush & Simmons, 1990).…

  13. Assumptions and Challenges of Open Scholarship

    Directory of Open Access Journals (Sweden)

    George Veletsianos

    2012-10-01

    Researchers, educators, policymakers, and other education stakeholders hope and anticipate that openness and open scholarship will generate positive outcomes for education and scholarship. Given the emerging nature of open practices, educators and scholars are finding themselves in a position in which they can shape and/or be shaped by openness. The intention of this paper is (a) to identify the assumptions of the open scholarship movement and (b) to highlight challenges associated with the movement’s aspirations of broadening access to education and knowledge. Through a critique of technology use in education, an understanding of educational technology narratives and their unfulfilled potential, and an appreciation of the negotiated implementation of technology use, we hope that this paper helps spark a conversation for a more critical, equitable, and effective future for education and open scholarship.

  14. World assumptions, posttraumatic stress and quality of life after a natural disaster: A longitudinal study

    Science.gov (United States)

    2012-01-01

    Background: Changes in world assumptions are a fundamental concept within theories that explain posttraumatic stress disorder. The objective of the present study was to gain a greater understanding of how changes in world assumptions are related to quality of life and posttraumatic stress symptoms after a natural disaster. Methods: A longitudinal study of 574 Norwegian adults who survived the Southeast Asian tsunami in 2004 was undertaken. Multilevel analyses were used to identify which factors at six months post-tsunami predicted quality of life and posttraumatic stress symptoms two years post-tsunami. Results: Good quality of life and posttraumatic stress symptoms were negatively related. However, major differences in the predictors of these outcomes were found. Females reported significantly higher quality of life and more posttraumatic stress than men. The association between level of exposure to the tsunami and quality of life seemed to be mediated by posttraumatic stress. Negative perceived changes in the assumption “the world is just” were related to adverse outcome in both quality of life and posttraumatic stress. Positive perceived changes in the assumptions “life is meaningful” and “feeling that I am a valuable human” were associated with higher levels of quality of life but not with posttraumatic stress. Conclusions: Quality of life and posttraumatic stress symptoms demonstrate differences in their etiology. World assumptions may be less specifically related to posttraumatic stress than has been postulated in some cognitive theories. PMID:22742447

  15. Oil production, oil prices, and macroeconomic adjustment under different wage assumptions

    International Nuclear Information System (INIS)

    Harvie, C.; Maleka, P.T.

    1992-01-01

    In a previous paper, one of the authors developed a simple model to identify the possible macroeconomic adjustment processes arising in an economy experiencing a temporary period of oil production, under alternative wage adjustment assumptions, namely nominal and real wage rigidity. Certain assumptions were made regarding the characteristics of actual production, the permanent revenues generated from that oil production, and the net exports/imports of oil. The role of the price of oil, and possible changes in that price, was essentially ignored. Here we attempt to incorporate the price of oil, as well as changes in that price, in conjunction with the production of oil, the objective being to identify the contribution which the price of oil, and changes in it, make to the adjustment process itself. The emphasis in this paper is not on a mathematical derivation and analysis of the model's adjustment dynamics or its comparative statics, but rather on deriving simulation results from the model for a specific assumed case, using a numerical algorithm suited to the theoretical framework employed here. The results presented suggest that although the adjustment profiles of the macroeconomic variables of interest remain fundamentally the same under either wage adjustment assumption, the magnitude of these adjustments is increased. Hence, to derive a more accurate picture of the dimensions of adjustment of these macroeconomic variables, it is essential to include the price of oil as well as changes in that price. (Author)

  16. Spatial Angular Compounding for Elastography without the Incompressibility Assumption

    OpenAIRE

    Rao, Min; Varghese, Tomy

    2005-01-01

    Spatial-angular compounding is a new technique that enables the reduction of noise artifacts in ultrasound elastography. Previous results using spatial angular compounding, however, were based on the use of the tissue incompressibility assumption. Compounded elastograms were obtained from a spatially-weighted average of local strain estimated from radiofrequency echo signals acquired at different insonification angles. In this paper, we present a new method for reducing the noise artifacts in...

  17. First assumptions and overlooking competing causes of death

    DEFF Research Database (Denmark)

    Leth, Peter Mygind; Andersen, Anh Thao Nguyen

    2014-01-01

    Determining the most probable cause of death is important, and it is sometimes tempting to assume an obvious cause of death, when it readily presents itself, and stop looking for other competing causes of death. The case story presented in the article illustrates this dilemma. The first assumption of cause of death, which was based on results from bacteriology tests, proved to be wrong when the results from the forensic toxicology testing became available. This case also illustrates how post mortem computed tomography (PMCT) findings of radio opaque material in the stomach alerted the pathologist

  18. Radiation hormesis and the linear-no-threshold assumption

    CERN Document Server

    Sanders, Charles L

    2009-01-01

    Current radiation protection standards are based upon the application of the linear no-threshold (LNT) assumption, which considers that even very low doses of ionizing radiation can cause cancer. The radiation hormesis hypothesis, by contrast, proposes that low-dose ionizing radiation is beneficial. In this book, the author examines all facets of radiation hormesis in detail, including the history of the concept and mechanisms, and presents comprehensive, up-to-date reviews for major cancer types. It is explained how low-dose radiation can in fact decrease all-cause and all-cancer mortality an

  19. A Taxonomy of Latent Structure Assumptions for Probability Matrix Decomposition Models.

    Science.gov (United States)

    Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven

    2003-01-01

    Proposed a taxonomy of latent structure assumptions for probability matrix decomposition (PMD) that includes the original PMD model and a three-way extension of the multiple classification latent class model. Simulation study results show the usefulness of the taxonomy. (SLD)

  20. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    Science.gov (United States)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. This suggests that macroscopic constitutive relationships (e.g., the relative

  1. Consequences of Violated Equating Assumptions under the Equivalent Groups Design

    Science.gov (United States)

    Lyren, Per-Erik; Hambleton, Ronald K.

    2011-01-01

    The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…

  2. Basic concepts and assumptions behind the new ICRP recommendations

    International Nuclear Information System (INIS)

    Lindell, B.

    1979-01-01

    A review is given of some of the basic concepts and assumptions behind the current recommendations by the International Commission on Radiological Protection in ICRP Publications 26 and 28, which form the basis for the revision of the Basic Safety Standards jointly undertaken by IAEA, ILO, NEA and WHO. Special attention is given to the assumption of a linear, non-threshold dose-response relationship for stochastic radiation effects such as cancer and hereditary harm. The three basic principles of protection are discussed: justification of practice, optimization of protection and individual risk limitation. In the new ICRP recommendations particular emphasis is given to the principle of keeping all radiation doses as low as is reasonably achievable. A consequence of this is that the ICRP dose limits are now given as boundary conditions for the justification and optimization procedures rather than as values that should be used for purposes of planning and design. The fractional increase in total risk at various ages after continuous exposure near the dose limits is given as an illustration. The need for taking other sources, present and future, into account when applying the dose limits leads to the use of the commitment concept. This is briefly discussed as well as the new quantity, the effective dose equivalent, introduced by ICRP. (author)

  3. Has the "Equal Environments" assumption been tested in twin studies?

    Science.gov (United States)

    Eaves, Lindon; Foley, Debra; Silberg, Judy

    2003-12-01

    A recurring criticism of the twin method for quantifying genetic and environmental components of human differences is the necessity of the so-called "equal environments assumption" (EEA) (i.e., that monozygotic and dizygotic twins experience equally correlated environments). It has been proposed to test the EEA by stratifying twin correlations by indices of the amount of shared environment. However, relevant environments may also be influenced by genetic differences. We present a model for the role of genetic factors in niche selection by twins that may account for variation in indices of the shared twin environment (e.g., contact between members of twin pairs). Simulations reveal that stratification of twin correlations by amount of contact can yield spurious evidence of large shared environmental effects in some strata and even give false indications of genotype x environment interaction. The stratification approach to testing the equal environments assumption may be misleading and the results of such tests may actually be consistent with a simpler theory of the role of genetic factors in niche selection.

  4. Bell violation using entangled photons without the fair-sampling assumption.

    Science.gov (United States)

    Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton

    2013-05-09

    The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.
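
    The Eberhard form referred to above is a counts-based inequality that stays valid at low detection efficiency because undetected outcomes ('0') enter the bound directly. A sketch of the bookkeeping (the counts are made up, and one common writing of the inequality is used, which should be checked against Eberhard 1993 before serious use):

        def eberhard_J(n_pp_11, n_po_12, n_op_21, n_pp_22):
            """Eberhard-type Bell parameter from raw counts.

            Arguments are counts for setting pairs (alpha_i, beta_j):
              n_pp_11 : both detected '+' at settings (a1, b1)
              n_po_12 : Alice '+' and Bob undetected '0' at (a1, b2)
              n_op_21 : Alice undetected '0' and Bob '+' at (a2, b1)
              n_pp_22 : both detected '+' at (a2, b2)
            With equal numbers of trials per setting pair, local realism
            requires J <= 0; J > 0 signals a violation without invoking
            the fair-sampling assumption.
            """
            return n_pp_11 - n_po_12 - n_op_21 - n_pp_22

        # Hypothetical counts, equal trials per setting pair:
        print(eberhard_J(n_pp_11=10_000, n_po_12=3_000, n_op_21=3_100, n_pp_22=3_500))  # 400 > 0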

  5. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  6. Evaluation of the dosimetric consequences of adding a single asymmetric or MLC shaped field to a tangential breast radiotherapy technique

    International Nuclear Information System (INIS)

    Richmond, Neil D.; Turner, Robert N.; Dawes, Peter J.D.K.; Lambert, Geoff D.; Lawrence, Gill P.

    2003-01-01

    Fifteen consecutive patients had standard treatment plans generated using our departmental protocol, and two further plans were produced using either an asymmetric or an MLC-shaped additional field from each tangential direction. The mean percentage of the PTV receiving over 107% of the isocentre dose was 19.8% for the standard planned patients (95% confidence interval 12.3-27.4%). This was reduced to 6.0% for the asymmetric field technique (95% confidence interval 4.1-8.0%) and 5.3% for the MLC technique (95% confidence interval 2.8-7.7%). These high-dose volume reductions were therefore significant at the 95% confidence level. It was also concluded that both alternative planning techniques offer the greatest potential when the standard plan indicates that more than 20% of the PTV would receive greater than 107% of the prescribed dose. Under these circumstances, the segmented field techniques led to a reduction of at least 15 percentage points in this figure. It is this group of patients who stand to benefit most from application of these simple additional field techniques.

  7. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    Science.gov (United States)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last
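
    Schematically, the modified material derivative described above takes the following form for a tracer q (a hedged sketch consistent with the location-uncertainty literature, with a = σσᵀ the small-scale variance tensor; exact coefficients and any additional divergence corrections should be taken from the cited derivation):

        \mathrm{d}_t q
          + \Big[ \big( \mathbf{w} - \tfrac{1}{2} \nabla \cdot \mathbf{a} \big)\, \mathrm{d}t
          + \boldsymbol{\sigma}\, \mathrm{d}\mathbf{B}_t \Big] \cdot \nabla q
          = \tfrac{1}{2}\, \nabla \cdot \big( \mathbf{a}\, \nabla q \big)\, \mathrm{d}t ,
        \qquad \mathbf{a} = \boldsymbol{\sigma} \boldsymbol{\sigma}^{\mathsf{T}} ,

    with the three new terms being the corrected large-scale advection (w − ½∇·a), the multiplicative noise (σ dB_t)·∇q, and the possibly heterogeneous, anisotropic diffusion ½∇·(a∇q).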

  8. Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, E.; Denholm, P.; Margolis, R.

    2013-01-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.

  9. Public-private partnerships to improve primary healthcare surgeries: clarifying assumptions about the role of private provider activities.

    Science.gov (United States)

    Mudyarabikwa, Oliver; Tobi, Patrick; Regmi, Krishna

    2017-07-01

    Aim: To examine assumptions about public-private partnership (PPP) activities and their role in improving public procurement of primary healthcare surgeries. PPPs were developed to improve the quality of care and patient satisfaction. However, evidence of their effectiveness in delivering health benefits is limited. A qualitative study design was employed. A total of 25 interviews with public sector staff (n=23) and private sector managers (n=2) were conducted to understand their interpretations of assumptions in the activities of private investors and service contractors participating in Local Improvement Finance Trust (LIFT) partnerships. Realist evaluation principles were applied in the data analysis to interpret the findings. Six thematic areas of assumed health benefits were identified: (i) quality improvement; (ii) improved risk management; (iii) reduced procurement costs; (iv) increased efficiency; (v) community involvement; and (vi) sustainable investment. Primary Care Trusts that chose to procure their surgeries through LIFT were expected to support its implementation by providing an environment conducive for the private participants to achieve these benefits. Private participant activities were found to be based on a range of explicit and tacit assumptions perceived helpful in achieving government objectives for LIFT. The success of PPPs depended upon private participants' (i) capacity to assess how PPP assumptions added value to their activities, (ii) effectiveness in interpreting assumptions in their expected activities, and (iii) preparedness to align their business principles to government objectives for PPPs. They risked missing some of the expected benefits because of some factors constraining realization of the assumptions. The ways in which private participants preferred to carry out their activities also influenced the extent to which expected benefits were achieved. Giving more discretion to public than private participants over critical

  10. Sensitivity Analysis and Bounding of Causal Effects with Alternative Identifying Assumptions

    Science.gov (United States)

    Jo, Booil; Vinokur, Amiram D.

    2011-01-01

    When identification of causal effects relies on untestable assumptions regarding nonidentified parameters, sensitivity of causal effect estimates is often questioned. For proper interpretation of causal effect estimates in this situation, deriving bounds on causal parameters or exploring the sensitivity of estimates to scientifically plausible…

  11. Examination of Conservatism in Ground-level Source Release Assumption when Performing Consequence Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung-yeop; Lim, Ho-Gon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    One assumption frequently made is that of a ground-level source release. The user manual of the consequence analysis software HotSpot notes: 'If you cannot estimate or calculate the effective release height, the actual physical release height (height of the stack) or zero for ground-level release should be used. This will usually yield a conservative estimate, (i.e., larger radiation doses for all downwind receptors, etc).' This recommendation is reasonable from the standpoint of conservatism, but the effect of this assumption on the results of consequence analysis deserves quantitative examination. The source terms of the Fukushima Dai-ichi NPP accident have been estimated by several studies using inverse modeling, and one of the biggest sources of the differences between their results was the different effective source release heights assumed by each study. This underscores the importance of quantitatively examining the influence of release height. In this study, a sensitivity analysis of the effective release height of radioactive sources was performed, and the influence on the total effective dose was quantitatively examined. A difference of more than 20% persists even at longer distances when doses computed under the ground-level release assumption are compared with those assuming other effective plume heights. This means that the influence of the ground-level source assumption on latent cancer fatality estimates cannot be ignored. In addition, the assumption of ground-level release fundamentally precludes detailed analysis of plume diffusion from the effective plume height down to the ground, even though this influence is relatively smaller at longer distances. When the influence of surface roughness is additionally considered, the situation can be more serious: the ground-level dose can be strongly overestimated at short downwind distances at NPP sites with low surface roughness, such as the Barakah site in
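
    HotSpot's atmospheric transport is Gaussian-plume based, so the effect of release height can be illustrated directly: the ground-level centerline concentration scales as exp(−H²/2σ_z²), which is maximal for H = 0. A sketch (the dispersion-coefficient power laws below are rough Briggs-style neutral-stability approximations chosen for illustration, not HotSpot's internal parametrization):

        import numpy as np

        def ground_centerline_conc(Q, u, x_m, H_m):
            """Ground-level centerline concentration from a Gaussian plume.

            C(x,0,0) = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 sigma_z^2))
            Q : source strength, u : wind speed (m/s), x_m : downwind
            distance (m), H_m : effective release height (m).
            """
            sigma_y = 0.08 * x_m / np.sqrt(1 + 0.0001 * x_m)
            sigma_z = 0.06 * x_m / np.sqrt(1 + 0.0015 * x_m)
            return Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-H_m**2 / (2 * sigma_z**2))

        # Ratio of ground-level-release dose to a hypothetical 50 m release:
        for x in (500.0, 2000.0, 10000.0):
            ratio = ground_centerline_conc(1.0, 3.0, x, 0.0) / ground_centerline_conc(1.0, 3.0, x, 50.0)
            print(f"x = {x:7.0f} m: ground-level / 50 m release = {ratio:6.2f}")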

  12. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    Science.gov (United States)

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  13. Challenging the assumptions for thermal sensation scales

    DEFF Research Database (Denmark)

    Schweiker, Marcel; Fuchs, Xaver; Becker, Susanne

    2016-01-01

    Scales are widely used to assess the personal experience of thermal conditions in built environments. Most commonly, thermal sensation is assessed, mainly to determine whether a particular thermal condition is comfortable for individuals. A seven-point thermal sensation scale has been used extensively, which is suitable for describing a one-dimensional relationship between physical parameters of indoor environments and subjective thermal sensation. However, human thermal comfort is not merely a physiological but also a psychological phenomenon. Thus, it should be investigated how scales for its assessment could benefit from a multidimensional conceptualization. The common assumptions related to the usage of thermal sensation scales are challenged, empirically supported by two analyses. These analyses show that the relationship between temperature and subjective thermal sensation is non

  14. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    Science.gov (United States)

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
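
    The PBR algorithm itself is a one-line formula, PBR = N_min × ½R_max × F_r (the standard Wade 1998 form), and the check described above amounts to feeding that mortality back into an age-structured projection. A sketch with invented seabird-like vital rates (not the paper's parameter values):

        import numpy as np

        def pbr(n_min, r_max, f_r=0.5):
            """Potential Biological Removal (Wade 1998): N_min * (R_max / 2) * F_r."""
            return n_min * 0.5 * r_max * f_r

        # Hypothetical seabird-like Leslie matrix: juvenile, immature, adult classes.
        fecundity = [0.0, 0.0, 0.35]      # female chicks per female per year
        survival = [0.70, 0.85, 0.92]     # annual survival by class
        L = np.array([fecundity,
                      [survival[0], 0.0, 0.0],
                      [0.0, survival[1], survival[2]]])

        lam = max(np.linalg.eigvals(L).real)          # asymptotic growth rate
        N = 10_000.0
        removals = pbr(n_min=N, r_max=lam - 1.0)      # lambda - 1 as an R_max proxy
        print(f"lambda = {lam:.3f}, PBR = {removals:.0f} birds/yr")

        # Project with the PBR harvest applied each year to see whether the
        # population actually sustains it (the record's central question).
        n = np.array([3000.0, 3000.0, 4000.0])
        for _ in range(50):
            n = L @ n
            n *= max(1.0 - removals / n.sum(), 0.0)   # spread removals across classes
        print(f"population after 50 yr with PBR harvest: {n.sum():.0f}")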

  15. Super learning to hedge against incorrect inference from arbitrary parametric assumptions in marginal structural modeling.

    Science.gov (United States)

    Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J

    2013-08-01

    Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
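
    In the simplest point-treatment analogue of the approach described above, the IPW estimator weights each subject by the inverse of the estimated probability of the treatment actually received, and super learning simply replaces a single parametric propensity model with a cross-validated ensemble. A sketch using scikit-learn's stacking as a stand-in for a full super learner (synthetic data; the paper's longitudinal, time-dependent-confounding setting is more involved):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 5000
        x = rng.normal(size=(n, 3))                      # confounders (e.g., A1c history)
        p_a = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
        a = rng.binomial(1, p_a)                         # treatment actually received
        y = 2.0 * a + x[:, 0] + rng.normal(size=n)       # outcome, true effect = 2

        # Data-adaptive propensity score: a stacked ensemble instead of one
        # arbitrary parametric model.
        ps_model = StackingClassifier(
            estimators=[("lr", LogisticRegression()),
                        ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
            final_estimator=LogisticRegression(), cv=5)
        ps = ps_model.fit(x, a).predict_proba(x)[:, 1]

        w = a / ps + (1 - a) / (1 - ps)                  # inverse probability weights
        effect = (np.sum(w * a * y) / np.sum(w * a)
                  - np.sum(w * (1 - a) * y) / np.sum(w * (1 - a)))
        print(f"IPW estimate of treatment effect: {effect:.2f} (truth 2.00)")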

  16. Quantum information versus black hole physics: deep firewalls from narrow assumptions.

    Science.gov (United States)

    Braunstein, Samuel L; Pirandola, Stefano

    2018-07-13

    The prevalent view that evaporating black holes should simply be smaller black holes has been challenged by the firewall paradox. In particular, this paradox suggests that something different occurs once a black hole has evaporated to one-half its original surface area. Here, we derive variations of the firewall paradox by tracking the thermodynamic entropy within a black hole across its entire lifetime and extend it even to anti-de Sitter space-times. Our approach sweeps away many unnecessary assumptions, allowing us to demonstrate a paradox exists even after its initial onset (when conventional assumptions render earlier analyses invalid). The most natural resolution may be to accept firewalls as a real phenomenon. Further, the vast entropy accumulated implies a deep firewall that goes 'all the way down' in contrast with earlier work describing only a structure at the horizon. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).

  17. Quantum information versus black hole physics: deep firewalls from narrow assumptions

    Science.gov (United States)

    Braunstein, Samuel L.; Pirandola, Stefano

    2018-07-01

    The prevalent view that evaporating black holes should simply be smaller black holes has been challenged by the firewall paradox. In particular, this paradox suggests that something different occurs once a black hole has evaporated to one-half its original surface area. Here, we derive variations of the firewall paradox by tracking the thermodynamic entropy within a black hole across its entire lifetime and extend it even to anti-de Sitter space-times. Our approach sweeps away many unnecessary assumptions, allowing us to demonstrate a paradox exists even after its initial onset (when conventional assumptions render earlier analyses invalid). The most natural resolution may be to accept firewalls as a real phenomenon. Further, the vast entropy accumulated implies a deep firewall that goes `all the way down' in contrast with earlier work describing only a structure at the horizon. This article is part of a discussion meeting issue `Foundations of quantum mechanics and their impact on contemporary society'.

  18. The extended evolutionary synthesis: its structure, assumptions and predictions

    Science.gov (United States)

    Laland, Kevin N.; Uller, Tobias; Feldman, Marcus W.; Sterelny, Kim; Müller, Gerd B.; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-01-01

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the ‘extended evolutionary synthesis’ (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism–environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. PMID:26246559

  19. Drug policy in sport: hidden assumptions and inherent contradictions.

    Science.gov (United States)

    Smith, Aaron C T; Stewart, Bob

    2008-03-01

    This paper considers the assumptions underpinning the current drugs-in-sport policy arrangements. We examine the assumptions and contradictions inherent in the policy approach, paying particular attention to the evidence that supports different policy arrangements. We find that the current anti-doping policy of the World Anti-Doping Agency (WADA) contains inconsistencies and ambiguities. WADA's policy position is predicated upon four fundamental principles: first, the need for sport to set a good example; second, the necessity of ensuring a level playing field; third, the responsibility to protect the health of athletes; and fourth, the importance of preserving the integrity of sport. A review of the evidence, however, suggests that sport is a problematic institution when it comes to setting a good example for the rest of society. Neither is it clear that sport has an inherent or essential integrity that can only be sustained through regulation. Furthermore, it is doubtful that WADA's anti-doping policy is effective in maintaining a level playing field, or is the best means of protecting the health of athletes. The WADA anti-doping policy is based too heavily on principles of minimising drug use, and gives insufficient weight to the minimisation of drug-related harms. As a result, drug-related harms are being poorly managed in sport. We argue that anti-doping policy in sport would benefit from placing greater emphasis on a harm minimisation model.

  20. Unconditionally Secure and Universally Composable Commitments from Physical Assumptions

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Scafuro, Alessandra

    2013-01-01

    We present a constant-round unconditional black-box compiler that transforms any ideal (i.e., statistically-hiding and statistically-binding) straight-line extractable commitment scheme, into an extractable and equivocal commitment scheme, thus yielding UC-security [9]. We exemplify the u...... of unconditional UC-security with (malicious) PUFs and stateless tokens, our compiler can be instantiated with any ideal straight-line extractable commitment scheme, thus allowing the use of various setup assumptions which may better fit the application or the technology available.

  1. Washington International Renewable Energy Conference (WIREC) 2008 Pledges. Methodology and Assumptions Summary

    Energy Technology Data Exchange (ETDEWEB)

    Babiuch, Bill [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bilello, Daniel E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cowlin, Shannon C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wise, Alison [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2008-08-01

    This report describes the methodology and assumptions used by NREL in quantifying the potential CO2 reductions resulting from more than 140 governments, international organizations, and private-sector representatives pledging to advance the uptake of renewable energy.

  2. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
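
    Where only summarized data (mean ± standard deviation) are available, a log-normal distribution is typically characterized by moment matching on the reported moments. A minimal sketch of that standard conversion, which underlies the approximate BMD mentioned above (our illustration, not the authors' code; the function name is ours):

```python
import math

def lognormal_params_from_summary(mean, sd):
    """Convert a reported arithmetic mean and SD to the log-scale
    parameters (mu, sigma) of a log-normal distribution by moment
    matching (a standard result, not the paper's implementation)."""
    cv2 = (sd / mean) ** 2                      # squared coefficient of variation
    sigma = math.sqrt(math.log(1.0 + cv2))
    mu = math.log(mean) - 0.5 * sigma ** 2
    return mu, sigma

# Example: a dose group reported as 100 +/- 25 (mean +/- SD)
mu, sigma = lognormal_params_from_summary(100.0, 25.0)
print(mu, sigma)   # log-scale parameters used to approximate the BMD
```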

  3. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more

  4. Halo-independent direct detection analyses without mass assumptions

    International Nuclear Information System (INIS)

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2015-01-01

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ−σ_n plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min−g-tilde plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h-tilde(p_R). The entire family of conventional halo-independent g-tilde(v_min) plots for all DM masses is directly found from the single h-tilde(p_R) plot through a simple rescaling of axes. By considering results in h-tilde(p_R) space, one can determine if two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g-tilde(v_min) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.
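
    The rescaling rests on elastic-scattering kinematics: with reduced mass μ = m_χ m_N/(m_χ + m_N), the recoil momentum p_R = sqrt(2 m_N E_R) maps to v_min = p_R/(2μ), so one h-tilde(p_R) plot rescales into g-tilde(v_min) for any chosen mass. A hedged sketch of that axis map (natural units; the function name and example values are ours):

```python
def vmin_from_pR(p_R_MeV, m_chi_GeV, m_N_GeV):
    """Axis rescaling used in halo-independent analyses:
    v_min = p_R / (2 mu) for elastic scattering, with mu the
    DM-nucleus reduced mass. Natural units (c = 1); returns
    v_min as a fraction of c. Illustrative sketch only."""
    mu_GeV = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)
    return (p_R_MeV / 1000.0) / (2.0 * mu_GeV)

# Example: a ~23 MeV recoil momentum on silicon (m_N ~ 26.2 GeV)
# mapped to v_min for two candidate DM masses
for m_chi in (6.0, 9.0):
    print(m_chi, vmin_from_pR(23.0, m_chi, 26.2))   # in units of c
```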

  5. Marking and Moderation in the UK: False Assumptions and Wasted Resources

    Science.gov (United States)

    Bloxham, Sue

    2009-01-01

    This article challenges a number of assumptions underlying marking of student work in British universities. It argues that, in developing rigorous moderation procedures, we have created a huge burden for markers which adds little to accuracy and reliability but creates additional work for staff, constrains assessment choices and slows down…

  6. What's Love Got to Do with It? Rethinking Common Sense Assumptions

    Science.gov (United States)

    Trachman, Matthew; Bluestone, Cheryl

    2005-01-01

    One of the most basic tasks in introductory social science classes is to get students to reexamine their common sense assumptions concerning human behavior. This article introduces a shared assignment developed for a learning community that paired an introductory sociology and psychology class. The assignment challenges students to rethink the…

  7. Estimation of the energy loss at the blades in rowing: common assumptions revisited.

    Science.gov (United States)

    Hofmijster, Mathijs; De Koning, Jos; Van Soest, A J

    2010-08-01

    In rowing, power is inevitably lost as kinetic energy is imparted to the water during push-off with the blades. Power loss is estimated from reconstructed blade kinetics and kinematics. Traditionally, it is assumed that the oar is completely rigid and that force acts strictly perpendicular to the blade. The aim of the present study was to evaluate how reconstructed blade kinematics, kinetics, and average power loss are affected by these assumptions. A calibration experiment with instrumented oars and oarlocks was performed to establish relations between measured signals and oar deformation and blade force. Next, an on-water experiment was performed with a single female world-class rower rowing at constant racing pace in an instrumented scull. Blade kinematics, kinetics, and power loss under different assumptions (rigid versus deformable oars; absence or presence of a blade force component parallel to the oar) were reconstructed. Estimated power losses at the blades are 18% higher when parallel blade force is incorporated. Incorporating oar deformation affects reconstructed blade kinematics and instantaneous power loss, but has no effect on estimation of power losses at the blades. Assumptions on oar deformation and blade force direction have implications for the reconstructed blade kinetics and kinematics. Neglecting parallel blade forces leads to a substantial underestimation of power losses at the blades.
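
    The comparison reduces to which force and velocity components enter the instantaneous blade power. A schematic of the two bookkeeping choices (variable names and the example values are ours, not the instrumented-scull reconstruction pipeline):

```python
def blade_power(F_perp, v_perp, F_par=0.0, v_par=0.0):
    """Instantaneous power imparted to the water at the blade.
    The traditional estimate keeps only the force component
    perpendicular to the oar (F_par = 0); including the parallel
    component raised estimated losses by ~18% in this study.
    Schematic bookkeeping only, not the authors' method."""
    return F_perp * v_perp + F_par * v_par

# Traditional vs. extended estimate for one instant (made-up values)
print(blade_power(400.0, 1.2))              # perpendicular only, W
print(blade_power(400.0, 1.2, 60.0, 1.5))   # with parallel component, W
```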

  8. Assumptions of Corporate Social Responsibility as Competitiveness Factor

    Directory of Open Access Journals (Sweden)

    Zaneta Simanaviciene

    2017-09-01

    Full Text Available The purpose of this study was to examine the assumptions of corporate social responsibility (CSR as competitiveness factor in economic downturn. Findings indicate that factors affecting the quality of the micro-economic business environment, i.e., the sophistication of enterprise’s strategy and management processes, the quality of the human capital resources, the increase of product / service demand, the development of related and supporting sectors and the efficiency of natural resources, and competitive capacities of enterprise impact competitiveness at a micro-level. The outcomes suggest that the implementation of CSR elements, i.e., economic, environmental and social responsibilities, gives good opportunities to increase business competitiveness.

  9. Tangential flow ultrafiltration for detection of white spot syndrome virus (WSSV) in shrimp pond water.

    Science.gov (United States)

    Alavandi, S V; Ananda Bharathi, R; Satheesh Kumar, S; Dineshkumar, N; Saravanakumar, C; Joseph Sahaya Rajan, J

    2015-06-15

    Water represents the most important component in the white spot syndrome virus (WSSV) transmission pathway in aquaculture, yet very little information on it is available. Detection of viruses in water is a challenge, since their counts will often be too low to be detected by available methods such as polymerase chain reaction (PCR). To overcome this difficulty, viruses in water have to be concentrated from large volumes of water prior to detection. In this study, a total of 19 water samples from aquaculture ecosystems, comprising 3 creeks, 10 shrimp culture ponds, 3 shrimp broodstock tanks, 2 larval rearing tanks of shrimp hatcheries and a sample from a hatchery effluent treatment tank, were subjected to concentration of viruses by ultrafiltration (UF) using tangential flow filtration (TFF). Twenty to 100 L of water from these sources was concentrated to a final volume of 100 mL (200-1000 fold). The efficiency of recovery of WSSV by TFF ranged from 7.5 to 89.61%. WSSV could be successfully detected by PCR in the viral concentrates obtained from water samples of three shrimp culture ponds and one each of the shrimp broodstock tank, larval rearing tank, and shrimp hatchery effluent treatment tank, with WSSV copy numbers ranging from 6 to 157 mL(-1) by quantitative real-time PCR. The ultrafiltration virus concentration technique enables efficient detection of shrimp viral pathogens in water from aquaculture facilities. It could be used as an important tool to understand the efficacy of biosecurity protocols adopted in an aquaculture facility and to carry out epidemiological investigations of aquatic viral pathogens. Copyright © 2015 Elsevier B.V. All rights reserved.
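
    The figures reported above amount to two simple ratios: the fold-concentration of the TFF step and the percent recovery of virus across it. A small sketch (function names are illustrative):

```python
def fold_concentration(input_volume_L, retentate_volume_mL):
    """Fold-concentration achieved by tangential flow filtration."""
    return input_volume_L * 1000.0 / retentate_volume_mL

def percent_recovery(copies_recovered, copies_input):
    """Recovery efficiency of WSSV across the concentration step."""
    return 100.0 * copies_recovered / copies_input

print(fold_concentration(20, 100), fold_concentration(100, 100))  # 200, 1000
print(percent_recovery(7.5e3, 1e5))   # 7.5%, the low end reported
```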

  10. Who needs the assumption of opportunistic behavior? Transaction cost economics does not!

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2000-01-01

    The assumption of opportunistic behavior, familiar from transaction cost economics, has been and remains highly controversial. But opportunistic behavior, albeit undoubtedly an extremely important form of motivation, is not a necessary condition for the contractual problems studied by transaction...

  11. Tale of Two Courthouses: A Critique of the Underlying Assumptions in Chronic Disease Self-Management for Aboriginal People

    Directory of Open Access Journals (Sweden)

    Isabelle Ellis

    2009-12-01

    This article reviews the assumptions that underpin the commonly implemented Chronic Disease Self-Management models: namely, that there is a clear set of instructions for patients to comply with, which all health care providers agree with; and that the health care provider and the patient agree with the chronic disease self-management plan that was developed as part of a consultation. These assumptions are evaluated for their validity in the remote health care context, particularly for Aboriginal people. The assumptions were found to lack validity in this context; therefore, an alternative model to enhance chronic disease care is proposed.

  12. An Evaluation of 700 mb Aircraft Reconnaissance Data for Selected Northwest Pacific Tropical Cyclones.

    Science.gov (United States)

    1983-09-01

    Research flights into both Atlantic and northwest Pacific tropical cyclones. Information provided by these studies expanded and, in some cases, altered... This assumption implies that the curl of the tangential frictional drag is equal to zero. This further implies that the partial derivative of the sur... (20) at 30 NM, prior to the period of most rapid deepening, is reflected at 60 NM, and possibly at 90 NM. In the case of super typhoon Tip (Fig

  13. Stable isotopes and elasmobranchs: tissue types, methods, applications and assumptions.

    Science.gov (United States)

    Hussey, N E; MacNeil, M A; Olin, J A; McMeans, B C; Kinney, M J; Chapman, D D; Fisk, A T

    2012-04-01

    Stable-isotope analysis (SIA) can act as a powerful ecological tracer with which to examine diet, trophic position and movement, as well as more complex questions pertaining to community dynamics and feeding strategies or behaviour among aquatic organisms. With major advances in the understanding of the methodological approaches and assumptions of SIA through dedicated experimental work in the broader literature coupled with the inherent difficulty of studying typically large, highly mobile marine predators, SIA is increasingly being used to investigate the ecology of elasmobranchs (sharks, skates and rays). Here, the current state of SIA in elasmobranchs is reviewed, focusing on available tissues for analysis, methodological issues relating to the effects of lipid extraction and urea, the experimental dynamics of isotopic incorporation, diet-tissue discrimination factors, estimating trophic position, diet and mixing models and individual specialization and niche-width analyses. These areas are discussed in terms of assumptions made when applying SIA to the study of elasmobranch ecology and the requirement that investigators standardize analytical approaches. Recommendations are made for future SIA experimental work that would improve understanding of stable-isotope dynamics and advance their application in the study of sharks, skates and rays. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  14. SOME CONCEPTIONS AND MISCONCEPTIONS ON REALITY AND ASSUMPTIONS IN FINANCIAL ACCOUNTING

    OpenAIRE

    Stanley C. W. Salvary

    2005-01-01

    This paper addresses two problematic issues arising from the importation of terms into financial accounting: (1) the nature of economic reality; and (2) the role of assumptions. These two issues have stirred a lot of controversy relating to financial accounting measurements and affect attestation reports. This paper attempts to provide conceptual clarity on these two issues.

  15. Parametric Raman crystalline anti-Stokes laser at 503 nm with collinear beam interaction at tangential phase matching

    Science.gov (United States)

    Smetanin, S. N.; Jelínek, M.; Kubeček, V.

    2017-07-01

    Stimulated Raman scattering in crystals can be used for single-pass frequency conversion to Stokes-shifted wavelengths. The anti-Stokes shift can also be achieved, but the phase-matching condition has to be fulfilled because of the parametric four-wave-mixing process. To widen the angular tolerance of four-wave mixing and to obtain high conversion efficiency into the anti-Stokes wave, we developed a new scheme of a parametric Raman anti-Stokes laser at 503 nm with phase-matched collinear beam interaction of orthogonally polarized Raman components in calcite oriented at the phase-matching angle under 532 nm, 20 ps laser excitation. The excitation laser beam was split into two orthogonally polarized components entering the calcite at certain incidence angles to fulfill nearly collinear phase matching and also to compensate for the walk-off of extraordinary waves for collinear beam interaction. The phase matching of the parametric Raman interaction is tangential and insensitive to angular mismatch if the Poynting vectors of the biharmonic pump and the parametrically generated (anti-Stokes) waves are collinear. For the first time, this allowed us to achieve experimentally the highest conversion efficiency into the anti-Stokes wave (503 nm): up to 30% from the probe wave and up to 3.5% from both pump and probe waves in a single-pass picosecond parametric calcite Raman laser. The highest anti-Stokes pulse energy was 1.4 μJ.
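
    The 503 nm line follows directly from adding the calcite Raman shift (about 1086 cm^-1, the ν1 carbonate mode) to the 532 nm pump wavenumber. A quick arithmetic check (our illustration, not the authors' code):

```python
# Estimate Stokes / anti-Stokes wavelengths from the pump wavelength
# and the Raman shift of calcite (~1086 cm^-1). Illustrative check of
# the reported 503 nm anti-Stokes line.

def shifted_wavelength_nm(pump_nm, raman_shift_cm1, anti_stokes=True):
    pump_cm1 = 1e7 / pump_nm                    # pump wavenumber in cm^-1
    sign = 1.0 if anti_stokes else -1.0
    return 1e7 / (pump_cm1 + sign * raman_shift_cm1)

print(shifted_wavelength_nm(532.0, 1086.0))         # ~503 nm anti-Stokes
print(shifted_wavelength_nm(532.0, 1086.0, False))  # ~565 nm first Stokes
```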

  16. Optimal Positioning of the Nipple-Areola Complex in Men Using the Mohrenheim-Estimated-Tangential-Tracking-Line (METT-Line): An Intuitive Approach.

    Science.gov (United States)

    Mett, Tobias R; Krezdorn, Nicco; Luketina, Rosalia; Boyce, Maria K; Henseler, Helga; Ipaktchi, Ramin; Vogt, Peter M

    2017-12-01

    The reconstruction of the body shape after post-bariatric surgery or high-grade gynecomastia involves, besides skin tightening, the repositioning of apparent anatomical landmarks, which the surgeon usually defines during preoperative planning. In particular, the positions of the nipple-areola complexes (NAC) should contribute to a gender-appropriate appearance. While for the female breast numerous methods have been developed to determine the optimal position of the NACs, there are only a few metric and often impractical algorithms for positioning the nipples and areolae in men. With this study, we show the accuracy of intuitive positioning of the nipple-areola complex in men. From a pre-examined and measured group of 10 young, healthy men, six subjects were selected whose chest and trunk dimensions corresponded to the average of known data from the literature. The photographed frontal views were retouched in two steps. Initially, only the NACs were removed and the chest contours were left. In a second step, all contours and the navel were blurred. These pictures were submitted to resident and consultant plastic surgeons, who were asked to draw the missing NACs without any tools. The original positions of the nipples were compared with the drawn markings. Furthermore, the results were compared between the contoured and completely retouched pictures and between the residents and consultants. A total of 8 consultants and 7 residents were included. In both the contoured and completely retouched images, a significant deviation of the marked positions of the missing features was found. The height of the NAC was determined somewhat more precisely than the vertical position. There was no significant difference between the contoured and completely retouched images, with a slightly more accurate tendency on the contoured images. In comparison with professional experience, the consultants were tangentially more precise, but

  17. Questioning the "big assumptions". Part I: addressing personal contradictions that impede professional development.

    Science.gov (United States)

    Bowe, Constance M; Lahey, Lisa; Armstrong, Elizabeth; Kegan, Robert

    2003-08-01

    The ultimate success of recent medical curriculum reforms is, in large part, dependent upon the faculty's ability to adopt and sustain new attitudes and behaviors. However, like many New Year's resolutions, sincere intent to change may be short lived and followed by a discouraging return to old behaviors. Failure to sustain the initial resolve to change can be misinterpreted as a lack of commitment to one's original goals and eventually lead to greater effort expended in rationalizing the status quo rather than changing it. The present article outlines how a transformative process that has proven to be effective in managing personal change, Questioning the Big Assumptions, was successfully used in an international faculty development program for medical educators to enhance individual personal satisfaction and professional effectiveness. This process systematically encouraged participants to explore and proactively address currently operative mechanisms that could stall their attempts to change at the professional level. The applications of the Big Assumptions process in faculty development helped individuals to recognize and subsequently utilize unchallenged and deep-rooted personal beliefs to overcome unconscious resistance to change. This approach systematically led participants away from circular griping about what was not right in their current situation to identifying the actions that they needed to take to realize their individual goals. By thoughtful testing of personal Big Assumptions, participants designed behavioral changes that could be broadly supported and, most importantly, sustained.

  18. Zonally averaged model of dynamics, chemistry and radiation for the atmosphere

    Science.gov (United States)

    Tung, K. K.

    1985-01-01

    A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations on a sphere, taking advantage of the more direct relationship between the mean meridional circulation and diabatic heating rate which is available in isentropic coordinates. Possible differences between results of nongeostrophic theory and the commonly used geostrophic formulation are discussed concerning: (1) the role of eddy forcing of the diabatic circulation, and (2) the nonlinear nearly inviscid limit vs the geostrophic limit. Problems associated with the traditional Rossby number scaling in quasi-geostrophic formulations are pointed out and an alternate, more general scaling based on the smallness of mean meridional to zonal velocities for a rotating planet is suggested. Such a scaling recovers the geostrophic balanced wind relationship for the mean zonal flow but reveals that the mean meridional velocity is in general ageostrophic.

  19. Climate Change: Implications for the Assumptions, Goals and Methods of Urban Environmental Planning

    Directory of Open Access Journals (Sweden)

    Kristina Hill

    2016-12-01

    As a result of increasing awareness of the implications of global climate change, shifts are becoming necessary and apparent in the assumptions, concepts, goals and methods of urban environmental planning. This review will present the argument that these changes represent a genuine paradigm shift in urban environmental planning. Reflection and action to develop this paradigm shift is critical now and in the next decades, because environmental planning for cities will only become more urgent as we enter a new climate period. The concepts, methods and assumptions that urban environmental planners have relied on in previous decades to protect people, ecosystems and physical structures are inadequate if they do not explicitly account for a rapidly changing regional climate context, specifically from a hydrological and ecological perspective. The over-arching concept of spatial suitability that guided planning in most of the 20th century has already given way to concepts that address sustainability, recognizing the importance of temporality. Quite rapidly, the concept of sustainability has been replaced in many planning contexts by the priority of establishing resilience in the face of extreme disturbance events. Now even this concept of resilience is being incorporated into a novel concept of urban planning as a process of adaptation to permanent, incremental environmental changes. This adaptation concept recognizes the necessity for continued resilience to extreme events, while acknowledging that permanent changes are also occurring as a result of trends that have a clear direction over time, such as rising sea levels. Similarly, the methods of urban environmental planning have relied on statistical data about hydrological and ecological systems that will not adequately describe these systems under a new climate regime. These methods are beginning to be replaced by methods that make use of early warning systems for regime shifts, and process

  20. Scenario Analysis In The Calculation Of Investment Efficiency–The Problem Of Formulating Assumptions

    Directory of Open Access Journals (Sweden)

    Dittmann Iwona

    2015-09-01

    This article concerns the problem of formulating assumptions in scenario analysis for investments which consist of renting out an apartment. The article attempts to indicate the foundations for the formulation of assumptions on the basis of observed retrospective regularities. It includes theoretical considerations regarding scenario design, as well as the results of studies on the past behaviour of the quantities that determine, or allow closer estimation of, the values of the individual explanatory variables for a chosen measure of investment profitability (MIRRFCFE). The dynamics of, and correlation between, the variables were studied. The research was based on quarterly data from local residential real estate markets in Poland (in the six largest cities) in the years 2006-2014, as well as on data from the financial market.
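
    MIRRFCFE is a modified internal rate of return computed on free cash flows to equity. A generic textbook MIRR is shown below only to make the profitability measure concrete; the cash flows and rates are invented, and this is not the authors' calculation:

```python
def mirr(cash_flows, finance_rate, reinvest_rate):
    """Modified internal rate of return for periodic cash flows
    (index 0 = initial outlay): positive flows are compounded
    forward at the reinvestment rate, negative flows discounted
    at the finance rate. Rates are per period."""
    n = len(cash_flows) - 1
    fv_pos = sum(cf * (1 + reinvest_rate) ** (n - t)
                 for t, cf in enumerate(cash_flows) if cf > 0)
    pv_neg = sum(cf / (1 + finance_rate) ** t
                 for t, cf in enumerate(cash_flows) if cf < 0)
    return (fv_pos / -pv_neg) ** (1.0 / n) - 1.0

# Example: buy an apartment, collect rent, sell at the end (made-up values)
print(mirr([-300000, 15000, 15000, 15000, 340000], 0.05, 0.03))  # ~6.6%
```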

  1. Common-Sense Chemistry: The Use of Assumptions and Heuristics in Problem Solving

    Science.gov (United States)

    Maeyer, Jenine Rachel

    2013-01-01

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build…

  2. IRT models with relaxed assumptions in eRm: A manual-like instruction

    Directory of Open Access Journals (Sweden)

    REINHOLD HATZINGER

    2009-03-01

    Linear logistic models with relaxed assumptions (LLRA), as introduced by Fischer (1974), are a flexible tool for the measurement of change for dichotomous or polytomous responses. As opposed to the Rasch model, assumptions on the dimensionality of items, their mutual dependencies and the distribution of the latent trait in the population of subjects are relaxed. Conditional maximum likelihood estimation allows for inference about treatment, covariate or trend effect parameters without taking the subjects' latent trait values into account. In this paper we show how LLRAs based on the LLTM, LRSM and LPCM can be used to answer various questions about the measurement of change and how they can be fitted in R using the eRm package. A number of small didactic examples are provided that can easily be used as templates for real data sets. All data files used in this paper are available from http://eRm.R-Forge.R-project.org/

  3. Investigation of assumptions underlying current safety guidelines on EM-induced nerve stimulation

    Science.gov (United States)

    Neufeld, Esra; Vogiatzis Oikonomidis, Ioannis; Iacono, Maria Ida; Angelone, Leonardo M.; Kainz, Wolfgang; Kuster, Niels

    2016-06-01

    An intricate network of a variety of nerves is embedded within the complex anatomy of the human body. Although nerves are shielded from unwanted excitation, they can still be stimulated by external electromagnetic sources that induce strongly non-uniform field distributions. Current exposure safety standards designed to limit unwanted nerve stimulation are based on a series of explicit and implicit assumptions and simplifications. This paper demonstrates the applicability of functionalized anatomical phantoms with integrated coupled electromagnetic and neuronal dynamics solvers for investigating the impact of magnetic resonance exposure on nerve excitation within the full complexity of the human anatomy. The impact of neuronal dynamics models, temperature and local hot-spots, nerve trajectory and potential smoothing, anatomical inhomogeneity, and pulse duration on nerve stimulation was evaluated. As a result, multiple assumptions underlying current safety standards are questioned. It is demonstrated that coupled EM-neuronal dynamics modeling involving realistic anatomies is valuable to establish conservative safety criteria.

  4. Wartime Paris, cirrhosis mortality, and the ceteris paribus assumption.

    Science.gov (United States)

    Fillmore, Kaye Middleton; Roizen, Ron; Farrell, Michael; Kerr, William; Lemmens, Paul

    2002-07-01

    This article critiques the ceteris paribus assumption, which tacitly sustains the epidemiologic literature's inference that the sharp decline in cirrhosis mortality observed in Paris during the Second World War derived from a sharp constriction in wine consumption. Paris's wartime circumstances deviate substantially from the "all else being equal" assumption, and at least three other hypotheses for the cirrhosis decline may be contemplated. Historical and statistical review. Wartime Paris underwent tumultuous changes. Wine consumption did decline, but there were, as well, a myriad of other changes in diet and life experience, many involving new or heightened hardships, nutritional, experiential, institutional, health and mortality risks. Three competing hypotheses are presented: (1) A fraction of the candidates for cirrhosis mortality may have fallen to more sudden forms of death; (2) alcoholics, heavy drinkers and Paris's clochard subpopulation may have been differentially likely to become removed from the city's wartime population, whether by self-initiated departure, arrest and deportation, or death from other causes, even murder; and (3) there was mismeasurement in the cirrhosis mortality decline. The alcohol-cirrhosis connection provided the template for the alcohol research effort (now more than 20 years old) aimed at re-establishing scientific recognition of alcohol's direct alcohol-problems-generating associations and causal responsibilities. In a time given to reports of weaker associations of the alcohol-cirrhosis connection, the place and importance of the Paris curve in the wider literature, as regards that connection, remains. For this reason, the Paris findings should be subjected to as much research scrutiny as they undoubtedly deserve.

  5. Sensitivity of probabilistic MCO water content estimates to key assumptions

    International Nuclear Information System (INIS)

    DUNCAN, D.R.

    1999-01-01

    Sensitivity of probabilistic multi-canister overpack (MCO) water content estimates to key assumptions is evaluated, with emphasis on the largest non-cladding film contributors: water borne by particulates adhering to damage sites, and water borne by canister particulate. Calculations considered different choices of damage-state degree of independence, different choices of percentile for reference high inputs, three types of input probability density functions (pdfs): triangular, log-normal, and Weibull, and the number of scrap baskets in an MCO.
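
    A minimal Monte Carlo sketch of how sensitivity to the input pdf choice can be probed: the same contributor is sampled under the three distribution families named above and the resulting spread compared. All parameter values here are invented placeholders, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Placeholder parameterizations of one water-content contributor (kg)
triangular = rng.triangular(0.0, 0.2, 1.0, N)                    # low/mode/high
lognormal = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=N)
weibull = 0.25 * rng.weibull(1.5, N)                             # scale * Weibull(k)

for name, x in [("triangular", triangular),
                ("log-normal", lognormal),
                ("Weibull", weibull)]:
    print(name, x.mean(), np.percentile(x, 95))   # compare mean and upper tail
```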

  6. Evaluation of coat uniformity and taste-masking efficiency of irregular-shaped drug particles coated in a modified tangential spray fluidized bed processor.

    Science.gov (United States)

    Xu, Min; Heng, Paul Wan Sia; Liew, Celine Valeria

    2015-01-01

    To explore the feasibility of coating irregular-shaped drug particles in a modified tangential spray fluidized bed processor (FS processor) and evaluate the coated particles for their coat uniformity and taste-masking efficiency. Paracetamol particles were coated to 20%, w/w weight gain using a taste-masking polymer insoluble in neutral and basic pH but soluble in acidic pH. In-process samples (5, 10 and 15%, w/w coat) and the resultant coated particles (20%, w/w coat) were collected to monitor the changes in their physicochemical attributes. After coating to 20%, w/w coat weight gain, the usable yield was 81% with minimal agglomeration (coat compared with the uncoated particles. A 15%, w/w coat was optimal for inhibiting drug release in salivary pH with subsequent fast dissolution in simulated gastric pH. The FS processor shows promise for direct coating of irregular-shaped drug particles with wide size distribution. The coated particles with 15% coat were sufficiently taste masked and could be useful for further application in orally disintegrating tablet platforms.

  7. What Were We Thinking? Five Erroneous Assumptions That Have Fueled Specialized Interventions for Adolescents Who Have Sexually Offended

    Science.gov (United States)

    Worling, James R.

    2013-01-01

    Since the early 1980s, five assumptions have influenced the assessment, treatment, and community supervision of adolescents who have offended sexually. In particular, interventions with this population have been informed by the assumptions that these youth are (i) deviant, (ii) delinquent, (iii) disordered, (iv) deficit-ridden, and (v) deceitful.…

  8. The effects of behavioral and structural assumptions in artificial stock market

    Science.gov (United States)

    Liu, Xinghua; Gregor, Shirley; Yang, Jianmei

    2008-04-01

    Recent literature has developed the conjecture that important statistical features of stock price series, such as the fat-tails phenomenon, may depend mainly on the market microstructure. This conjecture motivated us to investigate the roles of both the market microstructure and agent behavior with respect to high-frequency returns and daily returns. We developed two simple models to investigate this issue. The first is a stochastic model with a clearing house microstructure and a population of zero-intelligence agents. The second has more behavioral assumptions based on the Minority Game and also has a clearing house microstructure. With the first model we found that a characteristic of the clearing house microstructure, namely the clearing frequency, can explain the fat-tail, excess volatility and autocorrelation phenomena of high-frequency returns. However, this feature does not cause the same phenomena in daily returns, so the stylized facts of daily returns depend mainly on the agents' behavior. With the second model we investigated the effects of behavioral assumptions on daily returns. Our study indicates that the aspects responsible for generating the stylized facts of high-frequency returns and daily returns are different.
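
    A toy version of the first model: zero-intelligence agents submit random unit orders, and a clearing house sets the price from the accumulated imbalance every k steps, so the role of clearing frequency can be probed directly. Everything here (the linear price-impact rule, parameter names and values) is our simplification, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_returns(n_agents=100, steps=10_000, clearing_interval=1,
                     impact=0.001):
    """Toy clearing-house market with zero-intelligence agents.
    Orders accumulate between clearings; the log-price moves in
    proportion to the net order imbalance at each clearing event."""
    log_price, returns, imbalance = 0.0, [], 0.0
    for t in range(1, steps + 1):
        imbalance += rng.choice([-1, 1], size=n_agents).sum()
        if t % clearing_interval == 0:          # clearing house event
            r = impact * imbalance
            log_price += r
            returns.append(r)
            imbalance = 0.0
    return np.array(returns)

# Compare return volatility and kurtosis across clearing frequencies
for k in (1, 10, 50):
    r = simulate_returns(clearing_interval=k)
    kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2
    print(k, r.std(), kurtosis)
```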

  9. Questioning the "big assumptions". Part II: recognizing organizational contradictions that impede institutional change.

    Science.gov (United States)

    Bowe, Constance M; Lahey, Lisa; Kegan, Robert; Armstrong, Elizabeth

    2003-08-01

    Well-designed medical curriculum reforms can fall short of their primary objectives during implementation when unanticipated or unaddressed organizational resistance surfaces. This typically occurs if the agents for change ignore faculty concerns during the planning stage or when the provision of essential institutional safeguards to support new behaviors is neglected. Disappointing outcomes in curriculum reforms then result in the perpetuation of or reversion to the status quo despite the loftiest of goals. Institutional resistance to change, much like that observed during personal development, does not necessarily indicate a communal lack of commitment to the organization's newly stated goals. It may reflect the existence of competing organizational objectives that must be addressed before substantive advances in a new direction can be accomplished. The authors describe how the Big Assumptions process (see previous article) was adapted and applied at the institutional level during a school of medicine's curriculum reform. Reform leaders encouraged faculty participants to articulate their reservations about considered changes to provide insights into the organization's competing commitments. The line of discussion provided an opportunity for faculty to appreciate the gridlock that existed until appropriate tests of the school's long-held Big Assumptions could be conducted. The Big Assumptions process proved useful in moving faculty groups to recognize and question the validity of unchallenged institutional beliefs that were likely to undermine efforts toward change. The process also allowed the organization to put essential institutional safeguards in place that ultimately ensured that substantive reforms could be sustained.

  10. Bayou Corne sinkhole : control measurements of State Highway 70 in Assumption Parish, Louisiana.

    Science.gov (United States)

    2014-01-01

    This project measured and assessed the surface stability of the portion of LA Highway 70 that is : potentially vulnerable to the Assumption Parish sinkhole. Using Global Positioning Systems (GPS) : enhanced by a real-time network (RTN) of continuousl...

  11. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    NARCIS (Netherlands)

    Ernst, Anja F.; Albers, Casper J.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated

  12. Under What Assumptions Do Site-by-Treatment Instruments Identify Average Causal Effects?

    Science.gov (United States)

    Reardon, Sean F.; Raudenbush, Stephen W.

    2013-01-01

    The increasing availability of data from multi-site randomized trials provides a potential opportunity to use instrumental variables methods to study the effects of multiple hypothesized mediators of the effect of a treatment. We derive nine assumptions needed to identify the effects of multiple mediators when using site-by-treatment interactions…

  13. Reflections on assumptions of energy policy. Viewpoint of a sceptical observer

    International Nuclear Information System (INIS)

    Taczanowski, S.; Pohorecki, W.

    2000-01-01

    The assumptions of Polish energy policy up to 2020 are critically assessed. The availability of energy sources and predicted fuel prices for the period of interest are discussed, taking both fossil fuels and uranium into account. On this basis, it is concluded that rejecting the nuclear option in Poland's plans for energy development up to 2020 appears to be a serious mistake.

  14. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  15. Local characteristics of the nocturnal boundary layer in response to external pressure forcing

    NARCIS (Netherlands)

    van der Linden, S.J.A.; Baas, P.; van Hooft, J.A.; van Hooijdonk, I.G.S.; Bosveld, F.C.; van de Wiel, B.J.H.

    2017-01-01

    Geostrophic wind speed data, derived from pressure observations, are used in combination with tower measurements to investigate the nocturnal stable boundary layer at Cabauw (The Netherlands). Since the geostrophic wind speed is not directly influenced by local nocturnal stability, it may be

  16. Mesoscale process-induced variation of the West India coastal current during the winter monsoon

    Digital Repository Service at National Institute of Oceanography (India)

    Jineesh, V.K.; Muraleedharan, K.R.; Lix, J.K.; Revichandran, C.; HareeshKumar, P.V.; NaveenKumar, K.R.

    to an increase in sea level amplitude of up to 28 cm. Off southwest India, the poleward flow is along the western flank of this anticyclonic eddy, and the geostrophic current completes the circulation around the eddy. The eastward component of the geostrophic current...

  17. Association Between Tangential Beam Treatment Parameters and Cardiac Abnormalities After Definitive Radiation Treatment for Left-Sided Breast Cancer

    International Nuclear Information System (INIS)

    Correa, Candace R.; Das, Indra J.; Litt, Harold I.; Ferrari, Victor; Hwang, W.-T.; Solin, Lawrence J.; Harris, Eleanor E.

    2008-01-01

    Purpose: To examine the association between radiation treatment (RT) parameters, cardiac diagnostic test abnormalities, and clinical cardiovascular diagnoses among patients with left-sided breast cancer after breast conservation treatment with tangential beam RT. Methods and Materials: The medical records of 416 patients treated between 1977 and 1995 with RT for primary left-sided breast cancer were reviewed for myocardial perfusion imaging and echocardiograms. Sixty-two patients (62/416, 15%) underwent these cardiac diagnostic tests for cardiovascular symptoms and were selected for further study. Central lung distance and maximum heart width and length in the treatment field were determined for each patient. Medical records were reviewed for cardiovascular diagnoses and evaluation of cardiac risk factors. Results: At a median of 12 years post-RT, the incidence of cardiac diagnostic test abnormalities among symptomatic left-sided irradiated women was significantly higher than the predicted incidence of cardiovascular disease in the patient population, 6/62 (9%) predicted vs. 24/62 (39%) observed, p < 0.001. As compared with patients with normal tests, patients with cardiac diagnostic test abnormalities had a larger median central lung distance (2.6 cm vs. 2.2 cm, p = 0.01). Similarly, patients with vs. without congestive heart failure had a larger median central lung distance (2.8 cm vs. 2.3 cm, p = 0.008). Conclusions: Contemporary RT for early breast cancer may be associated with a small, but potentially avoidable, risk of cardiovascular morbidity that is associated with treatment technique.

  18. Numerical investigation on the flow, combustion, and NOx emission characteristics in a 660 MWe tangential firing ultra-supercritical boiler

    Directory of Open Access Journals (Sweden)

    Wenjing Sun

    2016-02-01

    A three-dimensional numerical simulation was carried out to study the pulverized-coal combustion process in a tangentially fired ultra-supercritical boiler. The realizable k-ε model for the gas phase coupled with the discrete phase model for coal particles, the P-1 radiation model for radiation, the two-competing-rates model for devolatilization, and the kinetics/diffusion-limited model for the combustion process are considered. The characteristics of the flow field, particle motion, temperature distribution, species components, and NOx emissions were numerically investigated. The good agreement between measurements and predictions implies that the applied simulation models are appropriate for modeling commercial-scale coal boilers. It is found that an ideal turbulent flow and particle trajectory can be observed in this unconventional pulverized-coal furnace. With the application of over-fire air and additional air, lean-oxygen combustion takes place near the burner sets region and a higher temperature at the furnace exit is acquired for better heat transfer. Within the limits of the secondary air, a steadier combustion process is achieved as well as a reduction of NOx. Furthermore, the influences of the secondary air, over-fire air, and additional air on the NOx emissions are obtained. The numerical results reveal that NOx formation attenuates with a decrease in the secondary air ratio (γ2nd) and in the ratio of the additional air to the over-fire air (γAA/γOFA) within the studied limits.

  19. Questioning the foundations of physics which of our fundamental assumptions are wrong?

    CERN Document Server

    Foster, Brendan; Merali, Zeeya

    2015-01-01

    The essays in this book look at ways in which the foundations of physics might need to be changed in order to make progress towards a unified theory. They are based on the prize-winning essays submitted to the FQXi essay competition “Which of Our Basic Physical Assumptions Are Wrong?”, which drew over 270 entries. As Nobel Laureate physicist Philip W. Anderson realized, the key to understanding nature’s reality is not anything “magical”, but the right attitude, “the focus on asking the right questions, the willingness to try (and to discard) unconventional answers, the sensitive ear for phoniness, self-deception, bombast, and conventional but unproven assumptions.” The authors of the eighteen prize-winning essays have, where necessary, adapted their essays for the present volume so as to (a) incorporate the community feedback generated in the online discussion of the essays, (b) add new material that has come to light since their completion and (c) to ensure accessibility to a broad audience of re...

  20. Agenda dissonance: immigrant Hispanic women's and providers' assumptions and expectations for menopause healthcare.

    Science.gov (United States)

    Esposito, Noreen

    2005-02-01

    This focus group study examined immigrant Hispanic women's and providers' assumptions about and expectations of healthcare encounters in the context of menopause. Four groups of immigrant women from Central America and one group of healthcare providers were interviewed in Spanish and English, respectively. The women wanted provider-initiated, individualized anticipatory guidance about menopause, acknowledgement of their symptoms, and mainstream medical treatment for disruptive symptoms. Providers believed that menopause was an unimportant health issue for immigrant women and was overshadowed by concerns about high-risk medical problems, such as diabetes, heart disease and HIV prevention. The women expected a healthcare encounter to be patient centered, social, and complete in itself. Providers expected an encounter to be businesslike and one part of multiple visit care. Language and lack of time were barriers cited by all. Dissonance between patient-provider assumptions and expectations around issues of healthcare leads to missed opportunities for care.

  1. Temporal Distinctiveness in Task Switching: Assessing the Mixture-Distribution Assumption

    Directory of Open Access Journals (Sweden)

    James A Grange

    2016-02-01

    In task switching, increasing the response-cue interval (RCI) has been shown to reduce the switch cost. This has been attributed to a time-based decay process influencing the activation of memory representations of tasks (task-sets). Recently, an alternative account based on interference rather than decay has been successfully applied to these data (Horoufchin et al., 2011). In this account, variation of the RCI is thought to influence the temporal distinctiveness (TD) of episodic traces in memory, thus affecting their retrieval probability. This can affect performance, as retrieval probability influences response time: if retrieval succeeds, responding is fast due to positive priming; if retrieval fails, responding is slow, due to having to perform the task via a slow algorithmic process. This account, and a recent formal model (Grange & Cross, 2015), make the strong prediction that all RTs are a mixture of one of two processes: a fast process when retrieval succeeds, and a slow process when retrieval fails. The present paper assesses the evidence for this mixture-distribution assumption in TD data. In a first section, statistical evidence for mixture distributions is found using the fixed-point property test. In a second section, a mathematical process model with mixture distributions at its core is fitted to the response time distribution data. Both approaches provide good evidence in support of the mixture-distribution assumption, and thus support temporal distinctiveness accounts of the data.
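
    The testable signature is that binary mixtures f_p(x) = p·f_fast(x) + (1-p)·f_slow(x) all intersect where the component densities are equal, whatever the mixture proportion p (the fixed-point property). A sketch with arbitrary Gaussian components; all parameters are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(200, 1200, 2001)            # response time grid, ms
f_fast = norm.pdf(x, loc=450, scale=60)     # retrieval succeeds: fast RTs
f_slow = norm.pdf(x, loc=800, scale=120)    # retrieval fails: slow RTs

# Mixture densities for three proportions all share one fixed point
mixtures = [p * f_fast + (1 - p) * f_slow for p in (0.2, 0.5, 0.8)]

# The fixed point lies where f_fast = f_slow (search the interior only)
interior = (x > 350) & (x < 1000)
idx = np.argmin(np.abs(f_fast - f_slow)[interior])
print("fixed point near RT =", x[interior][idx], "ms")
print("density spread there =", np.ptp([m[interior][idx] for m in mixtures]))
```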

  2. An optical flow algorithm based on gradient constancy assumption for PIV image processing

    International Nuclear Information System (INIS)

    Zhong, Qianglong; Yang, Hua; Yin, Zhouping

    2017-01-01

    Particle image velocimetry (PIV) has matured as a flow measurement technique. It enables the description of the instantaneous velocity field of the flow by analyzing the particle motion obtained from digitally recorded images. The correlation-based PIV evaluation technique is widely used because of its good accuracy and robustness. Although very successful, the correlation PIV technique has some weaknesses which can be avoided by optical flow based PIV algorithms. At present, most optical flow methods applied to PIV are based on the brightness constancy assumption. However, some factors of flow imaging technology and the nature of fluids make the brightness constancy assumption less appropriate in real PIV cases. In this paper, an implementation of a 2D optical flow algorithm (GCOF) based on the gradient constancy assumption is introduced. The proposed GCOF assumes the edges of the illuminated PIV particles are constant during motion. It comprises two terms: a combined local-global gradient data term and a first-order divergence and vorticity smoothing term. The approach can provide accurate dense motion fields. It was tested on synthetic images and on two experimental flows. The comparison of GCOF with other optical flow algorithms indicates that the proposed method is more accurate, especially under illumination variation. The comparison of GCOF with the correlation PIV technique shows that the proposed GCOF has advantages in preserving small divergence and vorticity structures of the motion field and produces fewer outliers. As a consequence, GCOF acquires a more accurate and better topological description of the turbulent flow.
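
    The gradient constancy assumption replaces "image intensity is preserved along the flow" with "the spatial gradient of intensity is preserved along the flow". A minimal sketch of the corresponding data-term residual, assuming a known candidate displacement field (nearest-neighbour warping keeps the sketch short; a real solver such as GCOF would interpolate and linearize this term, so this is illustrative only):

```python
import numpy as np

def gradient_constancy_residual(I1, I2, u, v):
    """Data-term residual under the gradient constancy assumption:
    grad I2(x + w) - grad I1(x) should vanish for the true flow
    w = (u, v). Not the GCOF implementation, just the idea."""
    gy1, gx1 = np.gradient(I1)
    gy2, gx2 = np.gradient(I2)
    H, W = I1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yw = np.clip(np.rint(ys + v).astype(int), 0, H - 1)   # warped rows
    xw = np.clip(np.rint(xs + u).astype(int), 0, W - 1)   # warped cols
    return np.hypot(gx2[yw, xw] - gx1, gy2[yw, xw] - gy1)

# Demo: a pure 2-pixel horizontal shift; the residual is near zero
# (away from the borders) for the true flow u = 2, v = 0
I1 = np.random.rand(64, 64)
I2 = np.roll(I1, shift=2, axis=1)
r = gradient_constancy_residual(I1, I2, 2 * np.ones((64, 64)), np.zeros((64, 64)))
print(r.mean())
```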

  3. Evaluating methodological assumptions of a catch-curve survival estimation of unmarked precocial shorebird chicks

    Science.gov (United States)

    McGowan, Conor P.; Gardner, Beth

    2013-01-01

    Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method for using a modified catch-curve to estimate precocial chick daily survival from age-based count data was presented using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios where the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with the true survival rates of the simulation. Violating the accurate-aging and independence assumptions did not result in biased daily survival estimates, whereas unequal detection of younger or older birds and violation of the birth-death equilibrium did result in estimator bias. Ensuring that all ages are equally detectable and timing data collection to approximately meet the birth-death equilibrium are key to the successful use of this method for precocial shorebirds.
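
    Under constant daily survival φ and a birth-death equilibrium, expected counts decline geometrically with age (N_a ∝ φ^a), so the slope of ln(count) against age estimates ln(φ). A generic sketch of that catch-curve regression (our illustration, not the authors' exact estimator):

```python
import numpy as np

def catch_curve_daily_survival(ages, counts):
    """Catch-curve estimate of daily chick survival from age-based
    counts: regress ln(count) on age; exp(slope) estimates phi."""
    slope, _ = np.polyfit(ages, np.log(counts), 1)
    return np.exp(slope)

# Example: simulated counts of chicks aged 0-24 days with phi = 0.95
rng = np.random.default_rng(1)
ages = np.arange(25)
counts = rng.poisson(200 * 0.95 ** ages) + 1   # +1 avoids log(0)
print(catch_curve_daily_survival(ages, counts))  # ~0.95
```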

  4. Treatment of chronic heel osteomyelitis in vasculopathic patients. Can the combined use of Integra® , skin graft and negative pressure wound therapy be considered a valid therapeutic approach after partial tangential calcanectomy?

    Science.gov (United States)

    Fraccalvieri, Marco; Pristerà, Giuseppe; Zingarelli, Enrico; Ruka, Erind; Bruschi, Stefano

    2012-04-01

    Osteomyelitis of the calcaneus is a difficult problem to manage. Patients affected by osteomyelitis of the calcaneus often undergo a below-the-knee amputation because of their comorbidities. In this article, we present seven cases of heel ulceration with chronic osteomyelitis treated with Integra(®) Dermal Regeneration Template, skin graft and negative pressure wound therapy after partial tangential calcanectomy, discussing the surgical and functional results. In this case series, all wounds healed after skin grafting of the neodermis generated by Integra(®), with no patient requiring a below-knee amputation. © 2011 The Authors. © 2011 Blackwell Publishing Ltd and Medicalhelplines.com Inc.

  5. Vocational Didactics: Core Assumptions and Approaches from Denmark, Germany, Norway, Spain and Sweden

    Science.gov (United States)

    Gessler, Michael; Moreno Herrera, Lázaro

    2015-01-01

    The design of vocational didactics has to meet special requirements. Six core assumptions are identified: outcome orientation, cultural-historical embedding, horizontal structure, vertical structure, temporal structure, and the changing nature of work. Different approaches and discussions from school-based systems (Spain and Sweden) and dual…

  6. Anti-Atheist Bias in the United States: Testing Two Critical Assumptions

    OpenAIRE

    Swan, Lawton K; Heesacker, Martin

    2012-01-01

    Decades of opinion polling and empirical investigations have clearly demonstrated a pervasive anti-atheist prejudice in the United States. However, much of this scholarship relies on two critical and largely unaddressed assumptions: (a) that when people report negative attitudes toward atheists, they do so because they are reacting specifically to their lack of belief in God; and (b) that survey questions asking about attitudes toward atheists as a group yield reliable information about biase...

  7. Effective Schroedinger equations on submanifolds

    Energy Technology Data Exchange (ETDEWEB)

    Wachsmuth, Jakob

    2010-02-11

    In this thesis the time-dependent Schroedinger equation is considered on a Riemannian manifold A with a potential that localizes a certain class of states close to a fixed submanifold C, the constraint manifold. When the potential is scaled in the directions normal to C by a small parameter epsilon, the solutions concentrate in an epsilon-neighborhood of the submanifold. An effective Schroedinger equation on the submanifold C is derived and it is shown that its solutions, suitably lifted to A, approximate the solutions of the original equation on A up to errors of order ε³|t| at time t. Furthermore, it is proved that, under reasonable conditions, the eigenvalues of the corresponding Hamiltonians below a certain energy coincide up to errors of order ε³. These results hold in the situation where tangential and normal energies are of the same order, and where exchange between normal and tangential energies occurs. In earlier results tangential energies were assumed to be small compared to normal energies, and rather restrictive assumptions were needed to ensure that the separation of energies is maintained during the time evolution. The most important consequence of this thesis is that constraining potentials that change their shape along the submanifold can now be treated, which is the typical situation in applications like molecular dynamics and quantum waveguides.

  8. Proposed optical test of Bell's inequalities not resting upon the fair sampling assumption

    International Nuclear Information System (INIS)

    Santos, Emilio

    2004-01-01

    Arguments are given against the fair sampling assumption, used to claim an empirical disproof of local realism. New tests are proposed, able to discriminate between quantum mechanics and a restricted, but appealing, family of local hidden-variables models. Such tests require detectors with efficiencies just above 20%.

  9. Post Stereotypes: Deconstructing Racial Assumptions and Biases through Visual Culture and Confrontational Pedagogy

    Science.gov (United States)

    Jung, Yuha

    2015-01-01

    The Post Stereotypes project embodies confrontational pedagogy and involves postcard artmaking designed to both solicit expression of and deconstruct students' racial, ethnic, and cultural stereotypes and assumptions. As part of the Cultural Diversity in American Art course, students created postcard art that visually represented their personal…

  10. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Gernaey, Krist V.; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant...

  11. 76 FR 81966 - Agency Information Collection Activities; Proposed Collection; Comments Requested; Assumption of...

    Science.gov (United States)

    2011-12-29

    ... Indian country is subject to State criminal jurisdiction under Public Law 280 (18 U.S.C. 1162(a)) to... Collection; Comments Requested; Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian Country ACTION: 60-Day notice of information collection under review. The Department of Justice...

  12. CHILDREN'S EDUCATION IN THE REGULAR NATIONAL BASIS: ASSUMPTIONS AND INTERFACES WITH PHYSICAL EDUCATION

    Directory of Open Access Journals (Sweden)

    André da Silva Mello

    2016-09-01

    Full Text Available This paper discusses the organization of Children's Education within the Regular National Curricular Basis (BNCC), focusing on the continuities and advances relative to the preceding documents, and analyzing the presence of Physical Education in Children's Education based on the assumptions that guide the Base, in interface with research on pedagogical experiences in this field of knowledge. To do so, it carries out a documental-bibliographic analysis, using as sources the BNCC, the National Curricular Referential for Children's Education, the National Curricular Guidelines for Children's Education and academic-scientific productions of the Physical Education area that address Children's Education. In the analysis process, the work establishes categories which allow the interlocution among the different sources used in this study. The data analyzed indicate that the assumptions present in the BNCC dialogue, though not explicitly, with the movements of the curricular component and with the Physical Education academic-scientific production regarding Children's Education.

  13. Ancestral assumptions and the clinical uncertainty of evolutionary medicine.

    Science.gov (United States)

    Cournoyea, Michael

    2013-01-01

    Evolutionary medicine is an emerging field of medical studies that uses evolutionary theory to explain the ultimate causes of health and disease. Educational tools, online courses, and medical school modules are being developed to help clinicians and students reconceptualize health and illness in light of our evolutionary past. Yet clinical guidelines based on our ancient life histories are epistemically weak, relying on the controversial assumptions of adaptationism and advocating a strictly biophysical account of health. To fulfill the interventionist goals of clinical practice, it seems that proximate explanations are all we need to develop successful diagnostic and therapeutic guidelines. Considering these epistemic concerns, this article argues that the clinical relevance of evolutionary medicine remains uncertain at best.

  14. Polarized BRDF for coatings based on three-component assumption

    Science.gov (United States)

    Liu, Hong; Zhu, Jingping; Wang, Kai; Xu, Rong

    2017-02-01

    A pBRDF (polarized bidirectional reflectance distribution function) model for coatings is given based on a three-component reflection assumption in order to improve the polarized scattering simulation capability for space objects. In this model, the specular reflection is given based on microfacet theory, while the multiple reflection and volume scattering are given separately according to experimental results. The polarization of the specular reflection is derived from Fresnel's law, and both multiple reflection and volume scattering are assumed depolarized. Simulation and measurement results for two satellite coating samples, SR107 and S781, are given to validate that the pBRDF modeling accuracy can be significantly improved by the three-component model given in this paper.

  15. Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

    International Nuclear Information System (INIS)

    Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David

    2002-01-01

    Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model
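    For orientation, a toy, unauthenticated chained group Diffie-Hellman for three parties (a hedged sketch only; the paper's protocol adds authentication, dynamic membership operations and a formal security model on top of this core, and the down-flow broadcast is merely mimicked here):

    ```python
    # Toy chained group Diffie-Hellman for three members; illustration only.
    import secrets

    p = 2**127 - 1   # small demo prime; real groups use standardized moduli
    g = 3

    x1, x2, x3 = (secrets.randbelow(p - 2) + 1 for _ in range(3))

    # Up-flow: each member raises the running value to its private exponent.
    t1 = pow(g, x1, p)            # g^x1
    t2 = pow(t1, x2, p)           # g^(x1*x2)
    key3 = pow(t2, x3, p)         # member 3 derives g^(x1*x2*x3)

    # Down-flow (mimicked): each member completes its own partial value.
    key1 = pow(pow(pow(g, x2, p), x3, p), x1, p)
    key2 = pow(pow(pow(g, x1, p), x3, p), x2, p)
    assert key1 == key2 == key3   # all members share the same group secret
    ```

    The security result in the paper concerns the authenticated version of such a protocol, proven under the decisional Diffie-Hellman assumption in the standard model.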

  16. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    Science.gov (United States)

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

    The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data regarding renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  17. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus

    Directory of Open Access Journals (Sweden)

    Constantinos Taliotis

    2017-10-01

    Full Text Available The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data regarding renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  18. Investigating Teachers’ and Students’ Beliefs and Assumptions about CALL Programme at Caledonian College of Engineering

    Directory of Open Access Journals (Sweden)

    Holi Ibrahim Holi Ali

    2012-01-01

    Full Text Available This study investigates students' and teachers' perceptions and assumptions about the newly implemented CALL programme at the School of Foundation Studies, Caledonian College of Engineering, Oman. Two versions of a questionnaire were administered to 24 teachers and 90 students to collect their beliefs and assumptions about the CALL programme. The results show that the great majority of the students report that CALL is very interesting, motivating and useful to them and that they learn a lot from it. However, the number of CALL hours should be increased, the lab should be equipped and arranged in a user-friendly way, assessment should be integrated into CALL, and smart boards and blackboards should be incorporated into the programme.

  19. Towards New Probabilistic Assumptions in Business Intelligence

    Directory of Open Access Journals (Sweden)

    Schumann Andrew

    2015-01-01

    Full Text Available One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot be observable and additive in principle. These variables can be called symbolic values or symbolic meanings and studied within symbolic interactionism, the theory developed since George Herbert Mead and Herbert Blumer. In statistical and econometric tools of business intelligence we accept only phenomena with causal connections measured by additive measures. In the paper we show that in the social world we deal with symbolic interactions which can be studied by non-additive labels (symbolic meanings or symbolic values. For accepting the variety of such phenomena we should avoid additivity of basic labels and construct a new probabilistic method in business intelligence based on non-Archimedean probabilities.

  20. Assumptions for the Annual Energy Outlook 1993

    International Nuclear Information System (INIS)

    1993-01-01

    This report is an auxiliary document to the Annual Energy Outlook 1993 (AEO) (DOE/EIA-0383(93)). It presents a detailed discussion of the assumptions underlying the forecasts in the AEO. The energy modeling system is an economic equilibrium system, with component demand modules representing end-use energy consumption by major end-use sector. Another set of modules represents petroleum, natural gas, coal, and electricity supply patterns and pricing. A separate module generates annual forecasts of important macroeconomic and industrial output variables. Interactions among these components of energy markets generate projections of prices and quantities for which energy supply equals energy demand. This equilibrium modeling system is referred to as the Intermediate Future Forecasting System (IFFS). The supply models in IFFS for oil, coal, natural gas, and electricity determine supply and price for each fuel depending upon consumption levels, while the demand models determine consumption depending upon end-use price. IFFS solves for market equilibrium for each fuel by balancing supply and demand to produce an energy balance in each forecast year
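    The equilibrium idea can be illustrated with a deliberately tiny sketch (a hypothetical one-fuel market, in no way the IFFS model itself): iterate the price until supply equals demand.

    ```python
    # Toy market-clearing iteration: price adjusts until supply meets demand.
    def demand(price):      # consumption falls as price rises
        return 100.0 / price

    def supply(price):      # production rises with price
        return 20.0 * price

    price = 1.0
    for _ in range(200):
        excess = demand(price) - supply(price)
        price += 0.01 * excess          # nudge price toward balance
    print(round(price, 3), round(demand(price), 2), round(supply(price), 2))
    ```

    IFFS performs the analogous balancing simultaneously for oil, coal, natural gas, and electricity in each forecast year, with demand modules feeding back into supply and pricing modules.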

  1. 26 CFR 1.752-6 - Partnership assumption of partner's section 358(h)(3) liability after October 18, 1999, and...

    Science.gov (United States)

    2010-04-01

    ... general. If, in a transaction described in section 721(a), a partnership assumes a liability (defined in...) does not apply to an assumption of a liability (defined in section 358(h)(3)) by a partnership as part... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Partnership assumption of partner's section 358...

  2. Assessing the skill of hydrology models at simulating the water cycle in the HJ Andrews LTER: Assumptions, strengths and weaknesses

    Science.gov (United States)

    Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow…

  3. Prediction of unburned carbon and NOx in a tangentially fired power station using single coals and blends

    Energy Technology Data Exchange (ETDEWEB)

    R.I. Backreedy; J.M. Jones; L. Ma; M. Pourkashanian; A. Williams; A. Arenillas; B. Arias; J.J. Pis; F. Rubiera [University of Leeds, Leeds (United Kingdom). Energy and Resources Research Institute

    2005-12-01

    Two approaches can be employed for prediction of NOx and unburned carbon. The first approach uses global models such as the 'slice' model, which requires the combustor reaction conditions as an input but which has a detailed coal combustion mechanism. The second involves a computational fluid dynamics model that in principle can give detailed information about all aspects of combustion, but usually is restricted in the detail of the combustion model because of the heavy computational demands. The slice model approach can be seen to be complementary to the CFD approach, since the NOx and carbon burnout are computed using the slice model as a post-processor to the CFD model computation. The slice model that has been used previously by our group is applied to a commercial tangentially fired combustor operated in Spain and using a range of Spanish coals and imported coals, some of which are fired as blends. The computed results are compared with experimental measurements, and the accuracy of the approach is assessed. The CFD model applied to this case is one of the commercial codes modified to use a number of coal combustion sub-models developed by our group. In particular, it can use two independent streams of coal, and as such it can be used for the combustion of coal blends. The results show that both model approaches can give good predictions of the NOx and carbon in ash despite the fact that certain parts of the coal combustion models are not exactly the same. However, if a detailed insight into the combustor behaviour is required, then the CFD model must be used. 28 refs., 4 figs., 6 tabs.

  4. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    Directory of Open Access Journals (Sweden)

    Elżbieta Sandurska

    2016-12-01

    Full Text Available Introduction: Statistical software typically does not require extensive statistical knowledge, allowing even complex analyses to be performed easily. Consequently, test selection criteria and important assumptions may be easily overlooked or given insufficient consideration. In such cases, the results may well lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in the field of sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, and therefore some of the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances across groups. If the assumptions are violated, the original design of the test is impaired, and the test may then be compromised, giving spurious results. A simple method to normalize the data and to stabilize the variance is to use transformations. If such an approach fails, a good alternative to consider is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis or Wilcoxon signed-rank tests. Summary: Thorough verification of the assumptions of parametric tests allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in data characteristic of sports science studies comes down to a straightforward procedure.
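    A hedged sketch of the decision flow the paper recommends (synthetic data; the 0.05 thresholds and the log transformation are illustrative choices): check normality and variance homogeneity, try a transformation, and fall back to a nonparametric test.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.lognormal(mean=0.0, sigma=0.6, size=30)   # skewed sample
    group_b = rng.lognormal(mean=0.3, sigma=0.6, size=30)

    normal_a = stats.shapiro(group_a).pvalue > 0.05
    normal_b = stats.shapiro(group_b).pvalue > 0.05
    equal_var = stats.levene(group_a, group_b).pvalue > 0.05

    if normal_a and normal_b and equal_var:
        result = stats.ttest_ind(group_a, group_b)
    elif normal_a and normal_b:
        result = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch
    else:
        # Try a log transformation; if it normalizes, run the t-test on logs.
        la, lb = np.log(group_a), np.log(group_b)
        if stats.shapiro(la).pvalue > 0.05 and stats.shapiro(lb).pvalue > 0.05:
            result = stats.ttest_ind(la, lb)
        else:
            result = stats.mannwhitneyu(group_a, group_b)
    print(result)
    ```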

  5. Is a "Complex" Task Really Complex? Validating the Assumption of Cognitive Task Complexity

    Science.gov (United States)

    Sasayama, Shoko

    2016-01-01

    In research on task-based learning and teaching, it has traditionally been assumed that differing degrees of cognitive task complexity can be inferred through task design and/or observations of differing qualities in linguistic production elicited by second language (L2) communication tasks. Without validating this assumption, however, it is…

  6. Estimating Risks and Relative Risks in Case-Base Studies under the Assumptions of Gene-Environment Independence and Hardy-Weinberg Equilibrium

    Science.gov (United States)

    Chui, Tina Tsz-Ting; Lee, Wen-Chung

    2014-01-01

    Many diseases result from the interactions between genes and the environment. An efficient method has been proposed for a case-control study to estimate the genetic and environmental main effects and their interactions, which exploits the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. This approach is based on a conditional logistic regression of case-counterfactual controls matched data. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also results in smaller variances and shorter confidence intervals as compared with a previous method for a case-base study which imposes neither assumption. PMID:25137392

  7. Estimating risks and relative risks in case-base studies under the assumptions of gene-environment independence and Hardy-Weinberg equilibrium.

    Directory of Open Access Journals (Sweden)

    Tina Tsz-Ting Chui

    Full Text Available Many diseases result from the interactions between genes and the environment. An efficient method has been proposed for a case-control study to estimate the genetic and environmental main effects and their interactions, which exploits the assumptions of gene-environment independence and Hardy-Weinberg equilibrium. To estimate the absolute and relative risks, one needs to resort to an alternative design: the case-base study. In this paper, the authors show how to analyze a case-base study under the above dual assumptions. This approach is based on a conditional logistic regression of case-counterfactual controls matched data. It can be easily fitted with readily available statistical packages. When the dual assumptions are met, the method is approximately unbiased and has adequate coverage probabilities for confidence intervals. It also results in smaller variances and shorter confidence intervals as compared with a previous method for a case-base study which imposes neither assumption.

  8. Testing the basic assumption of the hydrogeomorphic approach to assessing wetland functions.

    Science.gov (United States)

    Hruby, T

    2001-05-01

    The hydrogeomorphic (HGM) approach for developing "rapid" wetland function assessment methods stipulates that the variables used are to be scaled based on data collected at sites judged to be the best at performing the wetland functions (reference standard sites). A critical step in the process is to choose the least altered wetlands in a hydrogeomorphic subclass to use as a reference standard against which other wetlands are compared. The basic assumption made in this approach is that wetlands judged to have had the least human impact have the highest level of sustainable performance for all functions. The levels at which functions are performed in these least altered wetlands are assumed to be "characteristic" for the subclass and "sustainable." Results from data collected in wetlands in the lowlands of western Washington suggest that the assumption may not be appropriate for this region. Teams developing methods for assessing wetland functions did not find that the least altered wetlands in a subclass had a range of performance levels that could be identified as "characteristic" or "sustainable." Forty-four wetlands in four hydrogeomorphic subclasses (two depressional subclasses and two riverine subclasses) were rated by teams of experts on the severity of their human alterations and on the level of performance of 15 wetland functions. An ordinal scale of 1-5 was used to quantify alterations in water regime, soils, vegetation, buffers, and contributing basin. Performance of functions was judged on an ordinal scale of 1-7. Relatively unaltered wetlands were judged to perform individual functions at levels that spanned all of the seven possible ratings in all four subclasses. The basic assumption of the HGM approach, that the least altered wetlands represent "characteristic" and "sustainable" levels of functioning that are different from those found in altered wetlands, was not confirmed. Although the intent of the HGM approach is to use level of functioning as a

  9. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    Science.gov (United States)

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
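    To make the equal error utility structure concrete, a small hedged sketch (utility values invented for illustration): within each column of the utility matrix, both incorrect decisions share one value, and the rule picks the class with maximal expected utility.

    ```python
    import numpy as np

    # U[i, j] = utility of deciding class i when the truth is class j.
    # Equal error utility: in each column, the two wrong rows share a value.
    U = np.array([[ 1.0, -0.5, -0.2],
                  [-0.5,  1.0, -0.2],
                  [-0.5, -0.5,  0.8]])

    def decide(posterior):
        """Pick the class maximizing expected utility given P(class | data)."""
        return int(np.argmax(U @ posterior))

    print(decide(np.array([0.2, 0.5, 0.3])))   # -> index of the chosen class
    ```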

  10. A simulation study to compare three self-controlled case series approaches: correction for violation of assumption and evaluation of bias.

    Science.gov (United States)

    Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J

    2013-08-01

    The assumption that the occurrence of outcome event must not alter subsequent exposure probability is critical for preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure cases only approach could handle only one exposure, the pseudo-likelihood approach was able to correct bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence which could be corrected by alternative SCCS approaches. In multiple exposure situations, the pseudo-likelihood approach is optimal; the post-exposure cases only approach is limited in handling a second exposure and may introduce additional bias, thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.
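    For orientation, a minimal standard SCCS likelihood for a single risk window per case (made-up person-time and event indicators; no age effects, and the paper's pseudo-likelihood correction and multi-exposure handling are beyond this sketch): with one event per case, the conditional probability that the event falls in the exposed window is ρe/(ρe + u), where ρ is the relative incidence and e, u are the exposed and unexposed person-times.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def neg_loglik(log_rr, e, u, z):
        rr = np.exp(log_rr)
        p_exp = rr * e / (rr * e + u)   # P(event in risk window | one event)
        return -np.sum(z * np.log(p_exp) + (1 - z) * np.log(1 - p_exp))

    e = np.array([42.0, 42.0, 42.0, 42.0])   # days at risk after exposure
    u = np.array([323.0, 323.0, 323.0, 323.0])
    z = np.array([1, 1, 0, 1])               # event inside the risk window?

    fit = minimize_scalar(neg_loglik, args=(e, u, z),
                          bounds=(-5, 5), method="bounded")
    print("relative incidence estimate:", np.exp(fit.x))
    ```

    The bias studied in the paper arises when z influences subsequent e, which this simple likelihood assumes cannot happen.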

  11. Teaching and Learning Science in the 21st Century: Challenging Critical Assumptions in Post-Secondary Science

    Directory of Open Access Journals (Sweden)

    Amanda L. Glaze

    2018-01-01

    Full Text Available It is widely agreed upon that the goal of science education is building a scientifically literate society. Although there are a range of definitions for science literacy, most involve an ability to problem solve, make evidence-based decisions, and evaluate information in a manner that is logical. Unfortunately, science literacy appears to be an area where we struggle across levels of study, including with students who are majoring in the sciences in university settings. One reason for this problem is that we have opted to continue to approach teaching science in a way that fails to consider the critical assumptions that faculties in the sciences bring into the classroom. These assumptions include expectations of what students should know before entering given courses, whose responsibility it is to ensure that students entering courses understand basic scientific concepts, the roles of researchers and teachers, and approaches to teaching at the university level. Acknowledging these assumptions and the potential for action to shift our teaching and thinking about post-secondary education represents a transformative area in science literacy and preparation for the future of science as a field.

  12. Testing legal assumptions regarding the effects of dancer nudity and proximity to patron on erotic expression.

    Science.gov (United States)

    Linz, D; Blumenthal, E; Donnerstein, E; Kunkel, D; Shafer, B J; Lichtenstein, A

    2000-10-01

    A field experiment was conducted in order to test the assumptions by the Supreme Court in Barnes v. Glen Theatre, Inc. (1991) and the Ninth Circuit Court of Appeals in Colacurcio v. City of Kent (1999) that government restrictions on dancer nudity and dancer-patron proximity do not affect the content of messages conveyed by erotic dancers. A field experiment was conducted in which dancer nudity (nude vs. partial clothing) and dancer-patron proximity (4 feet; 6 in.; 6 in. plus touch) were manipulated under controlled conditions in an adult night club. After male patrons viewed the dances, they completed questionnaires assessing affective states and reception of erotic, relational intimacy, and social messages. Contrary to the assumptions of the courts, the results showed that the content of messages conveyed by the dancers was significantly altered by restrictions placed on dancer nudity and dancer-patron proximity. These findings are interpreted in terms of social psychological responses to nudity and communication theories of nonverbal behavior. The legal implications of rejecting the assumptions made by the courts in light of the findings of this study are discussed. Finally, suggestions are made for future research.

  13. On the Validity of the “Thin” and “Thick” Double-Layer Assumptions When Calculating Streaming Currents in Porous Media

    Directory of Open Access Journals (Sweden)

    Matthew D. Jackson

    2012-01-01

    Full Text Available We find that the thin double layer assumption, in which the thickness of the electrical diffuse layer is assumed small compared to the radius of curvature of a pore or throat, is valid in a capillary tubes model so long as the capillary radius is >200 times the double layer thickness, while the thick double layer assumption, in which the diffuse layer is assumed to extend across the entire pore or throat, is valid so long as the capillary radius is more than 6 times smaller than the double layer thickness. At low surface charge density and high brine concentration (>0.5 M), the validity criteria are less stringent. Our results suggest that the thin double layer assumption is valid in sandstones at low specific surface charge (<10 mC⋅m−2, but may not be valid in sandstones of moderate- to small pore-throat size at higher surface charge if the brine concentration is low (<0.001 M. The thick double layer assumption is likely to be valid in mudstones at low brine concentration (<0.1 M and surface charge (<10 mC⋅m−2, but at higher surface charge, it is likely to be valid only at low brine concentration (<0.003 M. Consequently, neither assumption may be valid in mudstones saturated with natural brines.
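    The validity checks reduce to comparing a capillary radius against the Debye length; a hedged sketch (standard constants, 1:1 electrolyte, thresholds taken from the abstract):

    ```python
    import numpy as np

    def debye_length_m(conc_molar, temp_k=298.0, eps_r=78.5):
        """Debye length of a 1:1 electrolyte; concentration in mol/L."""
        e, kb, na, eps0 = 1.602e-19, 1.381e-23, 6.022e23, 8.854e-12
        ionic = 2 * 1000.0 * na * e**2 * conc_molar
        return np.sqrt(eps0 * eps_r * kb * temp_k / ionic)

    lam = debye_length_m(0.001)     # ~9.6 nm at 1 mM
    radius = 1e-6                   # hypothetical 1-micron capillary
    print("thin double layer valid: ", radius > 200 * lam)
    print("thick double layer valid:", radius < lam / 6)
    ```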

  14. The Avalanche Hypothesis and Compression of Morbidity: Testing Assumptions through Cohort-Sequential Analysis.

    Directory of Open Access Journals (Sweden)

    Jordan Silberman

    Full Text Available The compression of morbidity model posits a breakpoint in the adult lifespan that separates an initial period of relative health from a subsequent period of ever increasing morbidity. Researchers often assume that such a breakpoint exists; however, this assumption is hitherto untested.To test the assumption that a breakpoint exists--which we term a morbidity tipping point--separating a period of relative health from a subsequent deterioration in health status. An analogous tipping point for healthcare costs was also investigated.Four years of adults' (N = 55,550 morbidity and costs data were retrospectively analyzed. Data were collected in Pittsburgh, PA between 2006 and 2009; analyses were performed in Rochester, NY and Ann Arbor, MI in 2012 and 2013. Cohort-sequential and hockey stick regression models were used to characterize long-term trajectories and tipping points, respectively, for both morbidity and costs.Morbidity increased exponentially with age (P<.001. A morbidity tipping point was observed at age 45.5 (95% CI, 41.3-49.7. An exponential trajectory was also observed for costs (P<.001, with a costs tipping point occurring at age 39.5 (95% CI, 32.4-46.6. Following their respective tipping points, both morbidity and costs increased substantially (Ps<.001.Findings support the existence of a morbidity tipping point, confirming an important but untested assumption. This tipping point, however, may occur earlier in the lifespan than is widely assumed. An "avalanche of morbidity" occurred after the morbidity tipping point-an ever increasing rate of morbidity progression. For costs, an analogous tipping point and "avalanche" were observed. The time point at which costs began to increase substantially occurred approximately 6 years before health status began to deteriorate.
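    The tipping-point estimation can be illustrated with a hockey stick (broken-stick) regression sketch on synthetic data (illustrative only, not the study's cohort-sequential analysis): a flat segment followed by a linear rise, with the breakpoint fitted as a free parameter.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hockey_stick(age, breakpoint, level, slope):
        # Flat at `level` before the breakpoint, linear rise afterwards.
        return level + slope * np.maximum(age - breakpoint, 0.0)

    rng = np.random.default_rng(0)
    ages = np.linspace(20, 80, 200)
    morbidity = hockey_stick(ages, 45.0, 1.0, 0.15) \
        + rng.normal(0, 0.3, ages.size)

    params, _ = curve_fit(hockey_stick, ages, morbidity, p0=[50.0, 1.0, 0.1])
    print(f"estimated tipping point: {params[0]:.1f} years")
    ```

    An exponential trajectory, as reported in the study, would be handled analogously by swapping the post-breakpoint term for an exponential one.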

  15. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Directory of Open Access Journals (Sweden)

    Giordano James

    2010-01-01

    Full Text Available Abstract A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice.

  16. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Science.gov (United States)

    2010-01-01

    A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice. PMID:20109176

  17. Testing Mean Differences among Groups: Multivariate and Repeated Measures Analysis with Minimal Assumptions.

    Science.gov (United States)

    Bathke, Arne C; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne

    2018-03-22

    To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer's disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regards to some of the factors involved.
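    A simplified two-group version of the idea can be sketched as follows (hedged illustration, not the authors' procedure: a Wald-type statistic is referenced against a parametric bootstrap null that keeps group-specific covariances but imposes a common mean):

    ```python
    import numpy as np

    def wald_stat(x, y):
        # Distance between mean vectors, scaled by the pooled standard errors.
        dx = x.mean(0) - y.mean(0)
        v = np.cov(x.T) / len(x) + np.cov(y.T) / len(y)
        return dx @ np.linalg.solve(v, dx)

    rng = np.random.default_rng(0)
    x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=25)
    y = rng.multivariate_normal([0.5, 0.0], [[2.0, -0.2], [-0.2, 0.5]], size=30)

    t_obs = wald_stat(x, y)
    boot = []
    for _ in range(2000):
        # Resample under H0: common (zero) mean, group-specific covariances.
        xb = rng.multivariate_normal(np.zeros(2), np.cov(x.T), size=len(x))
        yb = rng.multivariate_normal(np.zeros(2), np.cov(y.T), size=len(y))
        boot.append(wald_stat(xb, yb))
    print("bootstrap p-value:", np.mean(np.array(boot) >= t_obs))
    ```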

  18. About the inclusion of eddy currents in micromagnetic computations

    International Nuclear Information System (INIS)

    Torres, L.; Martinez, E.; Lopez-Diaz, L.; Alejos, O.

    2004-01-01

    A three-dimensional dynamic micromagnetic model including the effect of eddy currents, and its application to magnetization reversal processes in Permalloy nanostructures, is presented. The model assumptions are tangential current on the nanostructure surface, electrical neutrality, and negligible displacement current. The method for solving the Landau-Lifshitz-Gilbert equation coupled to Maxwell's equations incorporating Faraday's law is discussed in detail. The results presented for Permalloy nanocubes of 40 nm side show how the effect of eddy currents can advance the magnetization switching. The dependence of the calculations on computational cell size is also reported.
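    The core time step of such a model is the Landau-Lifshitz-Gilbert equation; here is a macrospin sketch without the eddy-current coupling (parameter values are typical Permalloy assumptions, not the paper's setup; the full model adds Maxwell's equations and a spatial mesh):

    ```python
    import numpy as np

    gamma = 2.211e5      # gyromagnetic ratio, m/(A*s)
    alpha = 0.02         # Gilbert damping
    Ms = 8.0e5           # Permalloy saturation magnetization, A/m

    def llg_rhs(m, h_eff):
        """dm/dt for a unit magnetization vector m in effective field h_eff."""
        prefac = -gamma / (1.0 + alpha**2)
        mxh = np.cross(m, h_eff)
        return prefac * (mxh + alpha * np.cross(m, mxh))

    m = np.array([1.0, 0.01, 0.0]); m /= np.linalg.norm(m)
    h_applied = np.array([-0.1 * Ms, 0.0, 0.0])   # reversing field along -x

    dt = 1e-13
    for _ in range(20000):
        m = m + dt * llg_rhs(m, h_applied)        # explicit Euler step
        m /= np.linalg.norm(m)                    # keep |m| = 1
    print(m)                                      # relaxes toward -x
    ```

    Including eddy currents adds an induced contribution to the effective field at each step, which is how the dissipative coupling can advance switching.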

  19. Rethinking The Going Concern Assumption As A Pre-Condition For Accounting Measurement

    OpenAIRE

    Saratiel Wedzerai Musvoto; Daan G Gouws

    2011-01-01

    This study compares the principles of the going concern concept against the principles of representational measurement to determine if it is possible to establish foundations of accounting measurement with the going concern concept as a precondition. Representational measurement theory is a theory that establishes measurement in social scientific disciplines such as accounting. The going concern assumption is prescribed as one of the preconditions for measuring the attributes of the elements ...

  20. SU-F-J-55: Feasibility of Supraclavicular Field Treatment by Investigating Variation of Junction Position Between Breast Tangential and Supraclavicular Fields for Deep Inspiration Breath Hold (DIBH) Left Breast Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, H; Sarkar, V; Paxton, A; Rassiah-Szegedi, P; Huang, Y; Szegedi, M; Huang, L; Su, F; Salter, B [University Utah, Salt Lake City, UT (United States)

    2016-06-15

    Purpose: To explore the feasibility of supraclavicular field treatment by investigating the variation of the junction position between tangential and supraclavicular fields during left breast radiation using the DIBH technique. Methods: Six patients with left breast cancer treated using the DIBH technique were included in this study. The AlignRT system was used to track each patient's breast surface. During daily treatment, when the patient's DIBH reached the preset AlignRT tolerance of ±3 mm for all principal directions (vertical, longitudinal, and lateral), the remaining longitudinal offset was recorded. The average with standard deviation and the range of the daily longitudinal offset over the entire treatment course were calculated for all six patients (93 fractions in total). The ranges of average ±1σ and ±2σ were calculated; they represent the longitudinal field edge error at confidence levels of 68% and 95%. Based on these longitudinal errors, the dose at the junction between the breast tangential and supraclavicular fields for variable gap/overlap sizes was calculated as a percentage of prescription (on a representative patient treatment plan). Results: The average longitudinal offset for all patients is 0.16±1.32 mm, and the range of the longitudinal offset is −2.6 to 2.6 mm. The range of the longitudinal field edge error at the 68% confidence level is −1.48 to 1.16 mm, and at the 95% confidence level is −2.80 to 2.48 mm. With a 5 mm and a 1 mm gap, the junction dose could be as low as 37.5% and 84.9% of the prescription dose, respectively; with a 5 mm and a 1 mm overlap, the junction dose could be as high as 169.3% and 117.6%. Conclusion: We observed a longitudinal field edge error at the 95% confidence level of about ±2.5 mm, and the junction dose could reach 70% hot/cold between different DIBHs. However, over the entire course of treatment, the average junction variation for all patients is within 0.2 mm. The results of our study show it is potentially feasible to treat the supraclavicular field with breast tangents.
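    The reported error ranges follow from simple statistics of the daily offsets; a sketch with made-up numbers (not the study's measurements):

    ```python
    import numpy as np

    offsets_mm = np.array([0.4, -1.2, 2.1, 0.3, -0.8, 1.5, -2.0, 0.9])
    mean, sd = offsets_mm.mean(), offsets_mm.std(ddof=1)
    print(f"average offset: {mean:.2f} +/- {sd:.2f} mm")
    print(f"68% range (mean +/- 1*sd): {mean - sd:.2f} to {mean + sd:.2f} mm")
    print(f"95% range (mean +/- 2*sd): {mean - 2*sd:.2f} to {mean + 2*sd:.2f} mm")
    ```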

  1. “Marginal land” for energy crops: Exploring definitions and embedded assumptions

    International Nuclear Information System (INIS)

    Shortall, O.K.

    2013-01-01

    The idea of using less productive or “marginal land” for energy crops is promoted as a way to overcome the previous land use controversies faced by biofuels. It is argued that marginal land use would not compete with food production, is widely available and would incur fewer environmental impacts. This term is notoriously vague however, as are the details of how marginal land use for energy crops would work in practice. This paper explores definitions of the term “marginal land” in academic, consultancy, NGO, government and industry documents in the UK. It identifies three separate definitions of the term: land unsuitable for food production; ambiguous lower quality land; and economically marginal land. It probes these definitions further by exploring the technical, normative and political assumptions embedded within them. It finds that the first two definitions are normatively motivated: this land should be used to overcome controversies and the latter definition is predictive: this land is likely to be used. It is important that the different advantages, disadvantages and implications of the definitions are spelled out so definitions are not conflated to create unrealistic expectations about the role of marginal land in overcoming biofuels land use controversies. -- Highlights: •Qualitative methods were used to explore definitions of the term “marginal land”. •Three definitions were identified. •Two definitions focus on overcoming biomass land use controversies. •One definition predicts what land will be used for growing biomass. •Definitions contain problematic assumptions

  2. Study on Computerized Treatment Plan of Field-in-Field Intensity Modulated Radiation Therapy and Conventional Radiation Therapy according to PBC Algorithm and AAA on Breast Cancer Tangential Beam

    International Nuclear Information System (INIS)

    Yeom, Mi Suk; Bae, Seong Soo; Kim, Dae Sup; Back, Geum Mun

    2012-01-01

    The Anisotropic Analytical Algorithm (AAA) provides more accurate dose calculation with regard to the impact of scatter and tissue inhomogeneity than the Pencil Beam Convolution (PBC) algorithm. This study analyzes the difference in dose distribution between the PBC algorithm and the AAA dose calculation algorithm for breast cancer tangential plans. Computerized treatment plans using the Eclipse treatment planning system (version 8.9, VARIAN, USA) were established for 10 breast cancer patients using the 6 MV energy of a linac (CL-6EX, VARIAN, USA). After a Conventional Radiation Therapy plan (Conventional plan) and a Field-in-Field Intensity Modulated Radiation Therapy plan (FiF plan) were established with the PBC algorithm, the MU were fixed, the dose was recalculated with AAA, and the treatment plans were compared and analyzed using Dose Volume Histograms (DVH). First, evaluating the difference between the PBC algorithm and AAA for the Conventional plan, the average CI value for the target volume was evaluated higher by 0.295 with the PBC algorithm; for the ipsilateral lung, V47Gy and V45Gy were evaluated higher by 5.83% and 4.04%, respectively, with the PBC algorithm, while mean dose, V20Gy, V5Gy and V3Gy were evaluated higher by 0.6%, 0.29%, 6.35% and 10.23%, respectively, with AAA. Second, for the FiF plan, the average CI value for the target volume was evaluated higher by 0.165 with the PBC algorithm; for the ipsilateral lung, V47Gy, V45Gy and mean dose were evaluated higher by 6.17%, 3.80% and 0.15%, respectively, with the PBC algorithm, while V20Gy, V5Gy and V3Gy were evaluated higher by 0.14%, 4.07% and 4.35%, respectively, with AAA. When calculating with AAA for breast cancer tangential plans, the conformity of the target volume was evaluated lower by 0.295 and 0.165 for the Conventional and FiF plans, respectively, compared to the PBC algorithm, because the high-dose region of the ipsilateral lung showed less dose while the low-dose region showed more dose.

  3. Bayou Corne sinkhole : control measurements of State Highway 70 in Assumption Parish, Louisiana, tech summary.

    Science.gov (United States)

    2014-01-01

    The sinkhole located in Assumption Parish, Louisiana, threatens the stability of Highway 70, a state-maintained route. In order to mitigate the potential damaging effects of the sinkhole on this infrastructure, the Louisiana Department of Transpo...

  4. Understanding the scale of the single ion free energy: A critical test of the tetra-phenyl arsonium and tetra-phenyl borate assumption

    Science.gov (United States)

    Duignan, Timothy T.; Baer, Marcel D.; Mundy, Christopher J.

    2018-06-01

    The tetra-phenyl arsonium and tetra-phenyl borate (TATB) assumption is a commonly used extra-thermodynamic assumption that allows single ion free energies to be split into cationic and anionic contributions. The assumption is that the values for the TATB salt can be divided equally. This is justified by arguing that these large hydrophobic ions will cause a symmetric response in water. Experimental and classical simulation work has raised potential flaws with this assumption, indicating that hydrogen bonding with the phenyl ring may favor the solvation of the TB- anion. Here, we perform ab initio molecular dynamics simulations of these ions in bulk water demonstrating that there are significant structural differences. We quantify our findings by reproducing the experimentally observed vibrational shift for the TB- anion and confirm that this is associated with hydrogen bonding with the phenyl rings. Finally, we demonstrate that this results in a substantial energetic preference of the water to solvate the anion. Our results suggest that the validity of the TATB assumption, which is still widely used today, should be reconsidered experimentally in order to properly reference single ion solvation free energy, enthalpy, and entropy.

  5. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveal varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
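    For contrast with the Bayesian setups compared in the paper, a minimal frequentist apportionment sketch (tracer values invented for illustration; non-negativity and sum-to-one constraints enforced):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Columns: topsoil, road verge, subsurface; rows: three tracers.
    S = np.array([[12.0, 30.0,   5.0],
                  [ 0.8,  2.5,   0.1],
                  [150., 90.,  210.0]])
    target = S @ np.array([0.15, 0.10, 0.75])   # synthetic SPM signature

    def misfit(p):
        return np.sum((S @ p - target) ** 2)

    cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
    res = minimize(misfit, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
                   constraints=cons)
    print("estimated source proportions:", res.x.round(3))
    ```

    The Bayesian mixing models in the study replace this point estimate with full posterior distributions, which is precisely where the prior, covariance, and error-structure choices examined by the OFAT analysis enter.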

  6. Influence of simulation assumptions and input parameters on energy balance calculations of residential buildings

    International Nuclear Information System (INIS)

    Dodoo, Ambrose; Tettey, Uniben Yao Ayikoe; Gustavsson, Leif

    2017-01-01

    In this study, we modelled the influence of different simulation assumptions on energy balances of two variants of a residential building, comprising the building in its existing state and with energy-efficient improvements. We explored how selected parameter combinations and variations affect the energy balances of the building configurations. The selected parameters encompass outdoor microclimate, building thermal envelope and household electrical equipment including technical installations. Our modelling takes into account hourly as well as seasonal profiles of different internal heat gains. The results suggest that the impact of parameter interactions on calculated space heating of buildings is somewhat small and relatively more noticeable for an energy-efficient building in contrast to a conventional building. We find that the influence of parameter combinations is more apparent as more individual parameters are varied. The simulations show that a building's calculated space heating demand is significantly influenced by how heat gains from electrical equipment are modelled. For the analyzed building versions, calculated final energy for space heating differs by 9–14 kWh/m² depending on the assumed energy efficiency level for electrical equipment. The influence of electrical equipment on calculated final space heating is proportionally more significant for an energy-efficient building compared to a conventional building. This study shows the influence of different simulation assumptions and parameter combinations when varied simultaneously. - Highlights: • Energy balances are modelled for conventional and efficient variants of a building. • Influence of assumptions and parameter combinations and variations are explored. • Parameter interactions influence is apparent as more single parameters are varied. • Calculated space heating demand is notably affected by how heat gains are modelled.

  7. An accurate tangential force-displacement model for granular-flow simulations: Contacting spheres with plastic deformation, force-driven formulation

    International Nuclear Information System (INIS)

    Vu-Quoc, L.; Lesburg, L.; Zhang, X.

    2004-01-01

    An elasto-plastic frictional tangential force-displacement (TFD) model for spheres in contact for accurate and efficient granular-flow simulations is presented in this paper; the present TFD is consistent with the elasto-plastic normal force-displacement (NFD) model presented in [ASME Journal of Applied Mechanics 67 (2) (2000) 363; Proceedings of the Royal Society of London, Series A 455 (1991) (1999) 4013]. The proposed elasto-plastic frictional TFD model is accurate, and is validated against non-linear finite-element analyses involving plastic flows under both loading and unloading conditions. The novelty of the present TFD model lies in (i) the additive decomposition of the elasto-plastic contact area radius into an elastic part and a plastic part, (ii) the correction of the particles' radii at the contact point, and (iii) the correction of the particles' elastic moduli. The correction of the contact-area radius represents an effect of plastic deformation in colliding particles; the correction of the radius of curvature represents a permanent indentation after impact; the correction of the elastic moduli represents a softening of the material due to plastic flow. The construction of both the present elasto-plastic frictional TFD model and its consistent companion, the elasto-plastic NFD model, parallels the formalism of the continuum theory of elasto-plasticity. Both NFD and TFD models form a coherent set of force-displacement (FD) models not available hitherto for granular-flow simulations, and are consistent with the Hertz, Cattaneo, Mindlin, Deresiewicz contact mechanics theory. Together, these FD models will allow for efficient simulations of granular flows (or granular gases) involving a large number of particles

  8. Breakdown of Hydrostatic Assumption in Tidal Channel with Scour Holes

    Directory of Open Access Journals (Sweden)

    Chunyan Li

    2016-10-01

    Full Text Available The hydrostatic condition is a common assumption for tidal and subtidal motions in oceans and estuaries, and theories built on this assumption have been largely successful. However, there is no definite criterion separating the hydrostatic from the non-hydrostatic regime in real applications, because real problems often have multiple scales. With the increased refinement of high-resolution numerical models encompassing smaller and smaller spatial scales, the need for non-hydrostatic models is growing. To evaluate the vertical motion over bathymetric changes in tidal channels and assess the validity of the hydrostatic approximation, we conducted observations using a vessel-based acoustic Doppler current profiler (ADCP). Observations were made along a straight channel 18 times over two scour holes of 25 m depth, separated by 330 m, in and out of an otherwise flat 8 m deep tidal pass leading to Lake Pontchartrain, over a period of 8 hours covering part of the diurnal tidal cycle. Of the 18 passages over the scour holes, 11 showed strong upwelling and downwelling that resulted in a breakdown of the hydrostatic condition. The maximum observed vertical velocity was ~0.35 m/s, a high value for a tidal channel, and the estimated vertical acceleration reached 1.76×10⁻² m/s². Analysis demonstrated that the barotropic non-hydrostatic acceleration was dominant. The cause of the non-hydrostatic flow was the flow over the steep slopes of the scour holes. This demonstrates that in such a system the bathymetric variation can lead to a breakdown of hydrostatic conditions; models with hydrostatic restrictions will not be able to correctly capture the dynamics of a system with significant bathymetric variations, particularly during strong tidal currents.
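
    As a rough plausibility check on the reported magnitudes, the advective part of the material vertical acceleration, u ∂w/∂x, can be estimated from a transect of vertical velocities. The numbers below are illustrative, chosen only to match the scales quoted above, not observed data.

```python
import numpy as np

# Illustrative along-channel transect over a 330 m scour hole.
x = np.linspace(0.0, 330.0, 34)              # m, along-channel position
u = np.full_like(x, 1.0)                     # m/s, assumed tidal current
w = 0.35 * np.sin(2.0 * np.pi * x / 330.0)   # m/s, upwelling then downwelling

# Steady advective estimate of dw/dt ≈ u * dw/dx
# (the local time-derivative term is omitted here).
accel = u * np.gradient(w, x)

print(f"max |dw/dt| ≈ {np.abs(accel).max():.1e} m/s^2")
# Of order 1e-2 m/s^2, comparable to the 1.76e-2 m/s^2 estimated in the study:
# small next to g = 9.81 m/s^2, but not small next to the perturbation
# pressure-gradient terms that govern the flow over the holes.
```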

  9. Why we still don't understand the social aspects of wind power: A critique of key assumptions within the literature

    International Nuclear Information System (INIS)

    Aitken, Mhairi

    2010-01-01

    The literature on public attitudes to wind power is underpinned by key assumptions which limit its scope and restrict the findings it can present. Five key assumptions are that: (1) The majority of the public supports wind power. (2) Opposition to wind power is therefore deviant. (3) Opponents are ignorant or misinformed. (4) The reason for understanding opposition is to overcome it. (5) Trust is key. The paper calls for critical reflection on each of these assumptions. It should not be assumed that opposition to wind power is deviant or illegitimate. Opposition cannot be dismissed as ignorant or misinformed; instead, it must be acknowledged that objectors are often very knowledgeable. Public attitudes and responses to wind power should not be examined in order to mitigate potential future opposition, but rather in order to understand the social context of renewable energy. Trust is identified as a key issue; however, greater trust must be placed in members of the public and in their knowledge. In sum, the literature must abandon the assumption that it knows who is 'right' and instead engage with the possibility that objectors to wind power are not always 'wrong'.

  10. Posttraumatic Growth and Shattered World Assumptions Among Ex-POWs

    DEFF Research Database (Denmark)

    Lahav, Y.; Bellin, Elisheva S.; Solomon, Z.

    2016-01-01

    Objective: The controversy regarding the nature of posttraumatic growth (PTG) includes two main competing claims: one which argues that PTG reflects authentic positive changes and the other which argues that PTG reflects illusory defenses. The former also suggests that PTG evolves from shattered world assumptions (WAs) and that the co-occurrence of high PTG and negative WAs among trauma survivors reflects reconstruction of an integrative belief system. The present study aimed to test these claims by investigating, for the first time, the mediating role of dissociation in the relation between PTG and WAs. Method: Former prisoners of war (ex-POWs; n = 158) and comparable controls (n = 106) were assessed 38 years after the Yom Kippur War. Results: Ex-POWs endorsed more negative WAs and higher PTG and dissociation compared to controls. Ex-POWs with posttraumatic stress disorder (PTSD...

  11. Monitoring long-lasting insecticidal net (LLIN) durability to validate net serviceable life assumptions, in Rwanda

    NARCIS (Netherlands)

    Hakizimana, E.; Cyubahiro, B.; Rukundo, A.; Kabayiza, A.; Mutabazi, A.; Beach, R.; Patel, R.; Tongren, J.E.; Karema, C.

    2014-01-01

    Background: To validate assumptions about the length of the distribution–replacement cycle for long-lasting insecticidal nets (LLINs) in Rwanda, the Malaria and Other Parasitic Diseases Division of the Rwanda Ministry of Health used World Health Organization methods to independently confirm the assumed three-year serviceable life of LLINs.

  12. Load assumption for fatigue design of structures and components counting methods, safety aspects, practical application

    CERN Document Server

    Köhler, Michael; Pötter, Kurt; Zenner, Harald

    2017-01-01

    Understanding the fatigue behaviour of structural components under variable load amplitude is an essential prerequisite for safe and reliable light-weight design. For designing and dimensioning, the expected stress (load) is compared with the capacity to withstand loads (fatigue strength). In this process, the safety necessary for each particular application must be ensured. A prerequisite for ensuring the required fatigue strength is a reliable load assumption. The authors describe the transformation of the stress- and load-time functions which have been measured under operational conditions to spectra or matrices with the application of counting methods. The aspects which must be considered for ensuring a reliable load assumption for designing and dimensioning are discussed in detail. Furthermore, the theoretical background for estimating the fatigue life of structural components is explained, and the procedures are discussed for numerous applications in practice. One of the prime intentions of the authors ...
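
    As an illustration of what such counting methods do, here is a minimal three-point rainflow sketch in Python. It simplifies the standard treatment (for example, the leftover residue is counted crudely as half cycles rather than per the full ASTM E1049 start-point rules), and all function names are ours, not from the book.

```python
def turning_points(series):
    """Reduce a load-time history to its sequence of peaks and valleys."""
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still rising/falling: extend the current run
        else:
            tp.append(x)        # direction reversed: new turning point
    return tp

def rainflow(series):
    """Simplified three-point rainflow count.

    Returns (range, mean, count) triples; closed cycles count 1.0 and the
    residue is counted as 0.5 (half) cycles.
    """
    stack, cycles = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # most recent range
            y = abs(stack[-2] - stack[-3])   # previous range
            if x < y:
                break                        # no cycle closed yet
            cycles.append((y, (stack[-2] + stack[-3]) / 2.0, 1.0))
            del stack[-3:-1]                 # drop the two points of the closed cycle
    for a, b in zip(stack, stack[1:]):       # residue as half cycles
        cycles.append((abs(b - a), (a + b) / 2.0, 0.5))
    return cycles

print(rainflow([0, 5, 1, 4, 2, 6, -1, 3]))
```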

  13. Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics

    Science.gov (United States)

    García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team

    2016-06-01

    We propose a simple approach to homogeneously estimating the kinematic parameters of a broad variety of galaxies (ellipticals, spirals, irregulars, or interacting systems). This methodology avoids the use of any kinematic model or any assumption about internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, the systemic velocity, the kinematic centre, and the kinematic position angle, all measured directly from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey.
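
    In the same model-free spirit, a systemic velocity and a kinematic position angle can be read directly off a two-dimensional radial-velocity map. The sketch below uses a synthetic disc and deliberately naive estimators (the median for the systemic velocity, the blueshifted-to-redshifted pixel direction for the position angle); it is our illustration, not the CALIFA team's actual procedure.

```python
import numpy as np

# Toy velocity field: a rotating disc sampled on a grid, with the kinematic
# centre at the origin and a known position angle (illustration only).
y, x = np.mgrid[-50:51, -50:51].astype(float)
true_pa = np.deg2rad(35.0)                   # PA from +y toward +x
v_sys = 1500.0                               # km/s
r = np.hypot(x, y) + 1e-9
vmap = v_sys + 200.0 * np.tanh(r / 15.0) * (
    np.sin(true_pa) * x + np.cos(true_pa) * y) / r

# Model-free estimates:
v_sys_est = np.median(vmap)                  # systemic velocity
iy_hi, ix_hi = np.unravel_index(np.argmax(vmap), vmap.shape)
iy_lo, ix_lo = np.unravel_index(np.argmin(vmap), vmap.shape)
pa_est = np.arctan2(x[iy_hi, ix_hi] - x[iy_lo, ix_lo],
                    y[iy_hi, ix_hi] - y[iy_lo, ix_lo])

print(f"v_sys ~ {v_sys_est:.0f} km/s, PA ~ {np.degrees(pa_est) % 180:.1f} deg")
```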

  14. On the derivation of approximations to cellular automata models and the assumption of independence.

    Science.gov (United States)

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model: the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model, and the assumption of independence between the states of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is based entirely on the assumption of independence.
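
    A toy version of the independence issue: a proliferation-only cellular automaton compared against the logistic mean-field equation that follows from assuming site occupancies are independent. The model below is our illustration, not the one analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ca_density(L=200, steps=200, p=0.02, c0=0.05, reps=10):
    """1D proliferation-only CA on a ring: each occupied site attempts, with
    probability p per step, to place a daughter in a random neighbour site;
    the attempt succeeds only if that site is empty (volume exclusion)."""
    out = np.zeros(steps + 1)
    for _ in range(reps):
        grid = rng.random(L) < c0
        out[0] += grid.mean()
        for t in range(1, steps + 1):
            parents = np.flatnonzero(grid)
            rng.shuffle(parents)
            for i in parents:
                if rng.random() < p:
                    j = (i + rng.choice((-1, 1))) % L
                    if not grid[j]:
                        grid[j] = True
            out[t] += grid.mean()
    return out / reps

def mean_field(steps=200, p=0.02, c0=0.05):
    """Logistic update from the independence assumption: c' = c + p*c*(1 - c)."""
    c = np.empty(steps + 1)
    c[0] = c0
    for t in range(steps):
        c[t + 1] = c[t] + p * c[t] * (1.0 - c[t])
    return c

ca, mf = ca_density(), mean_field()
print("max |CA - mean field| =", np.abs(ca - mf).max())
# The discrepancy grows with crowding: neighbouring sites become correlated,
# which the independence (mean-field) assumption ignores.
```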

  15. FIGHTING THE CLASSICAL CRIME-SCENE ASSUMPTIONS. CRITICAL ASPECTS IN ESTABLISHING THE CRIME-SCENE PERIMETER IN COMPUTER-BASED EVIDENCE CASES

    Directory of Open Access Journals (Sweden)

    Cristina DRIGĂ

    2016-05-01

    Full Text Available Physical-world forensic investigation has the luxury of being tied to the sciences governing the investigated space, hence some assumptions can be made with a degree of certainty when investigating a crime. Cyberspace, on the other hand, has a dual nature, comprising both a physical layer susceptible to scientific analysis and a virtual layer governed entirely by the conventions established between the various actors involved at a certain moment in time, which define the actual digital landscape and constitute the layer where the facts relevant from a legal point of view occur. This distinct nature renders unusable many of the assumptions that legal professionals and courts of law are accustomed to operating with. The article intends to identify the most important features of cyberspace that have immediate legal consequences, with the purpose of establishing new and safe assumptions from the legal professional's perspective when cross-examining facts that occurred in cyberspace.

  16. Tests of data quality, scaling assumptions, and reliability of the Danish SF-36

    DEFF Research Database (Denmark)

    Bjorner, J B; Damsgaard, M T; Watt, T

    1998-01-01

    We used general population data (n = 4084) to examine data completeness, response consistency, tests of scaling assumptions, and reliability of the Danish SF-36 Health Survey. We compared traditional multitrait scaling analyses to analyses using polychoric correlations and Spearman correlations... with chronic diseases excepted). Concerning correlation methods, we found interesting differences indicating the advantages of using methods that do not assume a normal distribution of answers in addition to traditional methods.

  17. Factor structure and concurrent validity of the world assumptions scale.

    Science.gov (United States)

    Elklit, Ask; Shevlin, Mark; Solomon, Zahava; Dekel, Rachel

    2007-06-01

    The factor structure of the World Assumptions Scale (WAS) was assessed by means of confirmatory factor analysis. The sample comprised 1,710 participants who had been exposed to trauma that resulted in whiplash. Four alternative models were specified and estimated using LISREL 8.72. A correlated 8-factor solution was the best explanation of the sample data. The estimates of reliability of the eight subscales of the WAS ranged from .48 to .82. Scores from five subscales correlated significantly with trauma severity as measured by the Harvard Trauma Questionnaire, although the magnitude of the correlations was low to modest, ranging from .08 to -.43. It is suggested that the WAS has adequate psychometric properties for use in both clinical and research settings.

  18. Comparison of build-up region doses in oblique tangential 6 MV photon beams calculated by AAA and CCC algorithms in breast Rando phantom

    Science.gov (United States)

    Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.

    2016-03-01

    The purpose of this study is to compare, between two algorithms, the build-up region doses on the surface of a breast Rando phantom covered with bolus, the doses within the breast Rando phantom, and the doses in the lung, a heterogeneous region. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential-field technique with 6 MV photon beams and a total dose of 200 cGy in the breast Rando phantom with bolus coverage (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and in the breast phantom were closely matched between the two algorithms, with differences of less than 2%. However, AAA overestimated the dose in the lung (L2) relative to the CCC algorithm, with differences of 13.78% and 6.06% at 5 mm and 10 mm bolus thickness, respectively. The TLD measurements indicated underestimation in the build-up region and in the breast phantom, but overestimation of the doses in the lung (L2), when compared with the two treatment plans at both bolus thicknesses.
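
    For clarity on how such percentage differences are typically formed (the choice of reference algorithm matters), a minimal sketch with hypothetical dose values:

```python
# Percent-difference convention assumed here: CCC taken as the reference.
# The dose values are illustrative placeholders, not the study's data.
def pct_diff(d_test: float, d_ref: float) -> float:
    return 100.0 * (d_test - d_ref) / d_ref

d_aaa_lung, d_ccc_lung = 120.6, 106.0   # cGy, hypothetical lung (L2) point doses
print(f"AAA vs CCC: {pct_diff(d_aaa_lung, d_ccc_lung):+.2f} %")
```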

  19. Bayou Corne Sinkhole: Control Measurements of State Highway 70 in Assumption Parish, Louisiana : Research Project Capsule

    Science.gov (United States)

    2012-09-01

    The sinkhole located in northern Assumption Parish, Louisiana, threatens the stability of Highway 70, a state-maintained route. In order to monitor and mitigate potential damage effects on this infrastructure, the Louisiana Department of Trans...

  20. 77 FR 74353 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-12-14

    PENSION BENEFIT GUARANTY CORPORATION, 29 CFR Part 4022: Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits. AGENCY: Pension Benefit Guaranty Corporation. ... regulation will be 0.75 percent for the period during which a benefit is in pay status and 4.00 percent...