WorldWideScience

Sample records for tangentially geostrophic assumptions

  1. Geostrophic Vortex Dynamics

    Science.gov (United States)

    1988-10-01

Dinamica del Clima, Scuola internazionale di fisica Enrico Fermi, LXXXVIII, pp. 133-158. Saffman, P.G. and Szeto, R. (1980). Equilibrium shapes of a pair of … 10, 25-52. Salmon, R. (1982). Geostrophic Turbulence. In Topics in Ocean Physics, Scuola internazionale di fisica Enrico Fermi, Varenna, Italy, pp. 30-78.

  2. Quasi-geostrophic dynamo theory

    Science.gov (United States)

    Calkins, Michael A.

    2018-03-01

The asymptotic theory of rapidly rotating, convection-driven dynamos in a plane layer is discussed. A key characteristic of these quasi-geostrophic dynamos is that the Lorentz force is comparable in magnitude to the ageostrophic component of the Coriolis force, rather than the leading order component that yields geostrophy. This characteristic is consistent with both observations of planetary dynamos and numerical dynamo investigations, where the traditional Elsasser number Λ_T = O(1). Thus, while numerical dynamo simulations currently cannot access the strongly turbulent flows that are thought to be characteristic of planetary interiors, it is argued that they are in the appropriate geostrophically balanced regime provided that inertial and viscous forces are both small relative to the leading order Coriolis force. Four distinct quasi-geostrophic dynamo regimes are discussed, with each regime characterized by a unique magnetic to kinetic energy density ratio and differing dynamics. The axial torque due to the Lorentz force is shown to be asymptotically small for such quasi-geostrophic dynamos, suggesting that 'Taylor's constraint' represents an ambiguous measure of the primary force balance in a rapidly rotating dynamo.
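As a rough check on the Λ_T = O(1) statement above, the traditional Elsasser number Λ_T = σB²/(ρΩ) can be evaluated with order-of-magnitude values for a planetary core; the sketch below uses assumed Earth-like figures, not numbers from the record.

```python
# Illustrative estimate of the traditional Elsasser number
#   Lambda_T = sigma * B^2 / (rho * Omega),
# which the record above reports to be O(1) for planetary dynamos.
# All numerical values are rough, assumed order-of-magnitude figures
# for Earth's outer core, not data from the record.

sigma = 1.0e6      # electrical conductivity [S/m]
B = 2.0e-3         # typical core magnetic field strength [T]
rho = 1.0e4        # fluid density [kg/m^3]
Omega = 7.29e-5    # Earth's rotation rate [rad/s]

elsasser = sigma * B**2 / (rho * Omega)
print(f"Elsasser number: {elsasser:.2f}")   # ~5, i.e. O(1)
```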

  3. Effects of anisotropy in geostrophic turbulence

    Czech Academy of Sciences Publication Activity Database

    Hejda, Pavel; Reshetnyak, M.

    2009-01-01

Vol. 177, No. 3-4 (2009), pp. 152-160. ISSN 0031-9201. R&D Projects: GA AV ČR IAA300120704. Institutional research plan: CEZ:AV0Z30120515. Keywords: liquid core * thermal convection * geostrophic balance * cascade processes. Subject RIV: DE - Earth Magnetism, Geodesy, Geography. Impact factor: 1.993, year: 2009

  4. Energetics of geostrophic adjustment in rotating flow

    Science.gov (United States)

    Juan, Fang; Rongsheng, Wu

    2002-09-01

Energetics of geostrophic adjustment in rotating flow is examined in detail with a linear shallow water model. The initial unbalanced flows considered first fall under two classes. The first is similar to that adopted by Gill and is here referred to as a mass imbalance model, for the flow is initially motionless but has a sea surface displacement. The other is the same as that considered by Rossby and is referred to as a momentum imbalance model, since there is only a velocity perturbation in the initial field. The significant feature of the energetics of geostrophic adjustment for the above two extreme models is that, although the energy conversion ratio has a large case-to-case variability for different initial conditions, its value is bounded below by 0 and above by 1/2. Based on the discussion of the above extreme models, the energetics of adjustment for an arbitrary initial condition is investigated. It is found that the characteristics of the energetics of geostrophic adjustment mentioned above also apply to the adjustment of general unbalanced flow, under the condition that the energy conversion ratio is redefined as the conversion ratio between the change of kinetic energy and the potential energy of the deviational fields.
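The classical step-discontinuity (Rossby adjustment) problem is a concrete special case of the mass imbalance model above, and its closed-form adjusted state lets the energy conversion ratio be checked numerically; the minimal sketch below, with assumed parameter values, recovers a ratio of 1/3, inside the 0 to 1/2 bounds quoted in the record.

```python
import numpy as np

# Rossby adjustment of an initial step in surface elevation,
#   eta_i(x) = -eta0 * sign(x),
# in a linear rotating shallow-water layer of depth H (a Gill-type
# "mass imbalance" case). The adjusted geostrophic state is
#   eta_f(x) = -eta0 * sign(x) * (1 - exp(-|x|/a)),
#   v_f(x)   = -(g * eta0 / (f * a)) * exp(-|x|/a),
# with a = sqrt(g*H)/f the Rossby deformation radius.
# Parameter values are illustrative assumptions.

g, f, H, eta0 = 9.81, 1.0e-4, 1000.0, 1.0
a = np.sqrt(g * H) / f

x = np.linspace(-40.0 * a, 40.0 * a, 400_001)
dx = x[1] - x[0]
eta_i = -eta0 * np.sign(x)
eta_f = -eta0 * np.sign(x) * (1.0 - np.exp(-np.abs(x) / a))
v_f = -(g * eta0 / (f * a)) * np.exp(-np.abs(x) / a)

# Energies per unit length (density cancels in the ratio):
pe_released = np.sum(0.5 * g * (eta_i**2 - eta_f**2)) * dx
ke_gained = np.sum(0.5 * H * v_f**2) * dx

print(f"KE gained / PE released = {ke_gained / pe_released:.3f}")  # -> 1/3
```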

  5. Currents, Geostrophic, Aviso, 0.25 degrees, Global, Zonal

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Aviso Zonal Geostrophic Current is inferred from Sea Surface Height Deviation, climatological dynamic height, and basic fluid mechanics.

  6. Currents, Geostrophic, Aviso, 0.25 degrees, Global, Meridional

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Aviso Meridional Geostrophic Current is inferred from Sea Surface Height Deviation, climatological dynamic height, and basic fluid mechanics.
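Both Aviso records above infer surface geostrophic currents from sea surface height through the geostrophic balance, u_g = -(g/f) ∂η/∂y and v_g = (g/f) ∂η/∂x. A minimal sketch of that calculation on a synthetic 0.25-degree SSH field (the eddy, grid, and values are assumptions for illustration):

```python
import numpy as np

# Geostrophic currents from gridded sea surface height eta:
#   u_g = -(g / f) * d(eta)/dy   (zonal)
#   v_g =  (g / f) * d(eta)/dx   (meridional)
# The SSH field below is a synthetic Gaussian eddy on a 0.25-degree
# grid; all values are illustrative assumptions.

g = 9.81                                  # gravity [m/s^2]
omega = 7.292e-5                          # Earth's rotation rate [rad/s]
R = 6.371e6                               # Earth radius [m]

lat = np.arange(20.0, 40.0, 0.25)
lon = np.arange(-50.0, -30.0, 0.25)
LON, LAT = np.meshgrid(lon, lat)

eta = 0.2 * np.exp(-(((LON + 40.0) / 2.0) ** 2 + ((LAT - 30.0) / 2.0) ** 2))

f = 2.0 * omega * np.sin(np.radians(LAT))    # Coriolis parameter [1/s]

# Gradients per degree (axis 0 = latitude, axis 1 = longitude),
# then converted to gradients per metre:
deta_dlat, deta_dlon = np.gradient(eta, 0.25, 0.25)
m_per_deg = np.pi * R / 180.0
deta_dy = deta_dlat / m_per_deg
deta_dx = deta_dlon / (m_per_deg * np.cos(np.radians(LAT)))

u_g = -(g / f) * deta_dy                  # zonal current [m/s]
v_g = (g / f) * deta_dx                   # meridional current [m/s]
print(f"max speed: {np.hypot(u_g, v_g).max():.2f} m/s")
```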

  7. Transition to geostrophic convection: the role of the boundary conditions

    NARCIS (Netherlands)

    Kunnen, R.P.J.; Ostilla-Monico, Rodolfo; van der Poel, Erwin; Verzicco, Roberto; Lohse, Detlef

    2016-01-01

    Rotating Rayleigh–Bénard convection, the flow in a rotating fluid layer heated from below and cooled from above, is used to analyse the transition to the geostrophic regime of thermal convection. In the geostrophic regime, which is of direct relevance to most geo- and astrophysical flows, the system

  8. Do uniform tangential interfacial stresses enhance adhesion?

    Science.gov (United States)

    Menga, Nicola; Carbone, Giuseppe; Dini, Daniele

    2018-03-01

We present theoretical arguments, based on linear elasticity and thermodynamics, to show that interfacial tangential stresses in sliding adhesive soft contacts may lead to a significant increase of the effective energy of adhesion. A sizable expansion of the contact area is predicted in conditions corresponding to such a scenario. These results are easily explained and are valid under the assumptions that: (i) sliding at the interface does not lead to any loss of adhesive interaction and (ii) spatial fluctuations of frictional stresses can be considered negligible. Our results are seemingly supported by existing experiments, and show that frictional stresses may lead to an increase of the effective energy of adhesion depending on which conditions are established at the interface of contacting bodies in the presence of adhesive forces.

  9. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  10. Quasi-geostrophic dynamics in the presence of moisture gradients

    OpenAIRE

    Monteiro, Joy M.; Sukhatme, Jai

    2016-01-01

    The derivation of a quasi-geostrophic (QG) system from the rotating shallow water equations on a midlatitude beta-plane coupled with moisture is presented. Condensation is prescribed to occur whenever the moisture at a point exceeds a prescribed saturation value. It is seen that a slow condensation time scale is required to obtain a consistent set of equations at leading order. Further, since the advecting wind fields are geostrophic, changes in moisture (and hence, precipitation) occur only ...

  11. Referencing geostrophic velocities using ADCP data

    Directory of Open Access Journals (Sweden)

    Isis Comas-Rodríguez

    2010-06-01

Full Text Available Acoustic Doppler Current Profilers (ADCPs) have proven to be a useful oceanographic tool in the study of ocean dynamics. Data from D279, a transatlantic hydrographic cruise carried out in spring 2004 along 24.5°N, were processed, and lowered ADCP (LADCP) bottom track data were used to assess the choice of reference velocity for geostrophic calculations. The reference velocities from different combinations of ADCP data were compared to one another, and a reference velocity was chosen based on the LADCP data. The barotropic tidal component was subtracted to provide a final reference velocity estimated from the LADCP data. The resulting velocity fields are also shown. Further studies involving inverse solutions will include the reference velocity calculated here.
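The referencing step described in this record amounts to shifting each relative geostrophic profile by a constant so that it matches the tide-corrected LADCP reference velocity. A schematic sketch with synthetic profiles (none of the numbers are from the D279 data):

```python
import numpy as np

# Reference a relative geostrophic velocity profile (known from the
# thermal wind relation only up to a constant) using an LADCP-derived
# reference velocity, after removing the barotropic tide.
# All profiles and values below are synthetic placeholders.

z = np.linspace(0.0, -4000.0, 401)            # depth [m]
v_rel = 0.05 * np.exp(z / 1000.0)             # relative geostrophic velocity [m/s]

v_ladcp_ref = 0.02    # depth-averaged LADCP velocity [m/s]
v_tide = 0.005        # barotropic tidal component to subtract [m/s]

# Choose the constant offset so the depth mean of the referenced
# profile equals the tide-corrected LADCP reference velocity.
offset = (v_ladcp_ref - v_tide) - v_rel.mean()
v_abs = v_rel + offset

print(f"offset = {offset*100:.2f} cm/s, surface v = {v_abs[0]*100:.2f} cm/s")
```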

  12. An approximate geostrophic streamfunction for use in density surfaces

    Science.gov (United States)

    McDougall, Trevor J.; Klocker, Andreas

An approximate expression is derived for the geostrophic streamfunction in approximately neutral surfaces, $\varphi^n$, namely $\varphi^n = \tfrac{1}{2}\Delta p\,\tilde{\delta} - \tfrac{1}{12}\,\frac{T_b^{\Theta}}{\rho}\,\Delta\Theta\,(\Delta p)^2 - \int_0^p \tilde{\delta}\,dp'$. This expression involves the specific volume anomaly $\tilde{\delta}$ defined with respect to a reference point $(\tilde{S}, \tilde{\Theta}, \tilde{p})$ on the surface; $\Delta p$ and $\Delta\Theta$ are the differences in pressure and Conservative Temperature with respect to $\tilde{p}$ and $\tilde{\Theta}$, respectively, and $T_b^{\Theta}$ is the thermobaric coefficient. This geostrophic streamfunction is shown to be more accurate than previously available choices of geostrophic streamfunction, such as the Montgomery streamfunction. Also, by writing expressions for the horizontal differences on a regular horizontal grid of a localized form of the above geostrophic streamfunction, an over-determined set of equations is developed and solved to numerically obtain a very accurate geostrophic streamfunction on an approximately neutral surface; the remaining error in this streamfunction is caused only by neutral helicity.
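The expression above can be transcribed directly into code. The sketch below evaluates φⁿ at a single point; the thermobaric coefficient, the δ̃ profile, and all numerical values are assumed placeholders, and the pressure integral is approximated by the trapezoidal rule.

```python
import numpy as np

# Approximate geostrophic streamfunction on a density surface:
#   phi_n = 0.5*dp*delta_tilde
#           - (1/12)*(Tb/rho)*dTheta*dp**2
#           - integral_0^p delta_tilde dp'
# delta_tilde: specific volume anomaly w.r.t. a reference point on the
# surface; dp, dTheta: pressure / Conservative Temperature differences
# from that reference. All values below are illustrative assumptions.

Tb = 2.7e-12      # thermobaric coefficient [K^-1 Pa^-1] (assumed typical order)
rho = 1025.0      # density [kg/m^3]

p = 2.0e7                        # pressure at the evaluation point [Pa]
p_ref, Theta_ref = 1.5e7, 5.0    # reference pressure [Pa] and Conservative Temp [deg C]
Theta = 4.2                      # Conservative Temperature at the point [deg C]

dp = p - p_ref
dTheta = Theta - Theta_ref

# Placeholder profile of specific volume anomaly from 0 to p [m^3/kg]:
p_grid = np.linspace(0.0, p, 1000)
delta_profile = 1.0e-7 * np.exp(-p_grid / 1.0e7)
delta_tilde = delta_profile[-1]

# Trapezoidal approximation of the pressure integral:
integral = np.sum(0.5 * (delta_profile[:-1] + delta_profile[1:]) * np.diff(p_grid))

phi_n = 0.5 * dp * delta_tilde - (Tb / rho) * dTheta * dp**2 / 12.0 - integral
print(f"phi^n = {phi_n:.4e} m^2/s^2")
```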

  13. Random forcing of geostrophic motion in rotating stratified turbulence

    Science.gov (United States)

    Waite, Michael L.

    2017-12-01

    Random forcing of geostrophic motion is a common approach in idealized simulations of rotating stratified turbulence. Such forcing represents the injection of energy into large-scale balanced motion, and the resulting breakdown of quasi-geostrophic turbulence into inertia-gravity waves and stratified turbulence can shed light on the turbulent cascade processes of the atmospheric mesoscale. White noise forcing is commonly employed, which excites all frequencies equally, including frequencies much higher than the natural frequencies of large-scale vortices. In this paper, the effects of these high frequencies in the forcing are investigated. Geostrophic motion is randomly forced with red noise over a range of decorrelation time scales τ, from a few time steps to twice the large-scale vortex time scale. It is found that short τ (i.e., nearly white noise) results in about 46% more gravity wave energy than longer τ, despite the fact that waves are not directly forced. We argue that this effect is due to wave-vortex interactions, through which the high frequencies in the forcing are able to excite waves at their natural frequencies. It is concluded that white noise forcing should be avoided, even if it is only applied to the geostrophic motion, when a careful investigation of spontaneous wave generation is needed.
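Red noise with a prescribed decorrelation time scale τ is conventionally generated as a first-order autoregressive (Ornstein-Uhlenbeck) process, which approaches white noise as τ shrinks toward the time step. A minimal sketch (amplitude and time scales are assumed):

```python
import numpy as np

# AR(1) / Ornstein-Uhlenbeck red-noise forcing with decorrelation
# time tau; tau -> dt recovers nearly white noise, as discussed above.
# Amplitude and time step are illustrative assumptions.

rng = np.random.default_rng(0)

dt = 10.0            # model time step [s]
tau = 5000.0         # forcing decorrelation time scale [s]
sigma = 1.0e-5       # target forcing standard deviation
n_steps = 100_000

a = np.exp(-dt / tau)              # lag-1 autocorrelation
b = sigma * np.sqrt(1.0 - a**2)    # keeps the variance stationary

forcing = np.empty(n_steps)
forcing[0] = sigma * rng.standard_normal()
for n in range(1, n_steps):
    forcing[n] = a * forcing[n - 1] + b * rng.standard_normal()

# Empirical check: lag-1 autocorrelation should be ~ exp(-dt/tau)
r1 = np.corrcoef(forcing[:-1], forcing[1:])[0, 1]
print(f"lag-1 autocorrelation: {r1:.4f} (target {a:.4f})")
```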

  14. Retinal Changes Induced by Epiretinal Tangential Forces

    Directory of Open Access Journals (Sweden)

    Mario R. Romano

    2015-01-01

Full Text Available Two kinds of forces are active in vitreoretinal traction diseases: tangential and anterior-posterior forces. However, tangential forces are less characterized and classified in the literature compared to the anterior-posterior ones. Tangential epiretinal forces are mainly due to anomalous posterior vitreous detachment (PVD), vitreoschisis, vitreopapillary adhesion (VPA), and epiretinal membranes (ERMs). Anomalous PVD plays a key role in the formation of the tangential vectorial forces on the retinal surface as a consequence of gel liquefaction (synchysis) without sufficient and fast vitreous dehiscence at the vitreoretinal interface. The anomalous and persistent adherence of the posterior hyaloid to the retina can lead to vitreomacular/vitreopapillary adhesion or to the formation of avascular fibrocellular tissue (ERM) resulting from the proliferation and transdifferentiation of hyalocytes resident in the cortical vitreous remnants after vitreoschisis. The right interpretation of the forces involved in the epiretinal tangential tractions helps in a better definition of diagnosis, progression, prognosis, and surgical outcomes of vitreomacular interfaces.

  15. Construction of tangential injection NBI system

    International Nuclear Information System (INIS)

    Ohga, Tokumichi; Akino, Noboru; Ebisawa, Noboru

    1995-09-01

In the upgrade of JT-60, the vacuum vessel was modified to a larger bore. This larger bore yields a larger toroidal field ripple in the vicinity of the plasma surface, because the toroidal field coils are closer to the plasma. The ripple loss of injected neutral beams through the ripple field was then estimated to be 30-40% with the present NBI system, which injects the beams perpendicularly to the plasma. An effective way to decrease the ripple loss in the plasma is to inject the beams tangentially. Meanwhile, the JT-60 upgrade made it possible to use a horizontal port for tangential beam injection, because a group of outer horizontal poloidal coils, used as divertor coils in the former JT-60, was eliminated. The modification from perpendicular to tangential beamlines has been executed in four of the 14 beamline units. The four tangential beamlines are installed in two newly fabricated beamline tanks, positioned for co- and counter-injection, respectively. Most of the beamline components are reused, except a couple of cancellation coils. The modification to the tangential beamlines was completed in 1993, and beam injection experiments with the tangential beamlines have been conducted successfully since then. (author)

  16. Arctic Ocean surface geostrophic circulation 2003–2014

    Directory of Open Access Journals (Sweden)

    T. W. K. Armitage

    2017-07-01

    Full Text Available Monitoring the surface circulation of the ice-covered Arctic Ocean is generally limited in space, time or both. We present a new 12-year record of geostrophic currents at monthly resolution in the ice-covered and ice-free Arctic Ocean derived from satellite radar altimetry and characterise their seasonal to decadal variability from 2003 to 2014, a period of rapid environmental change in the Arctic. Geostrophic currents around the Arctic basin increased in the late 2000s, with the largest increases observed in summer. Currents in the southeastern Beaufort Gyre accelerated in late 2007 with higher current speeds sustained until 2011, after which they decreased to speeds representative of the period 2003–2006. The strength of the northwestward current in the southwest Beaufort Gyre more than doubled between 2003 and 2014. This pattern of changing currents is linked to shifting of the gyre circulation to the northwest during the time period. The Beaufort Gyre circulation and Fram Strait current are strongest in winter, modulated by the seasonal strength of the atmospheric circulation. We find high eddy kinetic energy (EKE congruent with features of the seafloor bathymetry that are greater in winter than summer, and estimates of EKE and eddy diffusivity in the Beaufort Sea are consistent with those predicted from theoretical considerations. The variability of Arctic Ocean geostrophic circulation highlights the interplay between seasonally variable atmospheric forcing and ice conditions, on a backdrop of long-term changes to the Arctic sea ice–ocean system. Studies point to various mechanisms influencing the observed increase in Arctic Ocean surface stress, and hence geostrophic currents, in the 2000s – e.g. decreased ice concentration/thickness, changing atmospheric forcing, changing ice pack morphology; however, more work is needed to refine the representation of atmosphere–ice–ocean coupling in models before we can fully

  17. Hydromagnetic quasi-geostrophic modes in rapidly rotating planetary cores

    DEFF Research Database (Denmark)

    Canet, E.; Finlay, Chris; Fournier, A.

    2014-01-01

The core of a terrestrial-type planet consists of a spherical shell of rapidly rotating, electrically conducting, fluid. Such a body supports two distinct classes of quasi-geostrophic (QG) eigenmodes: fast, primarily hydrodynamic, inertial modes with period related to the rotation time scale …, or shorter than, their oscillation time scale. Based on our analysis, we expect Mercury to be in a regime where the slow magnetic modes are of quasi-free decay type. Earth and possibly Ganymede, with their larger Elsasser numbers, may possess slow modes that are in the transition regime of weak diffusion…

  18. Classical Solutions to Semi-geostrophic System with Variable Coriolis Parameter

    Science.gov (United States)

    Cheng, Jingrui; Cullen, Michael; Feldman, Mikhail

    2018-01-01

We prove the short time existence and uniqueness of smooth solutions (in $C^{k+2,\alpha}$ with $k \geq 2$) to the 2-D semi-geostrophic system and the semi-geostrophic shallow water system with variable Coriolis parameter f and periodic boundary conditions, under the natural convexity condition on the initial data. The dual space used in the analysis of the semi-geostrophic system with constant f is not available in the variable Coriolis parameter case, and we develop a time-stepping procedure in Lagrangian coordinates in the physical space to overcome this difficulty.

  19. Variational four-dimensional analysis using quasi-geostrophic constraints

    Science.gov (United States)

    Derber, John C.

    1987-01-01

    A variational four-dimensional analysis technique using quasi-geostrophic models as constraints is examined using gridded fields as data. The analysis method uses a standard iterative nonlinear minimization technique to find the solution to the constraining forecast model which best fits the data as measured by a predefined functional. The minimization algorithm uses the derivative of the functional with respect to each of the initial condition values. This derivative vector is found by inserting the weighted differences between the model solution and the inserted data into a backwards integrating adjoint model. The four-dimensional analysis system was examined by applying it to fields created from a primitive equations model forecast and to fields created from satellite retrievals. The results show that the technique has several interesting characteristics not found in more traditional four-dimensional assimilation techniques. These features include a close fit of the model solution to the observations throughout the analysis interval and an insensitivity to the frequency of data insertion or the amount of data. The four-dimensional analysis technique is very versatile and can be extended to more complex problems with little theoretical difficulty.
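The gradient computation described above (weighted model-minus-data differences inserted into a backwards-integrating adjoint model) is easiest to see with a linear toy model, whose adjoint is simply the matrix transpose. The following sketch illustrates that idea, not the paper's quasi-geostrophic implementation; the model matrix, observations, and step sizes are assumptions.

```python
import numpy as np

# Variational 4-D analysis with a linear toy model x_{n+1} = M x_n.
# Cost: J(x0) = 0.5 * sum_n || x_n(x0) - y_n ||^2
# Gradient: integrate the misfits backwards with the adjoint M^T.
# M, the observations y_n, and the step count are illustrative.

rng = np.random.default_rng(1)
dim, n_steps = 4, 20
M = np.eye(dim) + 0.01 * rng.standard_normal((dim, dim))   # toy dynamics

# Synthetic "observations": a true trajectory plus noise
x_true = rng.standard_normal(dim)
xs, obs = x_true.copy(), []
for _ in range(n_steps):
    obs.append(xs + 0.01 * rng.standard_normal(dim))
    xs = M @ xs

def cost_and_grad(x0):
    # Forward integration, storing the trajectory
    traj, x = [], x0.copy()
    for _ in range(n_steps):
        traj.append(x.copy())
        x = M @ x
    misfits = [traj[n] - obs[n] for n in range(n_steps)]
    J = 0.5 * sum(m @ m for m in misfits)
    # Backward (adjoint) integration accumulating dJ/dx0
    lam = np.zeros(dim)
    for n in reversed(range(n_steps)):
        lam = M.T @ lam + misfits[n]
    return J, lam

# Steepest-descent iterations from a poor first guess
x0 = np.zeros(dim)
for _ in range(200):
    J, g = cost_and_grad(x0)
    x0 -= 0.02 * g
print(f"final cost {J:.4f}, error {np.linalg.norm(x0 - x_true):.4f}")
```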

  20. Experimental Highlights upon Tangential Percussions in Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Stelian Alaci

    2014-12-01

Full Text Available The paper presents a method for highlighting the presence of tangential percussions occurring in multibody systems. Tangential percussions are a relatively recently introduced concept, needed to explain the sudden change in the state of motion of two bodies interacting only along a direction in the common tangent plane. In the robotics domain, normal and tangential percussions are widely encountered by robotic hands at the moment of contact with the manipulated object.

  1. Conversational Characteristics of Children with Fragile X Syndrome: Tangential Language.

    Science.gov (United States)

    Sudhalter, Vicki; Belser, Richard C.

    2001-01-01

The production of tangential language during conversations was studied with people with fragile X syndrome (n=10), autism (n=10), and mental retardation not caused by fragile X (n=10). Tangential language was found to be more prevalent among those with fragile X compared to the control groups, especially within unsolicited comments.

  2. Flashback Analysis in Tangential Swirl Burners

    Directory of Open Access Journals (Sweden)

    Valera-Medina A.

    2011-10-01

Full Text Available Premixed lean combustion is widely used in combustion processes due to the benefits of good flame stability and blowoff limits coupled with low NOx emissions. However, the use of novel fuels and complex flows has increased concern about flashback, especially for syngas and highly hydrogen-enriched blends. Thus, this paper describes a combined practical and numerical approach to study the phenomenon in order to reduce the effect of flashback in a pilot-scale 100 kW tangential swirl burner. Natural gas is used to establish the baseline results and the effects of different parameter changes. The flashback phenomenon is studied with the use of high speed photography. The use of a central fuel injector demonstrates substantial benefits in terms of flashback resistance, eliminating coherent structures that may appear in the flow channels. The critical boundary velocity gradient is used for characterization, both via the original Lewis and von Elbe formula and via analysis using CFD and investigation of boundary layer conditions at the flame front.
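For fully developed laminar pipe flow, the boundary velocity gradient used in the Lewis and von Elbe criterion reduces to g = 4Q/(πR³), and flashback is expected when it falls below the critical value for the mixture. A back-of-the-envelope sketch under assumed values (not data from the burner study):

```python
import math

# Flashback criterion via the critical boundary velocity gradient
# (Lewis & von Elbe): flashback is expected when the wall velocity
# gradient g falls below a critical value g_c for the mixture.
# For laminar pipe flow, g = 4*Q / (pi * R**3).
# Flow rate, radius, and g_c below are illustrative assumptions.

Q = 2.0e-4        # volumetric flow rate [m^3/s]
R = 0.01          # duct radius [m]
g_c = 400.0       # critical gradient for the mixture [1/s] (assumed)

g_wall = 4.0 * Q / (math.pi * R**3)
print(f"wall velocity gradient: {g_wall:.0f} 1/s")
print("flashback expected" if g_wall < g_c else "flame held")
```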

  3. Tangential inlet supersonic separators: a novel apparatus for gas purification

    DEFF Research Database (Denmark)

    Wen, Chuang; Walther, Jens Honore; Yang, Yan

    2016-01-01

A novel supersonic separator with a tangential inlet is designed to remove the condensable components from gas mixtures. The dynamic parameters of natural gas in the supersonic separation process are numerically calculated using the Reynolds stress turbulence model with the Peng-Robinson real gas equation of state. A strong swirling flow can be generated by the tangential inlet, and it increases to a maximum of 200 m/s at the nozzle throat due to the decrease of the nozzle area in the converging part. The tangential velocity can maintain a value of about 160 m/s at the nozzle exit, and correspondingly generates a centrifugal acceleration of 3…

  4. Multiverse Assumptions and Philosophy

    OpenAIRE

    James R. Johnson

    2018-01-01

    Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with fundamental nature of reality, ideas that cannot be proven right or wrong) topics such as: infinity, duplicate yous, hypothetical fields, mo...

  5. Kinematic validation of a quasi-geostrophic model for the fast dynamics in the Earth's outer core

    Science.gov (United States)

    Maffei, S.; Jackson, A.

    2017-09-01

We derive a quasi-geostrophic (QG) system of equations suitable for the description of the Earth's core dynamics on interannual to decadal timescales. Over these timescales, rotation is assumed to be the dominant force and fluid motions are strongly invariant along the direction parallel to the rotation axis. The diffusion-free QG system derived here is similar to the one derived in Canet et al., but the projection of the governing equations on the equatorial disc is handled via vertical integration, and mass conservation is applied to the velocity field. Here we carefully analyse the properties of the resulting equations and validate them by neglecting the action of the Lorentz force in the momentum equation. We derive a novel analytical solution describing the evolution of the magnetic field under these assumptions in the presence of a purely azimuthal flow, and an alternative formulation that allows us to numerically solve the evolution equations with a finite element method. The excellent agreement we found with the analytical solution proves that numerical integration of the QG system is possible and that it preserves important physical properties of the magnetic field. Implementation of magnetic diffusion is also briefly considered.

  6. Multiverse Assumptions and Philosophy

    Directory of Open Access Journals (Sweden)

    James R. Johnson

    2018-02-01

Full Text Available Multiverses are predictions based on theories. Focusing on each theory’s assumptions is key to evaluating a proposed multiverse. Although accepted theories of particle physics and cosmology contain non-intuitive features, multiverse theories entertain a host of “strange” assumptions classified as metaphysical (outside objective experience, concerned with fundamental nature of reality, ideas that cannot be proven right or wrong) topics such as: infinity, duplicate yous, hypothetical fields, more than three space dimensions, Hilbert space, advanced civilizations, and reality established by mathematical relationships. It is easy to confuse multiverse proposals because many divergent models exist. This overview defines the characteristics of eleven popular multiverse proposals. The characteristics compared are: initial conditions, values of constants, laws of nature, number of space dimensions, number of universes, and fine tuning explanations. Future scientific experiments may validate selected assumptions; but until they do, proposals by philosophers may be as valid as theoretical scientific theories.

  7. Sensitivity Analysis Without Assumptions.

    Science.gov (United States)

    Ding, Peng; VanderWeele, Tyler J

    2016-05-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
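The bounding factor of this record has the closed form B = RR_EU·RR_UD/(RR_EU + RR_UD − 1), where RR_EU and RR_UD are the two sensitivity parameters; an unmeasured confounder with those relative risks can shrink an observed risk ratio by at most B. A short sketch with illustrative inputs:

```python
# Ding & VanderWeele bounding factor for sensitivity analysis:
#   B = (RR_EU * RR_UD) / (RR_EU + RR_UD - 1)
# RR_EU: max relative risk of the exposure on the unmeasured confounder(s)
# RR_UD: max relative risk of the confounder(s) on the outcome
# Confounding can explain away the observed risk ratio only if B >= RR_obs.
# The numerical inputs below are illustrative assumptions.

def bounding_factor(rr_eu: float, rr_ud: float) -> float:
    return (rr_eu * rr_ud) / (rr_eu + rr_ud - 1.0)

rr_obs = 2.0                     # observed exposure-outcome risk ratio
B = bounding_factor(3.0, 3.0)    # sensitivity parameters
adjusted = rr_obs / B            # smallest true risk ratio consistent with B

print(f"bounding factor B = {B:.2f}")
print(f"adjusted lower bound on true RR = {adjusted:.2f}")
print("confounding could explain away the effect" if B >= rr_obs
      else "effect not fully explained away")
```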

  8. A first approach of 3D Geostrophic Currents based on GOCE, altimetry and ARGO data

    Science.gov (United States)

    Sempere Beneyto, M. Dolores; Vigo, Isabel; Chao, Ben F.

    2016-04-01

The most recent advances in geoid determination, provided by the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission, together with the continuous monitoring of sea surface height by altimeters on board satellites and Argo data, make it possible to estimate ocean geostrophy in 3D. In this work, we present a first approach to the 3D geostrophic circulation of the North Atlantic region, from the surface down to 1500 m depth. It was computed for a 10-year period (2004-2014), using an observation-based approach that combines altimetry with temperature and salinity through the thermal wind equation, gridded at one-degree longitude and latitude resolution. For validation, the estimated 3D geostrophic circulation is compared with ocean circulation model simulations and/or in situ data, showing similar patterns in all cases.
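The thermal wind step of such an observation-based approach integrates the geostrophic shear, ∂v/∂z = −(g/(ρ₀f)) ∂ρ/∂x, downward from an altimetry-derived surface velocity. A schematic sketch with a synthetic density gradient (all fields and values are assumptions):

```python
import numpy as np

# Thermal-wind construction of subsurface geostrophic velocity:
#   dv/dz = -(g / (rho0 * f)) * d(rho)/dx
# Integrate the shear downward from an altimetry-derived surface
# velocity. The density gradient profile here is synthetic.

g, f, rho0 = 9.81, 1.0e-4, 1027.0
z = np.linspace(0.0, -1500.0, 151)          # depth levels [m]
dz = z[1] - z[0]                            # negative spacing

# Synthetic horizontal density gradient profile [kg/m^4]
drho_dx = -5.0e-6 * np.exp(z / 500.0)

v_surface = 0.30                            # surface velocity from altimetry [m/s]
shear = -(g / (rho0 * f)) * drho_dx         # dv/dz [1/s]

# Trapezoidal cumulative integration downward from the surface:
increments = 0.5 * (shear[1:] + shear[:-1]) * dz
v = v_surface + np.concatenate(([0.0], np.cumsum(increments)))

print(f"v at surface: {v[0]:.2f} m/s, v at 1500 m: {v[-1]:.2f} m/s")
```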

  9. Tangential finger forces use mechanical advantage during static grasping.

    Science.gov (United States)

    Slota, Gregory P; Latash, Mark L; Zatsiorsky, Vladimir M

    2012-02-01

When grasping and manipulating objects, the central controller utilizes the mechanical advantage of the normal forces of the fingers for torque production. Whether the same is valid for tangential forces is unknown. The main purpose of this study was to determine the patterns of finger tangential forces and the use of mechanical advantage as a control mechanism when dealing with objects of nonuniform finger positioning. A complementary goal was to explore the interaction of mechanical advantage (moment arm) and the role a finger has as a torque agonist/antagonist with respect to external torques (±0.4 N m). Five 6-df force/torque transducers measured finger forces while subjects held a prism handle (6 cm width × 9 cm height) with and without a single finger displaced 2 cm (handle width). The effect of increasing the tangential moment arm was significant: fingers with larger moment arms produced larger tangential forces (in >70% of trials) and hence created greater moments. Thus, the data provide evidence that the grasping system as a rule utilizes mechanical advantage for generating tangential forces. The increase in tangential force was independent of whether the finger was acting as a torque agonist or antagonist, revealing their effects to be additive.

  10. Ultrastructural morphometry using dual axes tangential scale: a technical revelation.

    Science.gov (United States)

    Rayat, C S

    2005-04-01

While performing ultrastructural morphometry, under- or overestimation of ultrastructural size can be avoided by using accurate measuring devices. Biological investigators have always relied on the conventional linear scale for baseline measurement of ultrastructural size parameters on electron micrographs, to project the dimensions of intracellular organelles or tissue components. Since it is not possible to measure decimal fractions of a millimetre with a linear scale, a 'dual axes tangential scale' has been designed for measuring ultrastructural image parameters on electron micrographs with an accuracy of 0.1 mm, minimizing the error in the finally computed size of an ultrastructural component. In an exercise using the 'dual axes tangential scale' and the 'conventional linear scale', measurement of glomerular basement membrane thickness (GBMT) as orthogonal intercepts across the GBM revealed a coefficient of variation of 4.4% with the dual axes tangential scale, compared to 10.9% with the linear scale, demonstrating the superiority of the dual axes tangential scale. Use of a mathematical formula rather than a nomogram is preferred. However, a 'slide guide, ultrastructure size calculator' could also be used for discerning ultrastructural size after measurement with the dual axes tangential scale.

  11. World Ocean Geostrophic Velocity Inverted from World Ocean Atlas 2013 with the P-Vector Method (NODC Accession 0121576)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset comprises 3D gridded climatological fields of geostrophic velocity inverted from World Ocean Atlas-2013 (WOA2013) temperature and salinity fields using...

  12. Contextuality under weak assumptions

    International Nuclear Information System (INIS)

    Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D

    2017-01-01

    The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove

  13. Countercurrent tangential chromatography for large-scale protein purification.

    Science.gov (United States)

    Shinkazh, Oleg; Kanani, Dharmesh; Barth, Morgan; Long, Matthew; Hussain, Daniar; Zydney, Andrew L

    2011-03-01

    Recent advances in cell culture technology have created significant pressure on the downstream purification process, leading to a "downstream bottleneck" in the production of recombinant therapeutic proteins for the treatment of cancer, genetic disorders, and cardiovascular disease. Countercurrent tangential chromatography overcomes many of the limitations of conventional column chromatography by having the resin (in the form of a slurry) flow through a series of static mixers and hollow fiber membrane modules. The buffers used in the binding, washing, and elution steps flow countercurrent to the resin, enabling high-resolution separations while reducing the amount of buffer needed for protein purification. The results obtained in this study provide the first experimental demonstration of the feasibility of using countercurrent tangential chromatography for the separation of a model protein mixture containing bovine serum albumin and myoglobin using a commercially available anion exchange resin. Batch uptake/desorption experiments were used in combination with critical flux data for the hollow fiber filters to design the countercurrent tangential chromatography system. A two-stage batch separation yielded the purified target protein at >99% purity with 94% recovery. The results clearly demonstrate the potential of using countercurrent tangential chromatography for the large-scale purification of therapeutic proteins. Copyright © 2010 Wiley Periodicals, Inc.

  14. Linking assumptions in amblyopia

    Science.gov (United States)

    LEVI, DENNIS M.

    2017-01-01

    Over the last 35 years or so, there has been substantial progress in revealing and characterizing the many interesting and sometimes mysterious sensory abnormalities that accompany amblyopia. A goal of many of the studies has been to try to make the link between the sensory losses and the underlying neural losses, resulting in several hypotheses about the site, nature, and cause of amblyopia. This article reviews some of these hypotheses, and the assumptions that link the sensory losses to specific physiological alterations in the brain. Despite intensive study, it turns out to be quite difficult to make a simple linking hypothesis, at least at the level of single neurons, and the locus of the sensory loss remains elusive. It is now clear that the simplest notion—that reduced contrast sensitivity of neurons in cortical area V1 explains the reduction in contrast sensitivity—is too simplistic. Considerations of noise, noise correlations, pooling, and the weighting of information also play a critically important role in making perceptual decisions, and our current models of amblyopia do not adequately take these into account. Indeed, although the reduction of contrast sensitivity is generally considered to reflect “early” neural changes, it seems plausible that it reflects changes at many stages of visual processing. PMID:23879956

  15. Testing Our Fundamental Assumptions

    Science.gov (United States)

    Kohler, Susanna

    2016-06-01

Science is all about testing the things we take for granted, including some of the most fundamental aspects of how we understand our universe. Is the speed of light in a vacuum the same for all photons regardless of their energy? Is the rest mass of a photon actually zero? A series of recent studies explore the possibility of using transient astrophysical sources for tests!

Explaining Different Arrival Times

[Artist's illustration of a gamma-ray burst, another extragalactic transient, in a star-forming region. NASA/Swift/Mary Pat Hrybyk-Keith and John Jones]

Suppose you observe a distant transient astrophysical source, like a gamma-ray burst or a flare from an active nucleus, and two photons of different energies arrive at your telescope at different times. This difference in arrival times could be due to several different factors, depending on how deeply you want to question some of our fundamental assumptions about physics:

Intrinsic delay: The photons may simply have been emitted at two different times by the astrophysical source.

Delay due to Lorentz invariance violation: Perhaps the assumption that all massless particles (even two photons with different energies) move at the exact same velocity in a vacuum is incorrect.

Special-relativistic delay: Maybe there is a universal speed for massless particles, but the assumption that photons have zero rest mass is wrong. This, too, would cause photon velocities to be energy-dependent.

Delay due to gravitational potential: Perhaps our understanding of the gravitational potential that the photons experience as they travel is incorrect, also causing different flight times for photons of different energies. This would mean that Einstein's equivalence principle, a fundamental tenet of general relativity (GR), is incorrect.

If we now turn this problem around, then by measuring the arrival time delay between photons of different energies from various astrophysical sources (the further away, the better) we can provide constraints on these

  16. Sea level anomaly on the Patagonian continental shelf: Trends, annual patterns and geostrophic flows

    Science.gov (United States)

    Saraceno, M.; Piola, A. R.; Strub, P. T.

    2016-01-01

We study the annual patterns and linear trend of satellite sea level anomaly (SLA) over the southwest South Atlantic continental shelf (SWACS) between 54°S and 36°S. Results show that south of 42°S the thermal steric effect explains nearly 100% of the annual amplitude of the SLA, while north of 42°S it explains less than 60%. This difference is due to the halosteric contribution. The annual wind variability plays a minor role over the whole continental shelf. The temporal linear trend in SLA ranges between 1 and 5 mm/yr (95% confidence level). The largest linear trends are found north of 39°S, at 42°S and at 50°S. We propose that in the northern region the large positive linear trends are associated with local changes in the density field caused by advective effects in response to a southward displacement of the South Atlantic High. The causes of the relatively large SLA trends in two southern coastal regions are discussed as a function of meridional wind stress and river discharge. Finally, we combined the annual cycle of SLA with the mean dynamic topography to estimate the absolute geostrophic velocities. This approach provides the first comprehensive description of the seasonal component of SWACS circulation based on satellite observations. The general circulation of the SWACS is northeastward, with stronger/weaker geostrophic currents in austral summer/winter. At all latitudes, geostrophic velocities are larger (up to 20 cm/s) close to the shelf-break and decrease toward the coast. This spatio-temporal pattern is more intense north of 45°S. PMID:27840784
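Linear SLA trends such as the 1-5 mm/yr values above come from least-squares fits to monthly SLA time series. A minimal sketch on a synthetic series (trend, annual cycle, and noise amplitudes are assumed):

```python
import numpy as np

# Least-squares linear trend of a monthly sea level anomaly (SLA)
# series, reported in mm/yr. The synthetic series combines a trend,
# an annual cycle, and noise; all amplitudes are assumptions.

rng = np.random.default_rng(42)
years = np.arange(0, 12, 1.0 / 12.0)                 # monthly samples, 12 years

true_trend = 3.0                                     # mm/yr
sla = (true_trend * years                            # secular trend
       + 40.0 * np.sin(2 * np.pi * years)            # annual cycle [mm]
       + 10.0 * rng.standard_normal(years.size))     # noise [mm]

trend, intercept = np.polyfit(years, sla, 1)
print(f"fitted trend: {trend:.2f} mm/yr (true: {true_trend} mm/yr)")
```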

  17. Toward an extended-geostrophic Euler-Poincare model for mesoscale oceanographic flow

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.S.; Newberger, P.A. [Oregon State Univ., Corvallis, OR (United States). Coll. of Oceanic and Atmospheric Sciences; Holm, D.D. [Los Alamos National Lab., NM (United States)

    1998-07-01

The authors consider the motion of a rotating, continuously stratified fluid governed by the hydrostatic primitive equations (PE). An approximate Hamiltonian (L1) model for small Rossby number ε is derived for application to mesoscale oceanographic flow problems. Numerical experiments involving a baroclinically unstable oceanic jet are utilized to assess the accuracy of the L1 model compared to the PE and to other approximate models, such as the quasigeostrophic (QG) and the geostrophic momentum (GM) equations. The results of the numerical experiments for moderate Rossby number flow show that the L1 model gives accurate solutions with errors substantially smaller than QG or GM.

  18. Absolute geostrophic velocities off the coast of Southern Peru as observed from glider data

    Science.gov (United States)

    Pietri, A.; Testor, P.; Echevin, V.; Chaigneau, A.; Mortier, L.; Eldin, G.; Grados, C.; Albert, A.

    2012-04-01

The upwelling system off southern Peru has been observed using autonomous underwater vehicles (Slocum gliders) during two glider missions in October-November 2008 (austral spring) and April-May 2010 (austral autumn). Cross-front sections carried out in the intense upwelling cell near 14°S provide information on the geostrophic transport variability. During the first mission, the glider completed nine consecutive sections of ~100 km down to 200 m depth perpendicular to the continental slope, allowing measurement of the equatorward surface jet. During the second, six sections of ~100 km down to 1000 m depth allowed characterization of the deeper vertical structure of the current system. Estimates of alongshore absolute geostrophic velocities were inferred from the density field and the glider drift between two dives. An equatorward surface current with a maximum of 30 cm/s was identified as the Peru Chile Current, and a subsurface poleward current with a maximum of 15 cm/s as the Peru Chile Undercurrent. In April-May 2010, a remarkable subsurface equatorward current of ~10 cm/s was observed above the continental slope between 250 and 1000 m depth. The coastal current system, more particularly the subsurface equatorward current, is tentatively linked to the signature of poleward propagating coastal trapped waves, as shown by regional model (ROMS) simulations.

  19. Incorporating geostrophic wind information for improved space–time short-term wind speed forecasting

    KAUST Repository

    Zhu, Xinxin

    2014-09-01

Accurate short-term wind speed forecasting is needed for the rapid development and efficient operation of wind energy resources. This is, however, a very challenging problem. Although on the large scale the wind speed is related to atmospheric pressure, temperature, and other meteorological variables, no improvement in forecasting accuracy was found by incorporating air pressure and temperature directly into an advanced space-time statistical forecasting model, the trigonometric direction diurnal (TDD) model. This paper proposes to incorporate the geostrophic wind as a new predictor in the TDD model. The geostrophic wind captures the physical relationship between wind and pressure through the observed approximate balance between the pressure gradient force and the Coriolis acceleration due to the Earth's rotation. Based on our numerical experiments with data from West Texas, our new method produces more accurate forecasts than does the TDD model using air pressure and temperature for 1- to 6-hour-ahead forecasts based on three different evaluation criteria. Furthermore, forecasting errors can be further reduced by using moving average hourly wind speeds to fit the diurnal pattern. For example, our new method obtains between 13.9% and 22.4% overall mean absolute error reduction relative to persistence in 2-hour-ahead forecasts, and between 5.3% and 8.2% reduction relative to the best previous space-time methods in this setting.
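The geostrophic wind follows from the balance between the pressure gradient force and the Coriolis acceleration: u_g = −(1/(ρf)) ∂p/∂y and v_g = (1/(ρf)) ∂p/∂x. A sketch of that computation on a synthetic gridded pressure field (grid, density, and pressure values are assumptions):

```python
import numpy as np

# Geostrophic wind from a gridded surface pressure field:
#   u_g = -(1 / (rho * f)) * dp/dy
#   v_g =  (1 / (rho * f)) * dp/dx
# Synthetic low-pressure centre on a uniform metric grid; illustrative only.

rho = 1.2                   # air density [kg/m^3]
f = 8.0e-5                  # Coriolis parameter at ~33 deg N [1/s]
dx = dy = 25_000.0          # grid spacing [m]

ny, nx = 40, 60
y, x = np.meshgrid(np.arange(ny) * dy, np.arange(nx) * dx, indexing="ij")

# A synthetic low-pressure centre [Pa]
p = 101_325.0 - 1_000.0 * np.exp(
    -((x - 7.5e5) ** 2 + (y - 5.0e5) ** 2) / (3.0e5) ** 2)

dp_dy, dp_dx = np.gradient(p, dy, dx)    # axis 0 = y, axis 1 = x
u_g = -dp_dy / (rho * f)
v_g = dp_dx / (rho * f)

speed = np.hypot(u_g, v_g)
print(f"max geostrophic wind speed: {speed.max():.1f} m/s")
```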

  20. Errors of Mean Dynamic Topography and Geostrophic Current Estimates in China's Marginal Seas from GOCE and Satellite Altimetry

    DEFF Research Database (Denmark)

    Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar

    2014-01-01

The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and satellite altimetry can provide very detailed and accurate estimates of the mean dynamic topography (MDT) and geostrophic currents in China's marginal seas, such as the newest high-resolution GOCE gravity field model GO… The errors of MDT and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results…

  1. Intensity modulated tangential beam irradiation of the intact breast

    International Nuclear Information System (INIS)

    Hong, L.; Hunt, M.; Chui, C.; Forster, K.; Lee, H.; Lutz, W.; Yahalom, J.; Kutcher, G.J.; McCormick, B.

    1997-01-01

    Purpose/Objective: The purpose of this study was to evaluate the potential benefits of intensity modulated tangential beams in the irradiation of the intact breast. The primary goal was to develop an intensity modulated treatment which would substantially decrease the dose to coronary arteries, lung and contralateral breast while still using a standard tangential beam arrangement. Improved target dose homogeneity, within the limits imposed by opposed fields, was also desired. Since a major goal of the study was the development of a technique which was practical for use on a large population of patients, the design of 'standard' intensity profiles analogous in function to conventional wedges was also investigated. Materials and Methods: Three dimensional treatment planning was performed using both conventional and intensity modulated tangential beams. Plans were developed for both the right and left breast for a range of patient sizes and shapes. For each patient, PTV, lung, heart, origin and peripheral branches of the coronary artery, and contralateral breast were contoured. Optimum tangential beam direction and shape were designed using Beams-Eye-View display and then used for both the conventional and intensity modulated plans. For the conventional plan, the optimum wedge combination and beam weighting were chosen based on the dose distribution in a single transverse plane through the field center. Intensity modulated plans were designed using an algorithm which allows the user to specify the prescribed, maximum and minimum acceptable doses and dose volume constraints for each organ of interest. Plans were compared using multiple dose distributions and DVHs. Results: Significant improvements in the doses to critical structures were achieved using the intensity modulated plan. Coronary artery dose decreased substantially for patients treated to the left breast. Ipsilateral lung and contralateral breast doses decreased for all patients. For one patient treated to

  2. Absolute geostrophic currents over the SR02 section south of Africa in December 2009

    Science.gov (United States)

    Tarakanov, Roman

    2017-04-01

The structure of the absolute geostrophic currents is investigated on the basis of CTD, SADCP and LADCP data over the hydrographic section occupied south of Africa from the Cape of Good Hope to 57°S along the Prime Meridian, and on the basis of satellite data on absolute dynamic topography (ADT) produced by Ssalto/Duacs and distributed by Aviso, with support from Cnes (http://www.aviso.altimetry.fr/duacs/). The section thus crossed the subtropical zone (at the junction of the subtropical gyres of the Indian and Atlantic oceans) and the Antarctic Circumpolar Current (ACC), and terminated at the northern periphery of the Weddell Gyre. A total of 87 stations were occupied, with CTD and LADCP profiling over the entire water column. The distance between stations was 20 nautical miles. Absolute geostrophic currents were calculated between each pair of CTD stations, with barotropic correction based on two methods: SADCP data and the ADT at these stations. The subtropical part of the section crossed a large segment of the Agulhas meander, already separated from the current and disintegrating into individual eddies. In addition, smaller formed cyclones and anticyclones of the Agulhas Current were also observed in this zone. These structural elements of the upper layer of the ocean currents do not penetrate deeper than 1000-1500 m. Oppositely directed barotropic currents with velocities up to 30 cm/s were observed below these depths, extending to the ocean bottom. Such large velocities agree well with the data from the bottom tracking of the lowered ADCP. These were the only reliable LADCP results, because of the high transparency of the deep waters of the subtropical zone. The total transport of absolute geostrophic currents across the section is estimated as 144 and 179 Sv to the east, based on the SADCP and ADT barotropic corrections, respectively. A transport of 4 (2) Sv to the east was observed on the northern periphery of the Weddell Gyre, 187 (182) Sv to
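Transports such as the 144 and 179 Sv figures above are area integrals of the cross-section velocity, with 1 Sv = 10⁶ m³/s. A schematic sketch on a synthetic velocity section (not the SR02 data):

```python
import numpy as np

# Volume transport through a hydrographic section:
#   T = integral of v over the section area, 1 Sv = 1e6 m^3/s.
# The velocity section below is a synthetic placeholder.

nz, nx = 100, 87
z = np.linspace(0.0, 4000.0, nz)            # depth [m]
x = np.linspace(0.0, 2.0e6, nx)             # along-section distance [m]

# Synthetic eastward absolute geostrophic velocity [m/s]
X, Z = np.meshgrid(x, z)
v = 0.08 * np.exp(-Z / 1500.0) * np.sin(np.pi * X / x[-1])

dz = z[1] - z[0]
dx = x[1] - x[0]
transport_sv = np.sum(v) * dz * dx / 1.0e6
print(f"net transport: {transport_sv:.1f} Sv")
```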

  3. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)

  4. Arctic-Mid-Latitude Linkages in a Nonlinear Quasi-Geostrophic Atmospheric Model

    Directory of Open Access Journals (Sweden)

    Dörthe Handorf

    2017-01-01

Full Text Available A quasi-geostrophic three-level T63 model of the wintertime atmospheric circulation of the Northern Hemisphere has been applied to investigate the impact of Arctic amplification (increase in surface air temperatures and loss of Arctic sea ice during the last 15 years) on the mid-latitude large-scale atmospheric circulation. The model demonstrates a mid-latitude response to an Arctic diabatic heating anomaly. A clear shift towards a negative phase of the Arctic Oscillation (AO−) during low sea-ice-cover conditions occurs, connected with weakening of mid-latitude westerlies over the Atlantic and colder winters over Northern Eurasia. Compared to reanalysis data, there is no clear model response with respect to the Pacific Ocean and North America.

  5. Accurate representation of geostrophic and hydrostatic balance in unstructured mesh finite element ocean modelling

    Science.gov (United States)

    Maddison, J. R.; Marshall, D. P.; Pain, C. C.; Piggott, M. D.

    Accurate representation of geostrophic and hydrostatic balance is an essential requirement for numerical modelling of geophysical flows. Potentially, unstructured mesh numerical methods offer significant benefits over conventional structured meshes, including the ability to conform to arbitrary bounding topography in a natural manner and the ability to apply dynamic mesh adaptivity. However, there is a need to develop robust schemes with accurate representation of physical balance on arbitrary unstructured meshes. We discuss the origin of physical balance errors in a finite element discretisation of the Navier-Stokes equations using the fractional timestep pressure projection method. By considering the Helmholtz decomposition of forcing terms in the momentum equation, it is shown that the components of the buoyancy and Coriolis accelerations that project onto the non-divergent velocity tendency are the small residuals between two terms of comparable magnitude. Hence there is a potential for significant injection of imbalance by a numerical method that does not compute these residuals accurately. This observation is used to motivate a balanced pressure decomposition method whereby an additional "balanced pressure" field, associated with buoyancy and Coriolis accelerations, is solved for at increased accuracy and used to precondition the solution for the dynamical pressure. The utility of this approach is quantified in a fully non-linear system in exact geostrophic balance. The approach is further tested via quantitative comparison of unstructured mesh simulations of the thermally driven rotating annulus against laboratory data. Using a piecewise linear discretisation for velocity and pressure (a stabilised P1P1 discretisation), it is demonstrated that the balanced pressure decomposition method is required for a physically realistic representation of the system.

  6. Mapping sub-surface geostrophic currents from altimetry and a fleet of gliders

    Science.gov (United States)

    Alvarez, A.; Chiggiato, J.; Schroeder, K.

    2013-04-01

Integrating the observations gathered by different platforms into a unique physical picture of the environment is a fundamental aspect of networked ocean observing systems. These are constituted by a spatially distributed set of sensors and platforms that simultaneously monitor a given ocean region. Remote sensing from satellites is an integral part of present ocean observing systems. Due to their autonomy, mobility and controllability, underwater gliders are envisioned to play a significant role in the development of networked ocean observatories. Exploiting the synergy between remote sensing and underwater gliders is expected to result in a better characterization of the marine environment than using these observational sources individually. This study investigates a methodology to estimate the three-dimensional distribution of geostrophic currents resulting from merging satellite altimetry and in situ samples gathered by a fleet of Slocum gliders. Specifically, the approach computes the volumetric, or three-dimensional, distribution of absolute dynamic height (ADH) that minimizes the total energy of the system while staying close to in situ observations and matching the absolute dynamic topography (ADT) observed from satellite at the sea surface. A three-dimensional finite element technique is employed to solve the minimization problem. The methodology is validated making use of the dataset collected during the field experiment called Rapid Environmental Picture-2010 (REP-10), carried out by the NATO Undersea Research Center (NURC) during August 2010. A marine region offshore of La Spezia (northwest coast of Italy) was sampled by a fleet of three coastal Slocum gliders. Results indicate that the geostrophic current field estimated from gliders and altimetry significantly improves the estimates obtained using only the data gathered by the glider fleet.

  7. Radial and tangential friction in heavy ion strongly damped collisions

    International Nuclear Information System (INIS)

    Jain, A.K.; Sarma, N.

    1979-01-01

Deeply inelastic heavy ion collisions have been successfully described in terms of a nucleon exchange mechanism between two nucleon clouds. This model has also predicted the large angular momentum that is induced in the colliding nuclei. However, computations were simplified in the earlier work by assuming that the friction was a perturbation on the elastic scattering trajectory. Results of a more rigorous calculation are reported, showing the effect of the trajectory modification on the energy transfer, the induced angular momentum, and the ratio of the radial to tangential friction coefficients. (auth.)

  8. Teaching the Pursuit of Assumptions

    Science.gov (United States)

    Gardner, Peter; Johnson, Stephen

    2015-01-01

    Within the school of thought known as Critical Thinking, identifying or finding missing assumptions is viewed as one of the principal thinking skills. Within the new subject in schools and colleges, usually called Critical Thinking, the skill of finding missing assumptions is similarly prominent, as it is in that subject's public examinations. In…

  9. Tandem collimators for the JET tangential gamma-ray spectrometer

    International Nuclear Information System (INIS)

    Soare, Sorin; Balshaw, Nick; Blanchard, Patrick; Craciunescu, Teddy; Croft, David; Curuia, Marian; Edlington, Trevor; Kiptily, Vasily; Murari, Andrea; Prior, Phil; Sanders, Steven; Syme, Brian; Zoita, Vasile

    2011-01-01

    The tangential gamma-ray spectrometer (TGRS) of the JET tokamak fusion facility is an important diagnostic for investigating the evolution of fast particles. A well-defined field of view is essential for the proper operation of the TGRS diagnostic, and it is to be provided by a rather complex system of collimators and shields for both neutron and gamma radiation. A conceptual design for this system has been carried out with the main design target of maximizing the signal-to-background ratio at the spectrometer detector, the ratio being defined in terms of the plasma-emitted gamma radiation and the gamma-ray background. As the first phase of the TGRS diagnostic upgrade, a set of two tandem collimators has been designed with the aim of providing a quasi-tangential field of view through the JET tokamak plasma. A modular design of the tandem system has been developed in order to allow for the construction of different configurations for deuterium and deuterium-tritium discharges. The internal structure of the collimators consists of nuclear-grade lead and high-density polyethylene slabs arranged in an optimized pattern. The performance of a simplified geometry of the tandem collimator configuration has been evaluated by neutron and photon transport calculations, and the numerical results show that the design parameters can be attained.

  10. Tangential stretching rate (TSR) analysis of non premixed reactive flows

    KAUST Repository

    Valorani, Mauro

    2016-10-16

    We discuss how the tangential stretching rate (TSR) analysis, originally developed and tested for spatially homogeneous systems (batch reactors), is extended to spatially non-homogeneous systems. To illustrate the effectiveness of the TSR diagnostics, we study the ignition transient in a non-premixed, reaction-diffusion model in mixture fraction space, whose dependent variables are temperature and mixture composition. The reactive mixture considered is syngas/air. A detailed H2/CO mechanism with 12 species and 33 chemical reactions is employed. We discuss two cases: one involving only kinetics, as a model of front propagation driven purely by spontaneous ignition; the other involving kinetics/diffusion coupling, as a model of a deflagration wave. We explore different aspects of the system dynamics, such as the relative roles of diffusion and kinetics, the evolution of the kinetic eigenvalues, and the tangential stretching rates computed by accounting for the combined action of diffusion and kinetics as well as for kinetics only. We propose criteria based on the TSR concept that allow us to identify the most ignitable conditions and to discriminate between spontaneous ignition and deflagration fronts.

  11. Do unreal assumptions pervert behaviour?

    DEFF Research Database (Denmark)

    Petersen, Verner C.

    After conducting a series of experiments involving economics students Miller concludes: "The experience of taking a course in microeconomics actually altered students' conceptions of the appropriateness of acting in a self-interested manner, not merely their definition of self-interest." Being… become taken for granted and tacitly included into theories and models of management. Guiding business and management to behave in a fashion that apparently makes these assumptions become "true". Thus in fact making theories and models become self-fulfilling prophecies. The paper elucidates some… of the basic assumptions underlying the theories found in economics. Assumptions relating to the primacy of self-interest, to resourceful, evaluative, maximising models of man, to incentive systems and to agency theory. The major part of the paper then discusses how these assumptions and theories may pervert…

  12. Intensity-modulated tangential beam irradiation of the intact breast

    International Nuclear Information System (INIS)

    Hong, L.; Hunt, M.; Chui, C.; Spirou, S.; Forster, K.; Lee, H.; Yahalom, J.; Kutcher, G.J.; McCormick, B.

    1999-01-01

    Purpose: To evaluate the potential benefits of intensity-modulated tangential beams in the irradiation of the intact breast. Methods and Materials: Three-dimensional treatment planning was performed on five left and five right breasts using standard wedged and intensity-modulated (IM) tangential beams. Optimal beam parameters were chosen using beams-eye-view display. For the standard plans, the optimal wedge angles were chosen based on dose distributions in the central plane calculated without inhomogeneity corrections, according to our standard protocol. Intensity-modulated plans were generated using an inverse planning algorithm and a standard set of target and critical structure optimization criteria. Plans were compared using multiple dose distributions and dose-volume histograms for the planning target volume (PTV), ipsilateral lung, coronary arteries, and contralateral breast. Results: Significant improvements in the doses to critical structures were achieved using intensity modulation. Compared with a standard wedged plan prescribed to 46 Gy, the dose from the IM plan encompassing 20% of the coronary artery region decreased by 25% (from 36 to 27 Gy) for patients treated to the left breast; the mean dose to the contralateral breast decreased by 42% (from 1.2 to 0.7 Gy); the ipsilateral lung volume receiving more than 46 Gy decreased by 30% (from 10% to 7%); and the volume of surrounding soft tissue receiving more than 46 Gy decreased by 31% (from 48% to 33%). Dose homogeneity within the target volume improved most in the superior and inferior regions of the breast (approximately 8%), although some decrease in the medial and lateral high-dose regions (approximately 4%) was also observed. Conclusion: Intensity modulation with a standard tangential beam arrangement significantly reduces the dose to the coronary arteries, ipsilateral lung, contralateral breast, and surrounding soft tissues. Improvements in dose homogeneity throughout the target volume can also be…
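
    The endpoints used in such comparisons are simple threshold statistics over the dose grid. A minimal sketch (hypothetical helper names and toy data, not the planning system's code) of a V>105%-type metric:

    import numpy as np

    def volume_above(dose, reference_dose, percent):
        """Fraction of voxels receiving more than `percent`% of the reference dose."""
        return float(np.mean(dose > reference_dose * percent / 100.0))

    dose = np.random.default_rng(1).normal(46.0, 1.5, size=(64, 64, 64))  # toy dose grid [Gy]
    print(volume_above(dose, 46.0, 105), volume_above(dose, 46.0, 110), dose.max())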

  13. Membrane tangential filtration technologies and applications for the agribusiness industry

    International Nuclear Information System (INIS)

    Leone, Gian Paolo; Russo, Claudio

    2015-01-01

    Membrane tangential filtration technologies are separation techniques based on semipermeable filters through which, under a driving force, it is possible to separate dissolved or suspended components according to their size and/or chemical-physical characteristics. At the laboratories of the ENEA Casaccia Research Centre, as part of the programme of the Biotechnology and Agro-industry Division, various membrane filtration processes for the food industry have been studied and developed. The problems have been approached from an overall sustainability perspective, always seeking to couple the purification treatment with the recovery and reuse of water and of high added-value components. The ultimate goal of the research is to close the production circuit, ensuring a zero-discharge cycle and in fact turning a so-called waste stream into a raw material from which to obtain new products.

  14. Euclidean Submanifolds via Tangential Components of Their Position Vector Fields

    Directory of Open Access Journals (Sweden)

    Bang-Yen Chen

    2017-10-01

    Full Text Available The position vector field is the most elementary and natural geometric object on a Euclidean submanifold. It plays important roles in physics, in particular in mechanics: in any equation of motion, the position vector x(t) is usually the most sought-after quantity because it defines the motion of a particle (i.e., a point mass), namely its location relative to a given coordinate system at a time t. This is a survey article; its purpose is to present recent results on Euclidean submanifolds associated with the tangential components of their position vector fields. In the last section, we present some interactions between torqued vector fields and Ricci solitons.

  15. Self-similar spherical metrics with tangential pressure

    CERN Document Server

    Gair, J R

    2002-01-01

    A family of spherically symmetric spacetimes is discussed, which have anisotropic pressure and possess a homothetic Killing vector. The spacetimes are composed of dust with a tangential pressure provided by the angular momentum of the dust particles. The solution is given implicitly by an elliptic integral and depends on four arbitrary functions, which represent the initial configurations of angular momentum, mass, energy and position of the shells. The solution is derived by imposing self-similarity in the coordinates R, the shell label, and tau, the proper time experienced by the dust. Conditions for evolution without shell crossing and a description of singularity formation are given, and the types of solution are discussed. General properties of the solutions are illustrated by reference to a particular case, which represents a universe that exists for an infinite time, but in which every shell expands and recollapses in a finite time.

  16. The Versajet water dissector: a new tool for tangential excision.

    Science.gov (United States)

    Klein, Matthew B; Hunter, Sue; Heimbach, David M; Engrav, Loren H; Honari, Shari; Gallery, Ellen; Kiriluk, Diane-Marie; Gibran, Nicole S

    2005-01-01

    Goulian and Watson knives work well for tangential burn excision on large flat areas. They do not work well in small areas and in areas with a three-dimensional structure. The Versajet Hydrosurgery System (Smith and Nephew, Key Largo, FL) is a new waterjet-powered surgical tool designed for wound excision. The small size of the cutting nozzle and the ability to easily maneuver the water dissector into small spaces make it a potentially useful tool for excision of burns of the eyelids, digits and web spaces. The Versajet Hydrosurgery System contains a power console that propels saline through a handheld cutting device; this stream of pressurized saline functions as a knife. We have used the Versajet for burn excision in 44 patients. Although there is a learning curve both for surgeons using the device and for operating room staff setting it up, the Versajet provides a relatively facile method for excision of challenging aesthetic and functional areas.

  17. Subcritical thermal convection of liquid metals in a rotating sphere using a quasi-geostrophic model

    Science.gov (United States)

    Cardin, P.; Guervilly, C.

    2016-12-01

    We study non-linear convection in a rapidly rotating sphere with internal heating for values of the Prandtl number relevant for liquid metals (10⁻²–1). We use a numerical model based on the quasi-geostrophic approximation, in which variations of the axial vorticity along the rotation axis are neglected, whereas the temperature field is fully three-dimensional. We identify two separate branches of convection close to onset: (i) a well-known weak branch for Ekman numbers greater than 10⁻⁶, which is continuous at the onset (supercritical bifurcation) and consists of the interaction of thermal Rossby waves, and (ii) a novel strong branch at lower Ekman numbers, which is discontinuous at the onset. The strong branch becomes subcritical for Ekman numbers of the order of 10⁻⁸. On the strong branch, the Reynolds number of the flow is greater than 1000, and a strong zonal flow with multiple jets develops, even close to the non-linear onset of convection. We find that the subcriticality is amplified by decreasing the Prandtl number. The two branches can co-exist for intermediate Ekman numbers, leading to hysteresis (E = 10⁻⁶, Pr = 10⁻²). Non-linear oscillations are observed near the onset of convection for E = 10⁻⁷ and Pr = 10⁻¹.

  18. Subcritical convection of liquid metals in a rotating sphere using a quasi-geostrophic model

    Science.gov (United States)

    Guervilly, Céline; Cardin, Philippe

    2016-12-01

    We study nonlinear convection in a rapidly rotating sphere with internal heating for values of the Prandtl number relevant for liquid metals ($Pr\\in[10^{-2},10^{-1}]$). We use a numerical model based on the quasi-geostrophic approximation, in which variations of the axial vorticity along the rotation axis are neglected, whereas the temperature field is fully three-dimensional. We identify two separate branches of convection close to onset: (i) a well-known weak branch for Ekman numbers greater than $10^{-6}$, which is continuous at the onset (supercritical bifurcation) and consists of thermal Rossby waves, and (ii) a novel strong branch at lower Ekman numbers, which is discontinuous at the onset. The strong branch becomes subcritical for Ekman numbers of the order of $10^{-8}$. On the strong branch, the Reynolds number of the flow is greater than $10^3$, and a strong zonal flow with multiple jets develops, even close to the nonlinear onset of convection. We find that the subcriticality is amplified by decreasing the Prandtl number. The two branches can co-exist for intermediate Ekman numbers, leading to hysteresis ($Ek=10^{-6}$, $Pr=10^{-2}$). Nonlinear oscillations are observed near the onset of convection for $Ek=10^{-7}$ and $Pr=10^{-1}$.

  19. Convergence of Extreme Value Statistics in a Two-Layer Quasi-Geostrophic Atmospheric Model

    Directory of Open Access Journals (Sweden)

    Vera Melinda Gálfi

    2017-01-01

    Full Text Available We search for the signature of universal properties of extreme events, theoretically predicted for Axiom A flows, in a chaotic and high-dimensional dynamical system. We study the convergence of GEV (Generalized Extreme Value) and GP (Generalized Pareto) shape parameter estimates to the theoretical value, which is expressed in terms of the partial information dimensions of the attractor. We consider a two-layer quasi-geostrophic atmospheric model of the mid-latitudes, adopt two levels of forcing, and analyse the extremes of different types of physical observables (local energy, zonally averaged energy, and globally averaged energy). We find good agreement of the shape parameter estimates with the theory only in the case of the more intense forcing, corresponding to a strongly chaotic behaviour, and only for some observables (the local energy at every latitude). Due to the limited (though very large) data size and to the presence of serial correlations, it is difficult to obtain robust statistics of extremes for the other observables. In the case of weak forcing, which leads to weaker chaotic conditions with regime behaviour, we find, unsurprisingly, worse agreement with the theory developed for Axiom A flows.
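
    The estimation step described above can be reproduced in outline with standard tools. A minimal sketch (an assumed workflow, not the authors' code) of fitting the GEV shape parameter to block maxima with SciPy; note that SciPy's genextreme uses the shape convention c = -xi relative to the xi common in the extreme-value literature.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    series = rng.standard_exponential(100_000)      # stand-in for an energy observable
    maxima = series.reshape(-1, 1000).max(axis=1)   # block maxima (block length 1000)

    c, loc, scale = genextreme.fit(maxima)
    print("GEV shape xi =", -c)   # xi -> 0 expected for exponential tails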

  20. Energy and enstrophy spectra of geostrophic turbulent flows derived from a maximum entropy principle

    Science.gov (United States)

    Verkley, W. T. M.; Lynch, P.

    2009-04-01

    The principle of maximum entropy is used to obtain energy and enstrophy spectra as well as average vorticity fields in the context of geostrophic turbulence on a rotating sphere. In the unforced-undamped (inviscid) case the maximization of entropy is constrained by the (constant) energy and enstrophy of the system, leading to the familiar results of absolute statistical equilibrium. In the damped (freely decaying) and forced-damped case the maximization of entropy is constrained by either the decay rates of energy and enstrophy or by the energy and enstrophy in combination with their decay rates. Integrations with a numerical spectral model are used to check the theoretical results for the different cases. Maximizing the entropy, constrained by the energy and enstrophy, gives a very good description of the energy and enstrophy spectra in the inviscid case, in accordance with known results. Maximizing the entropy, constrained by the energy and enstrophy in combination with their decay rates, also leads to very good descriptions of the energy and enstrophy spectra in the freely decaying case, not too long after the damping has set in. In the forced-damped case, maximizing the entropy with the energy and enstrophy in combination with their (zero) decay rates as constraints, gives a reasonable description of the spectra although discrepancies remain.
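
    For reference, the absolute-equilibrium result alluded to above can be stated compactly. Maximizing the entropy subject to fixed energy and enstrophy introduces two Lagrange multipliers, alpha and beta, and yields (a standard planar result quoted up to normalization, not taken from this paper; on the sphere k² is replaced by n(n+1) for total wavenumber n)

    E(k) \propto \frac{k}{\alpha + \beta k^{2}},
    \qquad
    Z(k) = k^{2} E(k) \propto \frac{k^{3}}{\alpha + \beta k^{2}}.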

  1. Absolute Geostrophic Velocity Inverted from World Ocean Atlas 2013 (WOAV13) with the P-Vector Method

    Science.gov (United States)

    2015-11-01

    …The absolute geostrophic velocity, representing the large-scale ocean circulation, is calculated from the WOA13 (T, S) data using the P-vector inverse method (Chu…

  2. Synoptic Monthly Gridded WOD Absolute Geostrophic Velocity (SMG-WOD-V) (January 1945 - December 2014) with the P-Vector Method (NCEI Accession 0146195)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The SMG-WOD-V dataset comprises synoptic monthly global gridded fields of absolute geostrophic velocity inverted from the synoptic monthly gridded WOD temperature...

  3. Absolute Geostrophic Velocity Inverted from the Polar Science Center Hydrographic Climatology (PHC3.0) of the Arctic Ocean with the P-Vector Method (NCEI Accession 0156425)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset (called PHC-V) comprises 3D gridded climatological fields of absolute geostrophic velocity of the Arctic Ocean inverted from the Polar science center...

  4. The Axioms and Special Assumptions

    Science.gov (United States)

    Borchers, Hans-Jürgen; Sen, Rathindra Nath

    For ease of reference, the axioms, the nontriviality assumptions (3.1.10), the definition of a D-set and the special assumptions of Chaps. 5 and 6 are collected together in the following. The verbal explanations that follow the formal definitions a)-f) of (4.2.1) have been omitted. The entries below are numbered as they are in the text. Recall that βC is the subset of the cone C which, in a D-set, is seen to coincide with the boundary of C after the topology is introduced (Sects. 3.2 and 3.2.1).

  5. Larval Fish Assemblages and Geostrophic Circulation in Bahía de La Paz and the Surrounding Southwestern Region of the Gulf of California.

    Science.gov (United States)

    Sanchez-Velasco, L. M.; Beier, E.; Lavín, M. F.; Avalos, C.

    2007-05-01

    Using hydrographic data and zooplankton samples from four oceanographic cruises during May, July and October 2001 and February 2002, we analyze the relationship between larval fish assemblages and geostrophic flows in Bahía de La Paz and the neighbouring southwestern Gulf of California. The analysis of fish larvae distribution linked to geostrophic circulation is an innovative interdisciplinary approach to the understanding of fish larvae ecology. The Bray-Curtis Dissimilarity Index defined two types of larval fish assemblages with seasonal variations: (1) the Coastal assemblage, dominated by epipelagic coastal species like Sardinops caeruleus and Ophistonema spp.; and (2) the Oceanic assemblages (Oceanic and Transitional-Oceanic), both dominated by mesopelagic species like Vinciguerria lucetia, Diogenichthys laternatus and Benthosema panamense, but with different relative larval abundance and frequency of occurrence. The seasonal variations of the assemblages inside Bahía de La Paz appear to be related to the water exchanges between the bay and the Gulf of California. During July and October the geostrophic flow through the North Mouth is strong and, as a consequence, the Transitional-Oceanic assemblage covers practically the whole bay, while during February and May, when the geostrophic transport through the North Mouth is weak, the Coastal assemblage spreads over the whole bay. Results show that the spatial distribution of larval fish assemblages during the sampled periods is linked with the patterns of geostrophic circulation. This suggests that large amounts of fish larvae are transported by the geostrophic flow rather than by the total currents (Ekman plus geostrophic). The cyclonic geostrophic circulation pattern during the warm period was also detected with direct velocity observations from June to October 2004.
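
    The assemblage classification rests on the Bray-Curtis index, which is simple to compute. A minimal sketch (assumed, not the authors' code), with hypothetical station abundance vectors:

    import numpy as np

    def bray_curtis(x, y):
        """x, y: non-negative species-abundance vectors over the same species list."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        return np.abs(x - y).sum() / (x + y).sum()

    station_a = [34, 29, 0, 5]   # e.g. counts per species at a coastal station
    station_b = [2, 1, 40, 12]   # counts at an oceanic station
    print(bray_curtis(station_a, station_b))   # close to 1: very dissimilar stations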

  6. Fatigue strength of α-titanium alloys under combined alternating normal and tangential stresses

    International Nuclear Information System (INIS)

    Shamanin, Yu.A.

    1984-01-01

    Results are presented of a study of the fatigue strength of smooth specimens of the α-titanium alloy Ti-5Al-2.5V under combined loading by normal and tangential stresses. The experimental data are shown to be in good agreement with the criterion of the largest tangential stresses. Microscopic cracks propagate over the areas of maximum normal stresses.

  7. Challenged assumptions and invisible effects

    DEFF Research Database (Denmark)

    Wimmelmann, Camilla Lawaetz; Vitus, Kathrine; Jervelund, Signe Smith

    2017-01-01

    of two complete intervention courses and an analysis of the official intervention documents. Findings – This case study exemplifies how the basic normative assumptions behind an immigrant-oriented intervention and the intrinsic power relations therein may be challenged and negotiated by the participants...

  8. Portfolios: Assumptions, Tensions, and Possibilities.

    Science.gov (United States)

    Tierney, Robert J.; Clark, Caroline; Fenner, Linda; Herter, Roberta J.; Simpson, Carolyn Staunton; Wiser, Bert

    1998-01-01

    Presents a discussion between two educators of the history, assumptions, tensions, and possibilities surrounding the use of portfolios in multiple classroom contexts. Includes illustrative commentaries that offer alternative perspectives from a range of other educators with differing backgrounds and interests in portfolios. (RS)

  9. Sampling Assumptions in Inductive Generalization

    Science.gov (United States)

    Navarro, Daniel J.; Dry, Matthew J.; Lee, Michael D.

    2012-01-01

    Inductive generalization, where people go beyond the data provided, is a basic cognitive capability, and it underpins theoretical accounts of learning, categorization, and decision making. To complete the inductive leap needed for generalization, people must make a key "sampling" assumption about how the available data were generated.…

  10. Seasonal variability and geostrophic circulation in the eastern Mediterranean as revealed through a repeated XBT transect

    Directory of Open Access Journals (Sweden)

    V. Zervakis

    Full Text Available The evolution of the upper thermocline on a section across the eastern Mediterranean was recorded bi-weekly through a series of XBT transects from Piraeus, Greece to Alexandria, Egypt, extending from October 1999 to October 2000, on board Voluntary Observing Ships in the framework of the Mediterranean Forecasting System Pilot Project. The data acquired provided valuable information on the seasonal variability of the upper ocean thermal structure in three different regions of the eastern Mediterranean: the Myrtoan, Cretan and Levantine Seas. Furthermore, the horizontal distance (~12 miles) between successive profiles provides enough spatial resolution to analyze mesoscale features, while the temporal distance between successive expeditions (2–4 weeks) allows us to study their evolution. Sub-basin scale features are identified using contemporaneous sea surface temperature satellite images. The cross-transect geostrophic velocity field and corresponding volume fluxes for several sub-basin scale features of the Levantine Sea are estimated by exploiting monthly θ/S diagrams from operational runs of the Princeton Ocean Model in use at NCMR. A southwestward transport of 1–3 Sv was estimated in the proximity of the southeast tip of Crete. The transport increases after the winter formation of dense intermediate water in the Cretan Sea strengthens the pressure gradient across the Cretan Straits. The Mersah-Matruh anticyclone was identified as a closed gyre carrying about 2–6 Sv. This feature was stable throughout the stratified period and disappeared from our records in March 2000. Finally, our data reveal the existence of an eastward-flowing coastal current along the North African coast, transporting a minimum of 1–2 Sv.

    Key words. Oceanography: physical (eddies and mesoscale processes; currents; marginal and semi-closed seas)

  11. Dosimetric improvements following 3D planning of tangential breast irradiation

    International Nuclear Information System (INIS)

    Aref, Amr; Thornton, Dale; Youssef, Emad; He, Tony; Tekyi-Mensah, Samuel; Denton, Lori; Ezzell, Gary

    2000-01-01

    Purpose: To evaluate the dosimetric difference between a simple radiation therapy plan utilizing a single contour and a more complex three-dimensional (3D) plan utilizing multiple contours, lung inhomogeneity correction, and dose-based compensators. Methods and Materials: This is a study of the radiation therapy (RT) plans of 85 patients with early breast cancer. All patients were considered for breast-conserving management and treated by the conventional tangential fields technique. Two plans were generated for each patient. The first RT plan was based on a single contour taken at the central axis and utilized two wedges. The second RT plan was generated by using the 3D planning system to design dose-based compensators after lung inhomogeneity correction had been made. The endpoints of the study were the comparison of the volumes receiving greater than 105% and greater than 110% of the reference dose, as well as the magnitude of the treated-volume maximum dose. Dosimetric improvement was defined to be of significant value if the volume receiving > 105% in one plan was reduced by at least 50%, with the absolute difference between the volumes being 5% or greater. Results: The dosimetric improvements in 49 of the 3D plans (58%) were considered of significant value. Patients' field separation and breast size did not predict the magnitude of the improvement in dosimetry. Conclusion: Dose-based compensator plans significantly reduced the volumes receiving > 105% and > 110% as well as the volume maximum dose.

  12. Generation of a rotating liquid liner by tangential injection

    International Nuclear Information System (INIS)

    Burton, R.L.; Turchi, P.J.; Jenkins, D.J.; Lanham, R.E.; Cameron, J.; Cooper, A.L.

    1979-01-01

    Efficient compression of low mass-density payloads by the implosion of higher mass-density liquid cylinders or liners, as in the NRL LINUS concept for controlled thermonuclear fusion, requires rotation of the liner material to avoid Rayleigh--Taylor instabilities at the liner-payload interface. Experimentally, such implosions have been demonstrated with liners formed within rotating implosion chambers. The present work uses a scale-model experimental apparatus to investigate the possibility of creating liner rotation by tangential injection of the liquid liner material. Different modes of behavior are obtained depending on the fluid exhaust procedures. Right-circular, cylindrical free surfaces are achieved with axial exhaust of fluid at radii interior to the injection nozzles, for which the liner exhibits a combination of solid-body and free vortex flows in different regions. Measurements allow estimates of power losses to viscous shear, turbulence, etc. A simple model based on open-channel flow is then derived, which is in good agreement with experiment, and is used to extrapolate results to the scale of a possible LINUS fusion reactor

  13. Tangential Flow Filtration Technique: an Overview on Nanomedicine Applications.

    Science.gov (United States)

    Musumeci, Teresa; Leonardi, Antonio; Bonaccorso, Angela; Pignatello, Rosario; Puglisi, Giovanni

    2018-03-06

    Purification is a key step in many fields, ranging from food and biotechnology to pharmaceuticals. In biotechnology, tangential flow filtration (TFF) allows the separation of different cell components without instability phenomena. In the food industry, TFF ensures the removal of contaminants or other substances that negatively affect the visual appearance, organoleptic attributes, nutritional value and/or safety of foods. Purification is also an important and necessary step in controlling the quality of the final product in the pharmaceutical area. In the research and development of nanomedicines, several techniques are used to purify and/or concentrate batches for in vitro and in vivo application. Although many approaches exist, current data reveal continued unsatisfactory results. Among them, TFF has shown promising results, even though its use is currently uncommon compared with the other purification techniques usually reported in "materials and methods" sections. This review presents an overview of the different applications of TFF, from protein purification to food applications, with particular attention to the field of nanomedicine, from polymeric to metallic nanoparticles, highlighting the advantages and disadvantages of this technique. Theoretical aspects of the process are also examined. Copyright© Bentham Science Publishers.

  14. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption…

  15. Investigations of transonic buffet control on civil aircraft wing with the use of tangential jet blowing

    Science.gov (United States)

    Abramova, K. A.; Petrov, A. V.; Potapchick, A. V.; Soudakov, V. G.

    2016-10-01

    Numerical and experimental investigations of transonic buffet control by tangential jet blowing are presented. To suppress the shock-induced boundary-layer separation and the buffet at transonic speeds, a compressed-air jet is blown through a small slot nozzle tangentially to the upper surface of a supercritical airfoil. Numerical simulations were carried out on the basis of the unsteady Reynolds-averaged Navier-Stokes (URANS) equations. Experimental studies of the tangential jet blowing were performed in the transonic wind tunnel T-112 of TsAGI. Results show that the jet moves the shock downstream, increases lift, suppresses the flow separation under the shock foot, and delays buffet onset.

  16. An improved Abel inversion method modified for tangential interferometry in tokamak

    International Nuclear Information System (INIS)

    Ha, J.H.; Nam, Y.U.; Cheon, M.S.; Hwang, Y.S.

    2004-01-01

    An improved Abel inversion technique has been developed for accurate reconstruction of the electron density profile in a tangential interferometer system. A conventional slice-and-stack method has been modified in various ways for tangential interferometer data, and the results are compared for various density profiles. Among the variants, an improved double linear density inversion technique gives good reconstructions of all the tested density profiles, even with measurement errors accounted for. In particular, it provides better-reconstructed profiles at the edge. This technique has been successfully applied to the design of the tangential interferometer system for the KSTAR (Korea Superconducting Tokamak Advanced Research) tokamak.
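
    The conventional slice-and-stack idea that serves as the baseline here can be sketched in a few lines (illustrative Python, not the KSTAR code; the improved double linear density method modifies the shell representation rather than this baseline). The plasma is modelled as concentric shells of constant density, so each chord measurement becomes one row of a triangular linear system.

    import numpy as np

    def chord_matrix(radii):
        """Path length of chord i (impact parameter radii[i]) inside shell j."""
        n = len(radii) - 1
        L = np.zeros((n, n))
        for i in range(n):
            y = radii[i]
            for j in range(i, n):
                L[i, j] = 2.0 * (np.sqrt(radii[j + 1]**2 - y**2)
                                 - np.sqrt(max(radii[j]**2 - y**2, 0.0)))
        return L

    radii = np.linspace(0.0, 1.0, 41)                          # shell boundaries
    L = chord_matrix(radii)
    true_density = 1.0 - ((radii[:-1] + radii[1:]) / 2.0)**2   # parabolic test profile
    measured = L @ true_density                                # synthetic chord integrals
    recovered = np.linalg.solve(L, measured)                   # slice-and-stack inversion
    print(np.abs(recovered - true_density).max())              # ~1e-15 on noise-free data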

  17. The three-dimensional distributions of tangential velocity and total- temperature in vortex tubes

    DEFF Research Database (Denmark)

    Linderstrøm-Lang, C.U.

    1971-01-01

    The axial and radial gradients of the tangential velocity distribution are calculated from prescribed secondary flow functions on the basis of a zero-order approximation to the momentum equations developed by Lewellen. It is shown that secondary flow functions may be devised which meet pertinent physical requirements and which at the same time lead to realistic tangential velocity gradients. The total-temperature distribution in both the axial and radial directions is calculated from such secondary flow functions and corresponding tangential velocity results on the basis of an approximate…

  18. Assessment of cardiac exposure in left-tangential breast irradiation

    International Nuclear Information System (INIS)

    Vees, H.; Bigler, R.; Gruber, G.; Bieri, S.

    2011-01-01

    Purpose. - To assess the value of treatment-planning-related parameters, namely the breast volume, the distance of the inferior field border to the diaphragm, and the cardio-thoracic ratio, for left-tangential breast irradiation. Patients and methods. - Treatment plans of 27 consecutive left-sided breast cancer patients after breast-conserving surgery were evaluated for several parameters concerning heart irradiation. We measured the heart distance together with the cardio-thoracic ratio, the distance of the inferior field border to the diaphragm, and the breast volume, in correlation with the irradiated heart volume. Results. - The mean heart and left breast volumes were 504 cm³ and 672.8 cm³, respectively. The mean heart diameter was 13.4 cm, the mean cardio-thoracic ratio 0.51 and the mean distance of the inferior field border to the diaphragm 1.4 cm. Cardio-thoracic ratio (p = 0.01), breast volume (p = 0.0002), distance of the inferior field border to the diaphragm (p = 0.02) and central lung distance (p = 0.02) were significantly correlated with the measured heart distance. A significant correlation was also found between the cardio-thoracic ratio, breast volume and distance of the inferior field border to the diaphragm and the irradiated heart volume measured by V10, V20 and V40. Conclusion. - The verification of parameters like the cardio-thoracic ratio, the distance of the inferior field border to the diaphragm and the breast volume in left-sided breast cancer patients may help in determining which patients could benefit from more complex planning techniques such as intensity-modulated radiotherapy to reduce the risk of late cardiac injury. (authors)

  19. Operation of a tangential bolometer on the PBX tokamak

    International Nuclear Information System (INIS)

    Paul, S.F.; Fonck, R.J.; Schmidt, G.L.

    1987-04-01

    A compact 15-channel bolometer array that views plasma emission tangentially across the midplane has been installed on the PBX tokamak to supplement a 19-channel poloidal array that views the plasma perpendicular to the toroidal direction. By comparing measurements from these arrays, poloidal asymmetries in the emission profile can be assessed. The detector array consists of 15 discrete 2-mm x 2-mm Thinistors, a mixed semiconductor material whose temperature coefficient of resistance is relatively high. The heat accumulated by a detector gives rise to a change in the resistance of each active element. Each active element is operated in tandem with an identical blind detector, and the resistances of the pair are compared in a Wheatstone bridge circuit. The variation in voltage resulting from the change in resistance is amplified, stored on a CAMAC transient recorder during the plasma discharge, and transferred to a VAX data acquisition computer. The instantaneous power is obtained by digitally smoothing and differentiating the signals in time, with suitable compensation for the cooling of the detector over the course of a plasma discharge. The detectors are ''free standing,'' i.e., they are supported only by their electrical leads. Having no substrate in contact with the detector reduces the response time and increases the time it takes for the detector to dissipate its accumulated heat, reducing the compensation for cooling required in the data analysis. The detectors were absolutely calibrated with a tungsten-halogen filament lamp and were found to vary by ±3%. The irradiance profiles are inverted to reveal the radially resolved emitted power density of the plasma, which is typically in the 0.1 to 0.5 W/cm³ range.

  20. Bronchiolitis obliterans organizing pneumonia after tangential beam irradiation to the breast. Discrimination from radiation pneumonitis

    International Nuclear Information System (INIS)

    Nambu, Atsushi; Ozawa, Katsura; Kanazawa, Masaki; Miyata, Kazuyuki; Araki, Tsutomu; Ohki, Zennosuke

    2002-01-01

    We report a case of bronchiolitis obliterans organizing pneumonia (BOOP) secondary to tangential beam irradiation to the breast, which occurred seven months after the completion of radiotherapy. Although radiation pneumonitis is an alternative consideration, BOOP could be differentiated from it by its relatively late onset and extensive distribution, which did not respect the radiation field. This disease should always be kept in mind in patients with a history of tangential beam irradiation to the breast. (author)

  1. Multiple zonal jets and convective heat transport barriers in a quasi-geostrophic model of planetary cores

    Science.gov (United States)

    Guervilly, C.; Cardin, P.

    2017-10-01

    We study rapidly rotating Boussinesq convection driven by internal heating in a full sphere. We use a numerical model based on the quasi-geostrophic approximation for the velocity field, whereas the temperature field is 3-D. This approximation allows us to perform simulations for Ekman numbers down to 10⁻⁸, Prandtl numbers relevant for liquid metals (~10⁻¹) and Reynolds numbers up to 3 × 10⁴. Persistent zonal flows composed of multiple jets form as a result of the mixing of potential vorticity. For the largest Rayleigh numbers computed, the zonal velocity is larger than the convective velocity despite the presence of boundary friction. The convective structures and the zonal jets widen when the thermal forcing increases. Prograde and retrograde zonal jets are dynamically different: in the prograde jets (which correspond to weak potential vorticity gradients) the convection transports heat efficiently and the mean temperature tends to be homogenized; by contrast, in the cores of the retrograde jets (which correspond to steep gradients of potential vorticity) the dynamics is dominated by the propagation of Rossby waves, resulting in the formation of steep mean temperature gradients and the dominance of conduction in the heat transfer process. Consequently, in quasi-geostrophic systems, the width of the retrograde zonal jets controls the efficiency of the heat transfer.
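
    Although the abstract does not quote it, the customary yardstick for the width of jets produced by potential-vorticity mixing is the Rhines scale (a standard estimate, not a result of this paper),

    L_{\mathrm{Rh}} \sim \sqrt{U/\beta},

    where U is a typical zonal velocity and beta the background potential-vorticity gradient; the reported widening of the jets with increasing thermal forcing is consistent with the growth of U in this estimate.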

  2. Large-Scale Flows and Magnetic Fields Produced by Rotating Convection in a Quasi-Geostrophic Model of Planetary Cores

    Science.gov (United States)

    Guervilly, C.; Cardin, P.

    2017-12-01

    Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ~ 10⁴ and Ro ~ 10⁻⁴ for Prandtl numbers relevant for liquid metals (Pr ~ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.

  3. Dependence of the wind climate of Ireland on the direction distribution of geostrophic wind; Die Abhaengigkeit des Windklimas von Irland von der Richtungsverteilung des geostrophischen Windes

    Energy Technology Data Exchange (ETDEWEB)

    Frank, H.P. [Forskningcenter Risoe, Roskilde (Denmark). Afdelingen for Vindenergi og Atmosfaerefysik

    1998-01-01

    The wind climate of Ireland is calculated using the Karlsruhe Atmospheric Mesoscale Model KAMM. The dependence of the simulated wind energy on the direction distribution of the geostrophic wind is studied. As geostrophic winds from the south-west are most frequent, sites on the north-west coast are particularly suited for wind power stations. In addition, the mean geostrophic wind increases from the south-east to the north-west. (orig.)

  4. Interactive and Mutual Information among low-frequency variability modes of a quasi-geostrophic model

    Science.gov (United States)

    Pires, Carlos; Perdigão, Rui

    2013-04-01

    We assess the Shannon multivariate mutual information (MI) and interaction information (IT), either on a simultaneous or on a time-lagged (up to 3 months) basis, between low-frequency modes of an atmospheric, T63, 3-level, perpetual-winter-forced, quasi-geostrophic model. For that purpose, Principal Components (PCs) of the spherical-harmonic components of the monthly-mean stream-functions are used. Every single PC time series (of 1000 years length) is subjected to a prior Gaussian anamorphosis before computing MI and IT. That allows for unambiguously decomposing MI into the positive Gaussian term (depending on the Gaussian correlation) and the non-Gaussian MI term. We use a kernel-based MI estimator. Since marginal Gaussian PDFs are imposed, MI estimation is very robust even for short data records. Statistically significant non-Gaussian bivariate MI appears between the variance-dominating PC pairs of larger space and time scales, with evidence in the bivariate PDF of the mixing of PDFs centered at different weather regimes. The corresponding residual Gaussian MI is due to PCs being uncorrelated and to the weak non-Gaussianity of monthly-based PCs. The Gaussianized PCs in the tail of the variance spectrum (of faster variability) do not differ much from independent Gaussian white noises. Trivariate MI I(A,B,C) (also known as total correlation) is computed among simultaneous and time-lagged PCs A, B, C, as well as the interaction information IT(A,B,C) = I(A,B|C) - I(A,B) = I(A,C|B) - I(A,C) = I(B,C|A) - I(B,C), along with their Gaussian and non-Gaussian counterparts, where conditional MI is used. The corresponding non-Gaussian term allows for quantifying nonlinear predictability and causality. For example, we find interactive variable triads of positive non-Gaussian IT with A = X(t+tau), B = Y(t+tau), C = Z(t), where t is time, tau is the time lag and X, Y, Z are arbitrary PCs. Typically this occurs when X and Y are nearly independent while Z(t) is a mediator variable taking the role of a precursor…
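
    The preprocessing step is easy to reproduce in outline. A minimal Python sketch (assumed, not the authors' code) of the rank-based Gaussian anamorphosis and of the Gaussian MI term, which depends only on the correlation of the gaussianized series:

    import numpy as np
    from scipy.stats import norm, rankdata

    def gaussianize(x):
        """Map a 1-D series onto N(0,1) via its empirical ranks."""
        u = (rankdata(x) - 0.5) / len(x)
        return norm.ppf(u)

    def gaussian_mi(x, y):
        """Gaussian term of the MI of two gaussianized series, in nats."""
        rho = np.corrcoef(x, y)[0, 1]
        return -0.5 * np.log(1.0 - rho**2)

    rng = np.random.default_rng(0)
    z = rng.normal(size=5000)
    pc1 = gaussianize(np.exp(z))                    # monotone transform: ranks preserved
    pc2 = gaussianize(z + rng.normal(size=5000))
    print(gaussian_mi(pc1, pc2))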

  5. Seasonal variability of upper-layer geostrophic transport in the tropical Indian Ocean during 1992-1996 along TOGA-I XBT tracklines

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, V.S.N.; Sarma, M.S.S.; Lambata, B.P.; Gopalakrishna, V.V.; Pednekar, S.M.; Rao, A.S.; Luis, A.J.; Kaka, A.R.; Rao, L.V.G.

    …the climatological monthly mean temperature and salinity profiles (Levitus and Boyer, 1994) on 1°×1° grids. The upper ocean geostrophic currents between the XBT stations were derived and the seasonal variability of the upper-layer geostrophic transport of various… winter transition (September–November) seasons. XBT data were not available for April or August during the study period. The raw XBT data (export files) with recorded temperature at 0.65 m depth intervals were linearly interpolated to 1 m depth intervals…

  6. Effects of Geostrophic Kinetic Energy on the Distribution of Mesopelagic Fish Larvae in the Southern Gulf of California in Summer/Fall Stratified Seasons.

    Science.gov (United States)

    Contreras-Catala, Fernando; Sánchez-Velasco, Laura; Beier, Emilio; Godínez, Victor M; Barton, Eric D; Santamaría-Del-Angel, Eduardo

    2016-01-01

    Effects of the geostrophic kinetic energy flux on the three-dimensional distribution of fish larvae of mesopelagic species (Vinciguerria lucetia, Diogenichthys laternatus, Benthosema panamense and Triphoturus mexicanus) in the southern Gulf of California during summer and fall seasons of stronger stratification were analyzed. The greatest larval abundance was found at sampling stations in geostrophic kinetic energy-poor areas (<21 J/m³). In areas of high geostrophic kinetic energy (>21 J/m³), where mesoscale eddies were present, the larvae of the dominant species had low abundance and were spread more evenly through the water column, in spite of the water column stratification. For example, in a cyclonic eddy, V. lucetia larvae (34 larvae/10 m²) extended their distribution down to, at least, the sampling limit of 200 m depth, below the pycnocline, while D. laternatus larvae (29 larvae/10 m²) were found right up to the surface, both probably as a consequence of mixing and secondary circulation in the eddy. Results showed that the level of the geostrophic kinetic energy flux affects the abundance and the three-dimensional distribution of mesopelagic fish larvae during the seasons of stronger stratification, indicating that areas with low geostrophic kinetic energy may be advantageous for the feeding and development of mesopelagic fish larvae because of greater water column stability.

  7. Regional lung function impairment following post-operative radiotherapy for breast cancer using direct of tangential field techniques

    International Nuclear Information System (INIS)

    Groth, Steffen; Zaric, Aleksandra; Soerensen, P.B.; Larsen, Jytte; Soerensen, P.G.; Rossing, Niels

    1986-01-01

    The effect of tangential and direct irradiation on regional lung function was studied in 22 consecutive patients with breast cancer, treated by post-operative irradiation 3 months prior to examination. The tangential technique (total dose 32–36 Gy)… 99mTc-DTPA. Results were inconclusive, due to variable smoking habits. It is concluded that regional lung function was not significantly affected by the tangential technique, contrasting with a pronounced and harmful effect of the direct technique. (U.K.)

  8. Obstacle optimization for panic flow--reducing the tangential momentum increases the escape speed.

    Science.gov (United States)

    Jiang, Li; Li, Jingyu; Shen, Chao; Yang, Sicong; Han, Zhangang

    2014-01-01

    A disastrous form of pedestrian behavior is a stampede, which can occur in an event involving a large crowd in a panic situation. To deal with such stampedes, the possibility of increasing the outflow by suitably placing a pillar or other shaped obstacles in front of the exit has been demonstrated. We present a social-force-based genetic algorithm to optimize the design of architectural entities for dealing with large crowds. Unlike the existing literature, our simulation results indicate that appropriately placing two pillars on both sides of, but not in front of, the door maximizes the escape efficiency. Human experiments using 80 participants correspond well with the simulations. We observed a peculiar quantity, named the tangential momentum: the escape speed and the tangential momentum are negatively correlated. The idea of reducing the tangential momentum has practical implications for crowd architectural design.
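
    The diagnostic itself is straightforward to compute for a simulated crowd. A minimal sketch (hypothetical definitions; the paper's precise formula may differ) of the tangential momentum as the summed momentum component perpendicular to each pedestrian's direction toward the exit:

    import numpy as np

    def tangential_momentum(positions, velocities, masses, exit_pos):
        """positions, velocities: (N, 2) arrays; masses: (N,); exit_pos: (2,)."""
        to_exit = exit_pos - positions
        e_r = to_exit / np.linalg.norm(to_exit, axis=1, keepdims=True)  # radial unit vectors
        e_t = np.stack([-e_r[:, 1], e_r[:, 0]], axis=1)                 # tangential unit vectors
        return np.sum(np.abs(masses * np.einsum('ij,ij->i', velocities, e_t)))

    # Example: three 70 kg pedestrians heading roughly toward an exit at the origin.
    pos = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
    vel = np.array([[-1.0, 0.2], [0.1, -1.0], [-0.5, -0.5]])
    print(tangential_momentum(pos, vel, np.full(3, 70.0), np.zeros(2)))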

  9. Reduction of NOx emission in tangential fired - furnace by changing the, mode of operation

    International Nuclear Information System (INIS)

    Chudnovsky, B.; Talanker, A.; Levin, L.; Kahana, S

    1998-01-01

    The present work analyses the results of tests on 575 MW units with a tangential-firing furnace arrangement in sub-stoichiometric combustion. Tangential firing provides good conditions for implementing sub-stoichiometric combustion owing to the delivery scheme of pulverized coal and air. The furnace was tested in several different modes of operation (Over-Fire Air, Bunkers Out Of Service, excess air, tilt, etc.) to achieve low-cost NOx reduction. Actual performance data are presented based on experiments made on IEC's boiler in the M.D. 'B' power station.

  10. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this paper we investigate a method for testing the mar assumption in the presence of other distributional constraints. We present methods to (approximately) compute a test statistic consisting of the ratio of two profile likelihood functions. This requires the optimization of the likelihood under no assumptions on the missingness mechanism, for which we use our recently proposed AI & M algorithm. We present experimental results on synthetic data that show that our approximate test statistic is a good indicator for whether data is mar relative to the given distributional assumptions.

  11. How Symmetrical Assumptions Advance Strategic Management Research

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul; Hallberg, Hallberg

    2014-01-01

    We develop the case for symmetrical assumptions in strategic management theory. Assumptional symmetry obtains when assumptions made about certain actors and their interactions in one of the application domains of a theory are also made about this set of actors and their interactions in other...... application domains of the theory. We argue that assumptional symmetry leads to theoretical advancement by promoting the development of theory with greater falsifiability and stronger ontological grounding. Thus, strategic management theory may be advanced by systematically searching for asymmetrical...

  12. Separation of spermatozoa with a combination of pinched flow fraction and tangential filtration

    NARCIS (Netherlands)

    Berendsen, Johanna Theodora Wilhelmina; Eijkel, Jan C.T.; Segerink, Loes Irene

    2016-01-01

    We demonstrate a pinched flow tangential filtration method to sort spermatozoa from larger particles with a spermatozoa collection efficiency of 94±2% and a separation efficiency of 100%. In conventional pinched flow fractionation (PFF), an observed tumbling-like rotation of spermatozoa complicates

  13. Tangential Flow Filtration of Colloidal Silver Nanoparticles: A "Green" Laboratory Experiment for Chemistry and Engineering Students

    Science.gov (United States)

    Dorney, Kevin M.; Baker, Joshua D.; Edwards, Michelle L.; Kanel, Sushil R.; O'Malley, Matthew; Pavel Sizemore, Ioana E.

    2014-01-01

    Numerous nanoparticle (NP) fabrication methodologies employ "bottom-up" syntheses, which may result in heterogeneous mixtures of NPs or may require toxic capping agents to reduce NP polydispersity. Tangential flow filtration (TFF) is an alternative "green" technique for the purification, concentration, and size-selection of…

  14. Golgi analysis of tangential neurons in the lobula plate of Drosophila ...

    Indian Academy of Sciences (India)

    Unknown

    …plate have been identified and characterized (Heisenberg et al 1978; Bulthoff and Buchner 1985; Bausenwein et al …) … provided by Heisenberg et al (1978) while studying the optomotor-blindH31 (ombH31) mutation. … (c) A tangential element with three tufts of arbors (also see figure 3d). Two neurons of the twin vertical system …

  15. Initial boundary-value problem for the spherically symmetric Einstein equations with fluids with tangential pressure.

    Science.gov (United States)

    Brito, Irene; Mena, Filipe C

    2017-08-01

    We prove that, for a given spherically symmetric fluid distribution with tangential pressure on an initial space-like hypersurface with a time-like boundary, there exists a unique, local in time solution to the Einstein equations in a neighbourhood of the boundary. As an application, we consider a particular elastic fluid interior matched to a vacuum exterior.

  16. Rationale and Application of Tangential Scanning to Industrial Inspection of Hardwood Logs

    Science.gov (United States)

    Nand K. Gupta; Daniel L. Schmoldt; Bruce Isaacson

    1998-01-01

    Industrial computed tomography (CT) inspection of hardwood logs has some unique requirements not found in other CT applications. Sawmill operations demand that large volumes of wood be scanned quickly at high spatial resolution for extended duty cycles. Current CT scanning geometries and commercial systems have both technical and economic limitations. Tangential…

  17. Assumptions of Multiple Regression: Correcting Two Misconceptions

    Directory of Open Access Journals (Sweden)

    Matt N. Williams

    2013-09-01

    Full Text Available In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in PARE. This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression assumptions". While Osborne and Waters' efforts in raising awareness of the need to check assumptions when using regression are laudable, we note that the original article contained at least two fairly important misconceptions about the assumptions of multiple regression: firstly, that multiple regression requires the assumption of normally distributed variables; and secondly, that measurement errors necessarily cause underestimation of simple regression coefficients. In this article, we clarify that multiple regression models estimated using ordinary least squares require the assumption of normally distributed errors in order for trustworthy inferences, at least in small samples, but not the assumption of normally distributed response or predictor variables. Secondly, we point out that regression coefficients in simple regression models will be biased (toward zero) estimates of the relationships between variables of interest when measurement error is uncorrelated across those variables, but that when correlated measurement error is present, regression coefficients may be either upwardly or downwardly biased. We conclude with a brief corrected summary of the assumptions of multiple regression when using ordinary least squares.

  18. Wrong assumptions in the financial crisis

    NARCIS (Netherlands)

    Aalbers, M.B.

    2009-01-01

    Purpose - The purpose of this paper is to show how some of the assumptions about the current financial crisis are wrong because they misunderstand what takes place in the mortgage market. Design/methodology/approach - The paper discusses four wrong assumptions: one related to regulation, one to

  19. Use of recent geoid models to estimate mean dynamic topography and geostrophic currents in South Atlantic and Brazil Malvinas confluence

    Directory of Open Access Journals (Sweden)

    Alexandre Bernardino Lopes

    2012-03-01

    Full Text Available The use of geoid models to estimate the Mean Dynamic Topography (MDT) was stimulated by the launching of the GRACE satellite system, since its models present unprecedented precision and space-time resolution. In the present study, besides the DNSC08 mean sea level model, the following geoid models were used with the objective of computing the MDTs: EGM96, EIGEN-5C and EGM2008. In the method adopted, geostrophic currents for the South Atlantic were computed based on the MDTs. It was found that the degree and order of the geoid models directly affect the determination of the MDT and the currents. The presence of noise in the MDT requires the use of efficient filtering techniques, such as the filter based on Singular Spectrum Analysis, which presents significant advantages over conventional filters. Geostrophic currents resulting from the geoid models were compared with the HYCOM hydrodynamic numerical model. In conclusion, the results show that the MDTs and the respective geostrophic currents calculated with the EIGEN-5C and EGM2008 models are similar to the results of the numerical model, especially regarding the main large-scale features such as boundary currents and the retroflection at the Brazil-Malvinas Confluence.
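
    The step from an MDT to surface geostrophic currents uses the surface geostrophic balance, u = -(g/f) dη/dy and v = (g/f) dη/dx. A minimal sketch of that computation on a synthetic MDT grid (the grid, the Gaussian "eddy", and all values are illustrative, not taken from the EGM/EIGEN models):

```python
import numpy as np

g, omega, R = 9.81, 7.2921e-5, 6.371e6   # gravity, Earth rotation rate, Earth radius

# Synthetic MDT grid over the South Atlantic (degrees); a single Gaussian "eddy" in metres
lat = np.linspace(-50.0, -10.0, 81)
lon = np.linspace(-60.0, 20.0, 161)
LON, LAT = np.meshgrid(lon, lat)
mdt = 0.4 * np.exp(-(((LAT + 38) / 4) ** 2 + ((LON + 53) / 6) ** 2))

# Grid spacing in metres (spherical geometry)
dy = np.deg2rad(lat[1] - lat[0]) * R
dx = np.deg2rad(lon[1] - lon[0]) * R * np.cos(np.deg2rad(LAT))

f = 2 * omega * np.sin(np.deg2rad(LAT))              # Coriolis parameter
deta_dy = np.gradient(mdt, axis=0) / dy
deta_dx = np.gradient(mdt, axis=1) / dx

u = -(g / f) * deta_dy    # zonal geostrophic velocity (m/s)
v = (g / f) * deta_dx     # meridional geostrophic velocity (m/s)
```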

  20. Linear regression and the normality assumption.

    Science.gov (United States)

    Schmidt, Amand F; Finan, Chris

    2017-12-16

    Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and, worse, may bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
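
    The coverage experiment described above can be reproduced in a few lines. A hedged sketch (numpy only; the sample sizes and the skewed error distribution are illustrative, not the authors' simulation design):

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage(n, reps=2000, true_slope=0.5):
    """Fraction of 95% CIs for the OLS slope containing the true value,
    with strongly right-skewed (exponential) mean-zero errors."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        e = rng.exponential(scale=1.0, size=n) - 1.0
        y = true_slope * x + e
        xc = x - x.mean()
        sxx = np.sum(xc ** 2)
        b = np.sum(xc * (y - y.mean())) / sxx
        resid = y - y.mean() - b * xc
        se = np.sqrt(np.sum(resid ** 2) / (n - 2) / sxx)
        hits += abs(b - true_slope) < 1.96 * se
    return hits / reps

print(coverage(n=20))     # small sample: coverage can drift from 0.95
print(coverage(n=2000))   # large sample: close to 0.95 despite non-normal errors
```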

  1. A Two-Layer Quasi-Geostrophic Model of Summer Trough Formation in the Australian Subtropical Easterlies.

    Science.gov (United States)

    Fandry, C. B.; Leslie, L. M.

    1984-03-01

    C. B. Fandry, CSIRO Division of Oceanography, G.P.O. Box 1538, Hobart, Tasmania, Australia 7001; L. M. Leslie, Australian Numerical Meteorology Research Centre, G.P.O. Box 5089AA, Melbourne, Victoria, Australia 3001

    A dominant feature of the low-level easterly wind flow in the Australian subtropics during summer is the trough development that occurs on both the western and eastern sides of the continent. This phenomenon is investigated analytically with a two-level model. A generalized solution is derived from the steady-state quasi-geostrophic equations governing uniform flow over arbitrarily shaped orography. The model solutions indicate that orography acting alone is of only marginal importance in producing the western trough. However, the high east coast orography is of significance in the formation of the eastern trough. Land-sea temperature contrast is parameterized in terms of equivalent orography, with localized surface heating being mathematically equivalent to an orographic depression. The solutions including orography and surface heating acting together simulate well the lower-layer flow over the Australian subtropics.
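
    The steady-state quasi-geostrophic problem described here has a well-known barotropic analogue: linearizing conservation of the potential vorticity q = ∇²ψ + βy + f0·h/H about a uniform zonal flow U gives, in spectral space, ψ̂ = f0·ĥ / (H (K² − β/U)). A minimal FFT sketch of that stationary response (all parameter values illustrative, not the authors' two-level values; the resonant wavenumber ring is masked):

```python
import numpy as np

# Illustrative parameters (Southern Hemisphere, hence f0 < 0)
U = 10.0          # uniform zonal flow (m/s)
beta = 1.6e-11    # planetary vorticity gradient (1/m/s)
f0 = -8.0e-5      # Coriolis parameter (1/s)
H = 8.0e3         # fluid depth (m)

L, n = 8.0e6, 256
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x)
h = 1500.0 * np.exp(-(X**2 + Y**2) / (4.0e5) ** 2)   # Gaussian mountain (m)

k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
KX, KY = np.meshgrid(k, k)
K2 = KX**2 + KY**2

# Stationary-wave solution: psi_hat = f0 * h_hat / (H * (K^2 - beta/U))
denom = K2 - beta / U
denom[np.abs(denom) < 1e-13] = np.inf   # mask the resonant wavenumber ring
psi = np.real(np.fft.ifft2(f0 * np.fft.fft2(h) / (H * denom)))   # streamfunction (m^2/s)
```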

  2. Absolute Geostrophic Velocity Inverted from the Environmental Working Group (EWG) Joint U.S.-Russian Atlas of the Arctic Ocean with the P-Vector Method (NCEI Accession 0156424)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset (called EWG-V) comprises 3D gridded climatological fields of absolute geostrophic velocity inverted from the Environmental Working Group (EWG) Joint...

  3. Analysis of residual swirl in tangentially-fired natural gas-boiler

    International Nuclear Information System (INIS)

    Hasril Hasini; Muhammad Azlan Muad; Mohd Zamri Yusoff; Norshah Hafeez Shuaib

    2010-01-01

    This paper describes an investigation of the residual swirl flow in a 120 MW full-scale, tangentially-fired natural gas boiler. Emphasis is given to understanding the behavior of the combustion gas flow pattern and temperature distribution resulting from the tangential firing system of the boiler. The analysis was carried out using three-dimensional computational modeling of the full-scale boiler, with validation against key design parameters as well as practical observations. Actual operating parameters of the boiler were taken as the boundary conditions for the modeling. The predicted total heat flux was found to be in agreement with the key design parameter, while the residual swirl predicted in the upper furnace agrees qualitatively with practical observation. Based on this comparison, a detailed analysis was carried out to develop a comprehensive understanding of the generation and destruction of residual swirl in boilers, especially those of high capacity. (author)

  4. The impact of airwave on tangential and normal components of electric field in seabed logging data

    Science.gov (United States)

    Rostami, Amir; Soleimani, Hassan; Yahya, Noorhana; Nyamasvisva, Tadiwa Elisha; Rauf, Muhammad

    2016-11-01

    Seabed Logging (SBL) is a recently adopted application of the Controlled Source Electromagnetic (CSEM) method, based on the resistivity of the layers beneath the seafloor, used to delineate marine hydrocarbon reservoirs. In this method, an ultra-low-frequency electromagnetic (EM) wave is emitted by a straight electric dipole which moves parallel to the seabed. Following Maxwell's equations, waves reflected and refracted from the different layers are recorded by a receiver line lying on the seafloor, to define the contrast in amplitude and phase between the responses of an oil-bearing reservoir and the surrounding host rock. The main concern of the current work is to study the behavior of the airwave, the wave that propagates in the seawater, is guided by the sea surface, and is refracted back to the receiver line, and its impact on the tangential and normal components of the received electric field amplitude. It is reported that the airwave constitutes the most significant part of the tangential component, while it does not noticeably affect the normal component of the received electric field.

  5. Modified maximum tangential stress criterion for fracture behavior of zirconia/veneer interfaces.

    Science.gov (United States)

    Mirsayar, M M; Park, P

    2016-06-01

    The veneering porcelain sintered on zirconia is widely used in dental prostheses, but repeated mechanical loading may cause fractures such as edge chipping or delamination. In order to predict the crack initiation angle and fracture toughness of zirconia/veneer bi-layered components subjected to mixed-mode loading, the accuracy of a new criterion and of traditional fracture criteria is investigated. A modified maximum tangential stress criterion considering the effect of the T-stress and critical distance theory is introduced and compared to three traditional fracture criteria. Comparisons with recently published fracture test data show that the traditional fracture criteria are not able to properly predict the fracture initiation conditions in zirconia/veneer bi-material joints. The modified maximum tangential stress criterion provides more accurate predictions of the experimental results than the traditional fracture criteria. Copyright © 2015 Elsevier Ltd. All rights reserved.
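
    A common form of such a modified criterion evaluates the tangential stress at a critical distance r_c including the T-stress, σ_θθ = cos(θ/2)[K_I cos²(θ/2) − (3/2) K_II sin θ] / sqrt(2π r_c) + T sin²θ, and takes the kink angle that maximizes it. A hedged sketch of that construction (the generic generalized-MTS form, not necessarily the authors' exact formulation; all input values illustrative):

```python
import numpy as np

def sigma_tt(theta, KI, KII, T, rc):
    """Tangential stress at critical distance rc (generalized MTS with T-stress)."""
    singular = (np.cos(theta / 2) / np.sqrt(2 * np.pi * rc)
                * (KI * np.cos(theta / 2) ** 2 - 1.5 * KII * np.sin(theta)))
    return singular + T * np.sin(theta) ** 2

def initiation_angle(KI, KII, T, rc):
    """Kink angle maximizing the tangential stress (simple grid search)."""
    theta = np.linspace(-0.99 * np.pi, 0.99 * np.pi, 20001)
    return theta[np.argmax(sigma_tt(theta, KI, KII, T, rc))]

# Illustrative mixed-mode state: KI, KII in MPa*m^0.5, T in MPa, rc in m
print(np.degrees(initiation_angle(KI=1.0, KII=0.5, T=-10.0, rc=100e-6)))
```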

  6. Effect of tangential traction and roughness on crack initiation/propagation during rolling contact

    Science.gov (United States)

    Soda, N.; Yamamoto, T.

    1980-01-01

    Rolling fatigue tests of 0.45 percent carbon steel rollers were carried out using a four-roller type rolling contact fatigue tester. Tangential traction and the surface roughness of the harder mating rollers were varied and their effects were studied. The results indicate that the fatigue life decreases when traction is applied in the same direction as that of rolling. When the direction of traction is reversed, the life increases over that obtained with zero traction. The roughness of the harder mating roller also has a marked influence on life: the smoother the mating roller, the longer the life. Microscopic observation of specimens revealed that the initiation of cracks during the early stages of life is more strongly influenced by the surface roughness, while the propagation of these cracks in the latter stages is affected mainly by the tangential traction.

  7. Wall Thickness Measurement Of Insulated Pipe By Tangential Radiography Technique Using Ir 192

    International Nuclear Information System (INIS)

    Soedarjo

    2000-01-01

    Measurement of insulated pipe wall thickness by the tangential radiography technique has been carried out on two carbon steel pipes using an Iridium-192 source with an activity of 41 Ci. For the first pipe, the outer diameter is 90 mm, the insulation thickness is 75.0 mm, the source-to-film distance is 609.5 mm, the distance from the source to the tangential point of the insulation is 489.5 mm, and the exposure time is 3 minutes and 25 seconds. From the calculation, the wall thickness of the first pipe is found to be 12.54 mm and that of the second pipe 8.42 mm. The thickness discrepancy is due to inaccuracy in reading the pipe thickness on the radiographic film and to geometric distortion along the radiation path.
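
    In tangential radiography the wall thickness is read off the film and corrected for the geometric magnification of the tangent plane, M = SFD/SOD. A small sketch using the distances quoted in the record (the film reading below is an invented value, chosen so the result lands near the reported 12.54 mm):

```python
# Tangential radiography: correct a film measurement for geometric magnification.
SFD = 609.5          # source-to-film distance (mm), from the record
SOD = 489.5          # source to tangent point of the insulation (mm), from the record

M = SFD / SOD        # magnification at the tangent plane, ~1.245

t_film = 15.6        # wall thickness as read on the film (mm) -- illustrative value
t_true = t_film / M  # corrected wall thickness, ~12.5 mm
print(round(t_true, 2))
```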

  8. Combined tangential-normal vector elements for computing electric and magnetic fields

    International Nuclear Information System (INIS)

    Sachdev, S.; Cendes, Z.J.

    1993-01-01

    A direct method for computing electric and magnetic fields in two dimensions is developed. This method determines both the fields and fluxes directly from Maxwell's curl and divergence equations without introducing potential functions. This allows both the curl and the divergence of the field to be set independently in all elements. The technique is based on a new type of vector finite element that simultaneously interpolates to the tangential component of the electric or the magnetic field and the normal component of the electric or magnetic flux. Continuity conditions are imposed across element edges simply by setting like variables to be the same across element edges. This guarantees the continuity of the field and flux at the mid-point of each edge, and that for all edges the average value of the tangential component of the field and of the normal component of the flux is identical.

  9. Fuelling effect of tangential compact toroid injection in STOR-M Tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Onchi, T.; Liu, Y., E-mail: tao668@mail.usask.ca [Univ. of Saskatchewan, Dept. of Physics and Engineering Physics, Saskatoon, Saskatchewan (Canada); Dreval, M. [Univ. of Saskatchewan, Dept. of Physics and Engineering Physics, Saskatoon, Saskatchewan (Canada); Inst. of Plasma Physics NSC KIPT, Kharkov (Ukraine); McColl, D. [Univ. of Saskatchewan, Dept. of Physics and Engineering Physics, Saskatoon, Saskatchewan (Canada); Asai, T. [Inst. of Plasma Physics NSC KIPT, Kharkov (Ukraine); Wolfe, S. [Nihon Univ., Dept. of Physics, Tokyo (Japan); Xiao, C.; Hirose, A. [Univ. of Saskatchewan, Saskatoon, Saskatchewan (Canada)

    2012-07-01

    Compact torus injection (CTI) is the only known candidate for directly fuelling the core of a tokamak fusion reactor. Compact torus (CT) injection into the STOR-M tokamak has induced improved confinement accompanied by an increase in the electron density, reduction in Hα emission, and suppression of the saw-tooth oscillations. The measured change in the toroidal flow velocity following tangential CTI has demonstrated momentum injection into the STOR-M plasma. (author)

  10. Analysis of Burning Processes in Turbulent Mixing Axial and Tangential Flows

    Directory of Open Access Journals (Sweden)

    R. I. Essmann

    2009-01-01

    Full Text Available The paper demonstrates that, in the case of turbulent diffusion flame tongues, the burning process of combined multiphase fuel is determined by the flow structure and the conditions for mixing the various types of fuel and distributed oxidizer flows. It has been determined that the ratio of air supplied for burning through the axial and tangential channels governs the shape of the flame tongue, its size, and the process intensity, which allows technological parameters to be optimized efficiently.

  11. Optimizing the flame aerodynamics and the design of tangentially arranged burners in a TGMP-314 boiler

    Science.gov (United States)

    Zroichikov, N. A.; Prokhorov, V. B.; Arkhipov, A. M.; Kirichkov, V. S.

    2011-08-01

    Technical solutions for optimizing the flame aerodynamics and the design of tangentially arranged burners in a TGMP-314 boiler are proposed. The implementation of these solutions will make it possible to achieve more reliable operation of the boiler during fuel oil combustion, lower NOx emissions during the combustion of gas and fuel oil, and a somewhat lower excess air factor in the furnace.

  12. Meningeal defects alter the tangential migration of cortical interneurons in Foxc1hith/hith mice

    Directory of Open Access Journals (Sweden)

    Zarbalis Konstantinos

    2012-01-01

    Full Text Available Background: Tangential migration presents the primary mode of migration of cortical interneurons translocating into the cerebral cortex from subpallial domains. This migration takes place in multiple streams, with the most superficial one located in the cortical marginal zone. While a number of forebrain-expressed molecules regulating this process have emerged, it remains unclear to what extent structures outside the brain, like the forebrain meninges, are involved. Results: We studied a unique Foxc1 hypomorph mouse model (Foxc1hith/hith) with meningeal defects and impaired tangential migration of cortical interneurons. We identified a territorial correlation between meningeal defects and disruption of interneuron migration along the adjacent marginal zone in these animals, suggesting that impaired meningeal integrity might be the primary cause for the observed migration defects. Moreover, we postulate that the meningeal factor regulating tangential migration that is affected in homozygote mutants is the chemokine Cxcl12. In addition, by using chromatin immunoprecipitation analysis, we provide evidence that the Cxcl12 gene is a direct transcriptional target of Foxc1 in the meninges. Further, we observe migration defects of a lesser degree in Cajal-Retzius cells migrating within the cortical marginal zone, indicating a less important role for Cxcl12 in their migration. Finally, the developmental migration defects observed in Foxc1hith/hith mutants do not lead to obvious differences in interneuron distribution in the adult when compared to control animals. Conclusions: Our results suggest a critical role for the forebrain meninges in promoting, during development, the tangential migration of cortical interneurons along the cortical marginal zone, and Cxcl12 as the factor responsible for this property.

  13. Ammonia-methane combustion in tangential swirl burners for gas turbine power generation

    OpenAIRE

    Valera Medina, Agustin; Marsh, Richard; Runyon, Jon; Pugh, Daniel; Beasley, Paul; Hughes, Timothy Richard; Bowen, Philip John

    2017-01-01

    Ammonia has been proposed as a potential energy storage medium in the transition towards a low-carbon economy. This paper details experimental results and numerical calculations obtained to progress towards optimisation of fuel injection and fluidic stabilisation in swirl burners with ammonia as the primary fuel. A generic tangential swirl burner has been employed to determine flame stability and emissions produced at different equivalence ratios using ammonia–methane blends. Experiments were...

  14. GABA regulates the multidirectional tangential migration of GABAergic interneurons in living neonatal mice.

    Directory of Open Access Journals (Sweden)

    Hiroyuki Inada

    Full Text Available Cortical GABAergic interneurons originate from ganglionic eminences and tangentially migrate into the cortical plate at early developmental stages. To elucidate the characteristics of this migration of GABAergic interneurons in living animals, we established an experimental design specialized for in vivo time-lapse imaging of the neocortex of neonate mice with two-photon laser-scanning microscopy. In vesicular GABA/glycine transporter (VGAT)-Venus transgenic mice from birth (P0) through P3, we observed multidirectional tangential migration of genetically-defined GABAergic interneurons in the neocortical marginal zone. The properties of this migration, such as the motility rate (distance/hr), the direction moved, and the proportion of migrating neurons to stationary neurons, did not change from P0 to P3, although the density of GABAergic neurons in the marginal zone decreased with age. Thus, the characteristics of the tangential motility of individual GABAergic neurons remained constant in development. Pharmacological block of GABA(A) receptors and of the Na⁺-K⁺-Cl⁻ cotransporters, and chelating intracellular Ca²⁺, all significantly reduced the motility rate in vivo. The motility rate and GABA content within the cortex of neonatal VGAT-Venus transgenic mice were significantly greater than those of GAD67-GFP knock-in mice, suggesting that extracellular GABA concentration could facilitate the multidirectional tangential migration. Indeed, diazepam applied to GAD67-GFP mice increased the motility rate substantially. In an in vitro neocortical slice preparation, we confirmed that GABA induced an NKCC-sensitive depolarization of GABAergic interneurons in VGAT-Venus mice at P0-P3. Thus, activation of GABA(A)R by ambient GABA depolarizes GABAergic interneurons, leading to an acceleration of their multidirectional motility in vivo.

  15. Towards Noncommutative Topological Quantum Field Theory: Tangential Hodge-Witten cohomology

    International Nuclear Information System (INIS)

    Zois, I P

    2014-01-01

    Some years ago we initiated a program to define Noncommutative Topological Quantum Field Theory (see [1]). The motivation came both from physics and from mathematics: on the one hand, as far as physics is concerned, following the well-known holography principle of 't Hooft (which in turn appears essentially as a generalisation of the Hawking formula for black hole entropy), quantum gravity should be a topological quantum field theory. On the other hand, as far as mathematics is concerned, the motivation came from the idea of replacing the moduli space of flat connections with the Gabai moduli space of codimension-1 taut foliations for 3-dimensional manifolds. In most cases the latter is finite and much better behaved, and one might use it to define some version of Donaldson-Floer homology which, hopefully, would be easier to compute. The use of foliations brings noncommutative geometry techniques immediately into the game. The basic tools are two: cyclic cohomology of the corresponding foliation C*-algebra, and the so-called "tangential cohomology" of the foliation. A necessary step towards this goal is to develop some sort of Hodge theory both for cyclic (and Hochschild) cohomology and for tangential cohomology. Here we present a method to develop a Hodge theory for the tangential cohomology of foliations by mimicking Witten's approach to ordinary Morse theory by perturbations of the Laplacian.
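
    The "perturbations of the Laplacian" referred to here are, in Witten's original Morse-theoretic setting, conjugations of the exterior derivative by e^{tf} for a Morse function f; the record suggests carrying the same deformation over to the leafwise (tangential) complex. A sketch of the classical construction (standard material, not taken from the paper):

```latex
% Witten deformation of the de Rham complex by a Morse function f
d_t = e^{-tf}\, d\, e^{tf} = d + t\, df \wedge \cdot\,, \qquad t > 0,
\qquad
\Delta_t = d_t d_t^{*} + d_t^{*} d_t
         = \Delta + t^{2}\,\lvert \nabla f \rvert^{2} + t \cdot (\text{Hessian terms}).
```

    As t grows, the eigenforms of Δ_t with small eigenvalue concentrate near the critical points of f, recovering the Morse complex; in the foliated setting, d is replaced by the leafwise differential along the leaves of the foliation.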

  16. The quality assessment of radial and tangential neutron radiography beamlines of TRR

    Science.gov (United States)

    Choopan Dastjerdi, M. H.; Movafeghi, A.; Khalafi, H.; Kasesaz, Y.

    2017-07-01

    To achieve a quality neutron radiographic image in a relatively short exposure time, the neutron radiography beam must be of good quality and relatively high neutron flux. Characterization of a neutron radiography beam, such as determination of the image quality and the neutron flux, is vital for producing quality radiographic images and also provides a means to compare the quality of different neutron radiography facilities. This paper provides a characterization of the radial and tangential neutron radiography beamlines at the Tehran research reactor. This work includes determination of the facilities' category according to the American Society for Testing and Materials (ASTM) standards, and also uses gold foils to determine the neutron beam flux. The radial neutron beam is a Category I neutron radiography facility, the highest possible quality level according to the ASTM. The tangential beam is a Category IV neutron radiography facility. Gold foil activation experiments show that the measured neutron flux for the radial beamline with length-to-diameter ratio (L/D) = 150 is 6.1 × 10^6 n cm^-2 s^-1 and for the tangential beamline with (L/D) = 115 is 2.4 × 10^4 n cm^-2 s^-1.
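
    Gold-foil flux measurements invert the activation equation φ = A_sat / (N σ), with saturation activity A_sat = A / (1 − e^(−λ t_irr)). A schematic sketch (the Au-197 nuclear data are standard; the foil mass, irradiation time, and measured activity are invented for illustration, not the facility's values):

```python
import numpy as np

# Au-197(n,gamma)Au-198 activation: standard nuclear data
sigma = 98.65e-24             # thermal activation cross-section (cm^2)
half_life = 2.695 * 86400     # Au-198 half-life (s)
lam = np.log(2) / half_life

# Foil and irradiation parameters (illustrative)
m_foil = 0.010                          # foil mass (g)
N = m_foil / 196.97 * 6.022e23          # number of Au-197 atoms
t_irr = 3600.0                          # irradiation time (s)
A = 5.0e4                               # activity at end of irradiation (Bq)

A_sat = A / (1 - np.exp(-lam * t_irr))  # saturation activity (Bq)
phi = A_sat / (N * sigma)               # thermal neutron flux (n cm^-2 s^-1)
print(f"{phi:.2e}")
```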

  17. Tangential Biopsy Thickness versus Lesion Depth in Longitudinal Melanonychia: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Nilton Di Chiacchio

    2012-01-01

    Full Text Available Longitudinal melanonychia can be caused by melanocyte activation (hypermelanosis) or proliferation (lentigo, nevus or melanoma). Histopathologic examination is mandatory for suspicious cases of melanoma. Tangential biopsy of the matrix is an elegant technique that avoids nail plate dystrophy, but it was unknown whether the depth of the sample obtained by this method is adequate for histopathologic diagnosis. Twenty-two patients with longitudinal melanonychia striata were submitted to the tangential matrix biopsy described by Haneke. The tissue was stained with hematoxylin-eosin and the specimens were measured at 3 distinct points according to the total thickness: largest (A), intermediate (B) and narrowest (C), then divided into 4 groups according to the histopathologic diagnosis (G1: hypermelanosis; G2: lentigos; G3: nevus; G4: melanoma). The lesions were measured using the same method. The mean specimen/lesion thickness values for each group were: G1: 0.59/0.10 mm; G2: 0.67/0.08 mm; G3: 0.52/0.05 mm; G4: 0.58/0.10 mm. The overall average thickness for all specimens/lesions was 0.59/0.08 mm. We conclude that tangential excision for longitudinal melanonychia provides adequate material for histopathologic diagnosis.

  18. A simple and self-consistent geostrophic-force-balance model of the thermohaline circulation with boundary mixing

    Directory of Open Access Journals (Sweden)

    J. Callies

    2012-01-01

    Full Text Available A simple model of the thermohaline circulation (THC) is formulated, with the objective of representing explicitly the geostrophic force balance of the basinwide THC. The model comprises advective-diffusive density balances in two meridional-vertical planes located at the eastern and the western walls of a hemispheric sector basin. Boundary mixing constrains vertical motion to lateral boundary layers along these walls. Interior, along-boundary, and zonally integrated meridional flows are in thermal-wind balance. Rossby waves and the absence of interior mixing render isopycnals zonally flat except near the western boundary, constraining meridional flow to the western boundary layer. The model is forced by a prescribed meridional surface density profile.

    This two-plane model reproduces both steady-state density and steady-state THC structures of a primitive-equation model. The solution shows narrow deep sinking at the eastern high latitudes, distributed upwelling at both boundaries, and a western boundary current with poleward surface and equatorward deep flow. The overturning strength has a 2/3-power-law dependence on vertical diffusivity and a 1/3-power-law dependence on the imposed meridional surface density difference. Convective mixing plays an essential role in the two-plane model, ensuring that deep sinking is located at high latitudes. This role of convective mixing is consistent with that in three-dimensional models and marks a sharp contrast with previous two-dimensional models.

    Overall, the two-plane model reproduces crucial features of the THC as simulated in simple-geometry three-dimensional models. At the same time, the model self-consistently makes quantitative a conceptual picture of the three-dimensional THC that hitherto has been expressed either purely qualitatively or not self-consistently.
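
    Power-law dependencies like the quoted 2/3 and 1/3 exponents are typically diagnosed from a suite of model runs by log-log regression. A small sketch of that diagnostic on synthetic "model output" generated with the stated exponents (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic overturning strengths over a grid of diffusivities and density differences
kappa = np.logspace(-5, -4, 6)     # vertical diffusivity (m^2/s)
drho = np.logspace(-1, 0.5, 6)     # meridional surface density difference (kg/m^3)
K, D = np.meshgrid(kappa, drho)
psi = 3e3 * K ** (2 / 3) * D ** (1 / 3) * np.exp(0.02 * rng.normal(size=K.shape))

# Recover the exponents: log(psi) = c + a*log(kappa) + b*log(drho)
X = np.column_stack([np.ones(K.size), np.log(K).ravel(), np.log(D).ravel()])
coef, *_ = np.linalg.lstsq(X, np.log(psi).ravel(), rcond=None)
print(coef[1], coef[2])   # ~0.667 and ~0.333
```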

  19. Late Pleistocene sequence architecture on the geostrophic current-dominated southwest margin of the Ulleung Basin, East Sea

    Science.gov (United States)

    Choi, Dong-Lim; Shin, Dong-Hyeok; Kum, Byung-Cheol; Jang, Seok; Cho, Jin-Hyung; Jou, Hyeong-Tae; Jang, Nam-Do

    2017-11-01

    High-resolution multichannel seismic data were collected to identify depositional sequences on the southwestern shelf of the Ulleung Basin, where a unidirectional ocean current is dominant at water depths exceeding 130 m. Four aggradational stratigraphic sequences with a 100,000-year cycle were recognized since marine isotope stage (MIS) 10. These sequences consist only of lowstand systems tracts (LSTs) and falling-stage systems tracts (FSSTs). Prograding wedge-shaped deposits are present in the LSTs near the shelf break. Oblique progradational clinoforms of forced regressive deposits are present in the FSSTs on the outer continental shelf. Each FSST has non-uniform forced regressional stratal geometries, reflecting that the origins of sediments in each depositional sequence changed when sea level was falling. Slump deposits are characteristically developed in the upper layer of the FSSTs, and this was used as evidence to distinguish the sequence boundaries. The subsidence rates around the shelf break reached as much as 0.6 mm/year since MIS 10, which contributed to the well-preserved depositional sequence. During the Quaternary sea-level change, the water depth in the Korea Strait declined and the intensity of the Tsushima Current flowing near the bottom of the inner continental shelf increased. This resulted in greater erosion of sediments that were delivered to the outer continental shelf, which was the main cause of sediment deposition on the deep, low-angled outer shelf. Therefore, a depositional sequence formation model that consists of only FSSTs and LSTs, excluding highstand systems tracts (HSTs) and transgressive systems tracts (TSTs), best explains the depositional sequence beneath this shelf margin dominated by a geostrophic current.

  20. Rescue of an extending capsulorrhexis by creating a midway tangential anterior capsular flap: a novel technique in 22 eyes.

    Science.gov (United States)

    Mohammadpour, Mehrdad

    2010-06-01

    To show how an extending capsulorrhexis can be rescued by a midway tangential capsular flap in order to achieve an uneventful phacoemulsification. Consecutive case series. Twenty-two eyes of 22 patients with extending capsulorrhexis treated at the Farabi Eye Hospital, Tehran. First, a tangential capsular opening was created on the border of the presumed continuous curvilinear capsulorrhexis just midway between the beginning of the capsulorrhexis and the edge of the extending capsulorrhexis, to make a tangential flap of the anterior capsule. Second, the centre of this new flap was grasped and pulled centripetally until the edges of the new flap joined the edges of the extending flap to complete the capsulorrhexis. The technique was successfully performed in all cases, leading to an uneventful phacoemulsification. Midway tangential capsular flap is a safe and effective technique to rescue an extending capsulorrhexis and leads to an uneventful phacoemulsification.

  1. Managerial and Organizational Assumptions in the CMM's

    DEFF Research Database (Denmark)

    Rose, Jeremy; Aaen, Ivan; Nielsen, Peter Axel

    2008-01-01

    thinking about large production and manufacturing organisations (particularly in America) in the late industrial age. Many of the difficulties reported with CMMI can be attributed to basing practice on these assumptions in organisations which have different cultures and management traditions, perhaps...... in different countries operating different economic and social models. Characterizing CMMI in this way opens the door to another question: are there other sets of organisational and management assumptions which would be better suited to other types of organisations operating in other cultural contexts?...

  2. Life Support Baseline Values and Assumptions Document

    Science.gov (United States)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.

    2018-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. This document identifies many specific physical quantities that define life support systems, serving as a general reference for spacecraft life support system technology developers.

  3. Extracurricular Business Planning Competitions: Challenging the Assumptions

    Science.gov (United States)

    Watson, Kayleigh; McGowan, Pauric; Smith, Paul

    2014-01-01

    Business planning competitions [BPCs] are a commonly offered yet under-examined extracurricular activity. Given the extent of sceptical comment about business planning, this paper offers what the authors believe is a much-needed critical discussion of the assumptions that underpin the provision of such competitions. In doing so it is suggested…

  4. Culturally Biased Assumptions in Counseling Psychology

    Science.gov (United States)

    Pedersen, Paul B.

    2003-01-01

    Eight clusters of culturally biased assumptions are identified for further discussion from Leong and Ponterotto's (2003) article. The presence of cultural bias demonstrates that cultural bias is so robust and pervasive that it permeates the profession of counseling psychology, even including those articles that effectively attack cultural bias…

  5. Mexican-American Cultural Assumptions and Implications.

    Science.gov (United States)

    Carranza, E. Lou

    The search for presuppositions of a people's thought is not new. Octavio Paz and Samuel Ramos have both attempted to describe the assumptions underlying the Mexican character. Paz described Mexicans as private, defensive, and stoic, characteristics taken to the extreme in the "pachuco." Ramos, on the other hand, described Mexicans as…

  6. Assumptions of Multiple Regression: Correcting Two Misconceptions

    Science.gov (United States)

    Williams, Matt N.; Gomez Grajales, Carlos Alberto; Kurkiewicz, Dason

    2013-01-01

    In 2002, an article entitled "Four assumptions of multiple regression that researchers should always test" by Osborne and Waters was published in "PARE." This article has gone on to be viewed more than 275,000 times (as of August 2013), and it is one of the first results displayed in a Google search for "regression…

  7. Categorical Judgment Scaling with Ordinal Assumptions.

    Science.gov (United States)

    Hofacker, C F

    1984-01-01

    One of the most common activities of psychologists and other researchers is to construct Likert scales and then proceed to analyze them as if the numbers constituted an equal interval scale. There are several alternatives to this procedure (Thurstone & Chave, 1929; Muthen, 1983) that make normality assumptions but which do not assume that the answer categories as used by subjects constitute an equal interval scale. In this paper a new alternative is proposed that uses additive conjoint measurement. It is assumed that subjects can report their attitudes towards stimuli in the appropriate rank order. Neither within-subject nor between-subject distributional assumptions are made. Nevertheless, interval level stimulus values, as well as response category boundaries, are extracted by the procedure. This approach is applied to three sets of attitude data. In these three cases, the equal interval assumption is clearly wrong. Despite this, arithmetic means seem to closely reflect group attitudes towards the stimuli. In one data set, the normality assumption of Thurstone and Chave (1929) and Muthen (1983) is supported, and in the two others it is supported with reservations.

  8. Critically Challenging Some Assumptions in HRD

    Science.gov (United States)

    O'Donnell, David; McGuire, David; Cross, Christine

    2006-01-01

    This paper sets out to critically challenge five interrelated assumptions prominent in the (human resource development) HRD literature. These relate to: the exploitation of labour in enhancing shareholder value; the view that employees are co-contributors to and co-recipients of HRD benefits; the distinction between HRD and human resource…

  9. Causal Mediation Analysis: Warning! Assumptions Ahead

    Science.gov (United States)

    Keele, Luke

    2015-01-01

    In policy evaluations, interest may focus on why a particular treatment works. One tool for understanding why treatments work is causal mediation analysis. In this essay, I focus on the assumptions needed to estimate mediation effects. I show that there is no "gold standard" method for the identification of causal mediation effects. In…
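
    In the simplest linear case, the mediation effect in question is estimated with the product-of-coefficients approach, which is identified only under strong assumptions such as sequential ignorability. A hedged numpy sketch on simulated data (all coefficients invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

t = rng.binomial(1, 0.5, n).astype(float)     # treatment
m = 0.8 * t + rng.normal(size=n)              # mediator model: a = 0.8
y = 0.5 * t + 1.2 * m + rng.normal(size=n)    # outcome model: b = 1.2, direct = 0.5

def ols(X, y):
    """OLS coefficients with an intercept prepended."""
    return np.linalg.lstsq(np.column_stack([np.ones(len(y)), *X]), y, rcond=None)[0]

a = ols([t], m)[1]       # treatment -> mediator
b = ols([t, m], y)[2]    # mediator -> outcome, adjusting for treatment
print(a * b)             # indirect (mediated) effect, ~0.96
```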

  10. Shattering world assumptions: A prospective view of the impact of adverse events on world assumptions.

    Science.gov (United States)

    Schuler, Eric R; Boals, Adriel

    2016-05-01

    Shattered Assumptions theory (Janoff-Bulman, 1992) posits that experiencing a traumatic event has the potential to diminish the degree of optimism in the assumptions of the world (assumptive world), which could lead to the development of posttraumatic stress disorder. Prior research assessed the assumptive world with a measure that was recently reported to have poor psychometric properties (Kaler et al., 2008). The current study had 3 aims: (a) to assess the psychometric properties of a recently developed measure of the assumptive world, (b) to retrospectively examine how prior adverse events affected the optimism of the assumptive world, and (c) to measure the impact of an intervening adverse event. An 8-week prospective design with a college sample (N = 882 at Time 1 and N = 511 at Time 2) was used to assess the study objectives. We split adverse events into those that were objectively or subjectively traumatic in nature. The new measure exhibited adequate psychometric properties. The report of a prior objective or subjective trauma at Time 1 was related to a less optimistic assumptive world. Furthermore, participants who experienced an intervening objectively traumatic event evidenced a decrease in optimistic views of the world compared with those who did not experience an intervening adverse event. We found support for Shattered Assumptions theory retrospectively and prospectively using a reliable measure of the assumptive world. We discuss future assessments of the measure of the assumptive world and clinical implications to help rebuild the assumptive world with current therapies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Individualized Selection of Beam Angles and Treatment Isocenter in Tangential Breast Intensity Modulated Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Penninkhof, Joan, E-mail: j.penninkhof@erasmusmc.nl [Department of Radiation Oncology, Erasmus M.C. Cancer Institute, Rotterdam (Netherlands); Spadola, Sara [Department of Radiation Oncology, Erasmus M.C. Cancer Institute, Rotterdam (Netherlands); Department of Physics and Astronomy, Alma Mater Studiorum, University of Bologna, Bologna (Italy); Breedveld, Sebastiaan; Baaijens, Margreet [Department of Radiation Oncology, Erasmus M.C. Cancer Institute, Rotterdam (Netherlands); Lanconelli, Nico [Department of Physics and Astronomy, Alma Mater Studiorum, University of Bologna, Bologna (Italy); Heijmen, Ben [Department of Radiation Oncology, Erasmus M.C. Cancer Institute, Rotterdam (Netherlands)

    2017-06-01

    Purpose and Objective: To propose a novel method for individualized selection of beam angles and treatment isocenter in tangential breast intensity modulated radiation therapy (IMRT). Methods and Materials: For each patient, beam and isocenter selection starts with the fully automatic generation of a large database of IMRT plans (up to 847 in this study); each of these plans belongs to a unique combination of isocenter position, lateral beam angle, and medial beam angle. The imposed hard planning constraint on patient maximum dose may result in plans with unacceptable target dose delivery. Such plans are excluded from further analyses. Owing to differences in beam setup, database plans differ in mean doses to organs at risk (OARs). These mean doses are used to construct 2-dimensional graphs, showing relationships between: (1) contralateral breast dose and ipsilateral lung dose; and (2) contralateral breast dose and heart dose (analyzed only for left-sided tumors). The graphs can be used for selection of the isocenter and beam angles with the optimal, patient-specific tradeoffs between the mean OAR doses. For 30 previously treated patients (15 left-sided and 15 right-sided tumors), graphs were generated considering only the clinically applied isocenter with 121 tangential beam angle pairs. For 20 of the 30 patients, 6 alternative isocenters were also investigated. Results: Automatic generation of 121 IMRT plans took on average 30 minutes. The generated graphs demonstrated large variations in tradeoffs between conflicting OAR objectives, depending on beam angles and patient anatomy. For patients with isocenter optimization, 847 IMRT plans were considered. Adding isocenter position optimization next to beam angle optimization had a small impact on the final plan quality. Conclusion: A method is proposed for individualized selection of beam angles in tangential breast IMRT. This may be especially important for patients with cardiac risk factors or an
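
    Choosing a beam-angle pair from such 2-dimensional graphs amounts to restricting attention to the Pareto front of the two mean OAR doses. A sketch of that selection step on a synthetic plan database (the 121 angle pairs echo the record; the dose values are random placeholders, not clinical data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic database: one row per (medial, lateral) beam-angle pair -> 11 x 11 = 121 plans
angles = [(m, l) for m in range(290, 345, 5) for l in range(110, 165, 5)]
doses = rng.uniform(0.5, 5.0, size=(len(angles), 2))   # [contralateral breast, lung] mean dose (Gy)

def pareto_front(points):
    """Indices of non-dominated points when both objectives are minimized."""
    idx = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            idx.append(i)
    return idx

for i in pareto_front(doses):
    print(angles[i], np.round(doses[i], 2))   # candidate beam-angle pairs and their tradeoffs
```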

  12. Design of a SQUID Sensor Array Measuring the Tangential Field Components in Magnetocardiogram

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.; Lee, Y. h.; Kwon, H.; Kim, J. M.; Kim, I. S.; Park, Y. K.; Lww, K. W. [Biomagnetism Research Center, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2004-10-15

    We consider design factors for a SQUID sensor array to construct a 52-channel magnetocardiogram (MCG) system that can be used to measure tangential components of the cardiac magnetic fields. Nowadays, full-size multichannel MCG systems, which cover the whole signal area of a heart, are developed to improve the clinical analysis with high accuracy and to provide patients with comfort in the course of measurement. To design the full-size MCG system, we have to make a compromise between cost and performance. The cost is involved with the number of sensors, the number of the electronics, the size of a cooling dewar, the consumption of refrigerants for maintenance, and etc. The performance is the capability of covering the whole heart volume at once and of localizing current sources with a small error. In this study, we design the cost-effective arrangement of sensors for MCG by considering an adequate sensor interval and the confidence region of a tolerable localization error, which covers the heart. In order to fit the detector array on the cylindrical dewar economically, we removed the detectors that were located at the corners of the array square. Through simulations using the confidence region method, we verified that our design of the detector array was good enough to obtain whole information from the heart at a time. A result of the simulation also suggested that tangential-component MCG measurement could localize deeper current dipoles than normal-component MCG measurement with the same confidence volume; therefore, we conclude that measurement of the tangential component is more suitable to an MCG system than measurement of the normal component.

  13. Measurement of seismometer orientation using the tangential P-wave receiver function based on harmonic decomposition

    Science.gov (United States)

    Lim, Hobin; Kim, YoungHee; Song, Teh-Ru Alex; Shen, Xuzhang

    2018-03-01

    Accurate determination of the seismometer orientation is a prerequisite for seismic studies including, but not limited to, seismic anisotropy. While borehole seismometers on land produce seismic waveform data somewhat free of human-induced noise, they may have the drawback of an uncertain orientation. This study calculates a harmonic decomposition of teleseismic receiver functions from the P and PP phases and determines the orientation of a seismometer by minimizing the constant term in a harmonic expansion of the tangential receiver functions in backazimuth near and at 0 s. This method normalizes the effect of seismic sources and determines the orientation of a seismometer without having to assume an isotropic medium. Compared to the method of minimizing the amplitudes of a mean of the tangential receiver functions near and at 0 s, the method yields more accurate orientations in cases where the backazimuthal coverage of earthquake sources (even in the case of ocean bottom seismometers) is uneven and incomplete. We apply this method to data from the Korean seismic network (52 broad-band velocity seismometers, 30 of which are borehole sensors) to estimate the sensor orientation in the period 2005-2016. We also track temporal changes in the sensor orientation through changes in the polarity and amplitude of the tangential receiver function. Six borehole stations are confirmed to have experienced a significant orientation change (10°-180°) over the 10-yr period. We demonstrate the usefulness of our method by estimating the orientation of ocean bottom sensors, which are known to have high noise levels during their relatively short deployment periods.
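
    The method works because a misoriented sensor leaks backazimuth-independent radial P energy into the tangential trace, which shows up as the constant term of the harmonic expansion. A synthetic sketch of the resulting grid search (all signal parameters invented for illustration; this is not the authors' processing chain):

```python
import numpy as np

rng = np.random.default_rng(5)
baz = rng.uniform(0, 2 * np.pi, 80)     # event backazimuths (rad)

# True 0-s receiver-function amplitudes: radial ~1, tangential from 2-theta anisotropy
r_true = 1.0 + 0.15 * np.cos(2 * baz)
t_true = 0.15 * np.sin(2 * baz) + 0.02 * rng.normal(size=baz.size)

alpha_true = np.deg2rad(12.0)           # unknown sensor misorientation
r_meas = r_true * np.cos(alpha_true) + t_true * np.sin(alpha_true)
t_meas = -r_true * np.sin(alpha_true) + t_true * np.cos(alpha_true)

def const_term(t_corr, baz):
    """Constant coefficient of a degree-2 harmonic fit in backazimuth."""
    X = np.column_stack([np.ones_like(baz), np.cos(baz), np.sin(baz),
                         np.cos(2 * baz), np.sin(2 * baz)])
    return np.linalg.lstsq(X, t_corr, rcond=None)[0][0]

# Grid search: rotate back by each candidate angle, keep the one that kills the constant term
candidates = np.deg2rad(np.arange(-30.0, 30.01, 0.1))
costs = [abs(const_term(r_meas * np.sin(a) + t_meas * np.cos(a), baz))
         for a in candidates]
print(np.rad2deg(candidates[np.argmin(costs)]))   # ~12 degrees
```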

  14. Computational Modelling of a Tangentially Fired Boiler With Deposit Formation Phenomena

    Directory of Open Access Journals (Sweden)

    Modliński Norbert J.

    2014-09-01

    Full Text Available Any complete CFD model of a pulverised coal-fired boiler needs to consider ash deposition phenomena. Wall boundary conditions (temperature and emissivity) should be temporally corrected to account for the effects of deposit growth on the combustion conditions. Voluminous publications concerning ash-related problems are currently available. The current paper presents the development of an engineering tool integrating deposit formation models with the CFD code. It was then applied to two tangentially-fired boilers. The developed numerical tool was validated by comparing the boiler evaporator power variation based on the on-line diagnostic system with the results from the full CFD simulation.

  15. Large Object Irradiation Facility In The Tangential Channel Of The JSI TRIGA Reactor

    CERN Document Server

    Radulovic, Vladimir; Kaiba, Tanja; Kavsek, Darko; Cindro, Vladimir; Mikuz, Marko; Snoj, Luka

    2017-01-01

    This paper presents the design and installation of a new irradiation device in the Tangential Channel of the JSI TRIGA reactor in Ljubljana, Slovenia. The purpose of the device is to enable on-line irradiation testing of electronic components considerably larger in size (of lateral dimensions of at least 12 cm) than currently possible in the irradiation channels located in the reactor core, in a relatively high neutron flux (exceeding 10^12 n cm^-2 s^-1) and to provide adequate neutron and gamma radiation shielding.

  16. Slideline verification for multilayer pressure vessel and piping analysis including tangential motion

    International Nuclear Information System (INIS)

    Van Gulick, L.A.

    1984-01-01

    Nonlinear finite element method (FEM) computer codes with slideline algorithm implementations should be useful for the analysis of prestressed multilayer pressure vessels and piping. This paper presents closed-form solutions, including the effects of tangential motion, useful for verifying slideline implementations for this purpose. The solutions describe stresses and displacements of a long internally pressurized elastic-plastic cylinder initially separated from an elastic outer cylinder by a uniform gap. Comparison of closed-form and FEM results evaluates the usefulness of the closed-form solution and the validity of the slideline implementation used.
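
    The first ingredient of such a closed-form verification is the elastic Lamé solution for the pressurized inner cylinder: in plane stress its outer surface displaces outward by u(b) = 2 p a² b / (E (b² − a²)), and sliding contact begins once u(b) equals the initial gap. A hedged sketch of that elastic estimate (geometry and material values invented; the paper's full solution also covers the elastic-plastic regime and tangential motion):

```python
# Elastic (Lame) estimate of the internal pressure at which the initial gap closes
a, b = 50.0, 60.0        # inner/outer radii of the inner cylinder (mm)
E = 200e3                # Young's modulus (MPa)
gap = 0.05               # initial radial gap to the outer cylinder (mm)

# Radial displacement of the outer surface under internal pressure p (plane stress):
#   u(b) = 2 * p * a**2 * b / (E * (b**2 - a**2))
p_contact = gap * E * (b**2 - a**2) / (2 * a**2 * b)
print(round(p_contact, 1), "MPa")   # pressure at first contact, ~36.7 MPa here
```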

  17. Tangential flow of FENE-P viscoelastic fluid within concentric rotating cylinders

    International Nuclear Information System (INIS)

    Shirazi, L.; Mirzazadeh, M.; Rashidi, F.

    2006-01-01

    An analytical solution is presented for the steady-state, purely tangential flow of a nonlinear viscoelastic fluid obeying the constitutive FENE-P model in a concentric annulus with inner cylinder rotation. The effects of fluid elasticity (Weissenberg number), the extensibility parameter of the model (L^2), and the aspect ratio on the velocity profile and on the product of friction factor and Reynolds number (f·Re) are investigated. The results show the strong effect of the viscoelastic parameters on the velocity profile. The results also show that f·Re decreases with increasing fluid elasticity and radius ratio.

  18. Use of a tangential filtration unit for processing liquid waste from nuclear laundries

    International Nuclear Information System (INIS)

    Augustin, X.; Buzonniere, A. de; Barnier, H.

    1993-01-01

    Nuclear laundries produce large quantities of weakly contaminated effluents charged with insoluble and soluble products. In collaboration with the CEA, TECHNICATOME has developed an ultrafiltration process for liquid waste from nuclear laundries, associated with prior insolubilization of the radiochemical activity. This process, 'seeded ultrafiltration', is based on the use of decloggable mineral filter media and combines very high separation efficiency with long membrane life. The efficiency of the tangential filtration unit, which has been processing effluents from the nuclear laundry of the Cadarache Nuclear Research Center (CEA, France) since mid-1988, has been confirmed on several sites.

  19. The 'revealed preferences' theory: Assumptions and conjectures

    International Nuclear Information System (INIS)

    Green, C.H.

    1983-01-01

    Being a kind of intuitive psychology, the 'revealed preferences' theory-based approaches to determining acceptable risks are a useful method for the generation of hypotheses. In view of the fact that reliability engineering develops faster than methods for the determination of reliability aims, the revealed-preferences approach is a necessary preliminary aid. Some of the assumptions on which the 'revealed preferences' theory is based will be identified and analysed and afterwards compared with experimentally obtained results. (orig./DG) [de]

  20. How to Handle Assumptions in Synthesis

    Directory of Open Access Journals (Sweden)

    Roderick Bloem

    2014-07-01

    Full Text Available The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.

  1. Towards New Probabilistic Assumptions in Business Intelligence

    OpenAIRE

    Schumann Andrew; Szelc Andrzej

    2015-01-01

    One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, a lot of important variables of economic systems cannot ...

  2. Estimation of Power Consumption in the Circular Sawing of Stone Based on Tangential Force Distribution

    Science.gov (United States)

    Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng

    2018-04-01

    Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model of sawing power (PFD), based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on the tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
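
    In models of this family, sawing power follows from integrating the tangential force per unit length over the contact zone and multiplying by the peripheral speed, P = v_s ∫ q_t(l) dl. A schematic numpy version (the parabolic distribution below is a placeholder, not the paper's fitted form; all values illustrative):

```python
import numpy as np

v_s = 40.0                                 # peripheral sawing speed (m/s)
arc = 0.12                                 # contact-zone length (m)

l = np.linspace(0.0, arc, 200)
q_t = 800.0 * (l / arc) * (1 - l / arc)    # tangential force per unit length (N/m), placeholder

# Trapezoidal integration of the distribution -> total tangential force
F_t = np.sum(0.5 * (q_t[1:] + q_t[:-1]) * np.diff(l))
P = F_t * v_s                              # sawing power (W)
print(F_t, P)                              # ~16 N, ~640 W
```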

  3. Quasistatic Seismic Damage Indicators for RC Structures from Dissipating Energies in Tangential Subspaces

    Directory of Open Access Journals (Sweden)

    Wilfried B. Krätzig

    2014-01-01

    Full Text Available This paper applies recent research on structural damage description to earthquake-resistant design concepts. Based on the primary design aim of life safety, this work adopts the necessity of additional protection aims for property, installation, and equipment. This requires the definition of damage indicators, which are able to quantify the arising structural damage. As in present design, it applies nonlinear quasistatic (pushover) concepts due to code provisions as simplified dynamic design tools. Substituting nonlinear time-history analyses in this way, seismic low-cycle fatigue of RC structures is approximated in a similar manner. The treatment is embedded into a finite element environment, and the tangential stiffness matrix K_T in tangential subspaces is then identified as the most general entry point for structural damage information. Its spectrum of eigenvalues λ_i, or natural frequencies ω_i of the structure, serves to derive damage indicators D_i, applicable to quasistatic evaluation of seismic damage. Because det K_T = 0 denotes structural failure, such damage indicators range from the virgin situation D_i = 0 to failure D_i = 1, and thus correspond with FEMA proposals on performance-based seismic design. Finally, the developed concept is checked by reanalyses of two experimentally investigated RC frames.
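
    A minimal numerical version of such an indicator compares the eigenvalue spectrum of the current tangent stiffness K_T with that of the virgin stiffness K_0, for instance D_i = 1 − λ_i/λ_i⁰, so that D_i = 0 for the undamaged structure and D_i → 1 as det K_T → 0. A toy sketch (2-DOF stiffness matrices with invented values, not the paper's formulation):

```python
import numpy as np

K0 = np.array([[12.0, -4.0],
               [-4.0,  8.0]])     # undamaged (virgin) tangent stiffness
KT = np.array([[ 7.0, -3.0],
               [-3.0,  4.5]])     # degraded tangent stiffness after loading

lam0 = np.linalg.eigvalsh(K0)     # eigenvalue spectra (ascending)
lamT = np.linalg.eigvalsh(KT)

D = 1.0 - lamT / lam0             # modal damage indicators in [0, 1]
print(np.round(D, 3))
```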

  4. Tangential Bicortical Locked Fixation Improves Stability in Vancouver B1 Periprosthetic Femur Fractures: A Biomechanical Study.

    Science.gov (United States)

    Lewis, Gregory S; Caroom, Cyrus T; Wee, Hwabok; Jurgensmeier, Darin; Rothermel, Shane D; Bramer, Michelle A; Reid, John Spence

    2015-10-01

    The biomechanical difficulty in fixation of a Vancouver B1 periprosthetic fracture is purchase of the proximal femoral segment in the presence of the hip stem. Several newer technologies provide the ability to place bicortical locking screws tangential to the hip stem, with much longer lengths of screw purchase compared with unicortical screws. This biomechanical study compares the stability of 2 of these newer constructs to previous methods. Thirty composite synthetic femurs were prepared with cemented hip stems. The distal femur segment was osteotomized, and plates were fixed proximally with either (1) cerclage cables, (2) locked unicortical screws, (3) a composite of locked screws and cables, or tangentially directed bicortical locking screws using either (4) a stainless steel locking compression plate system with a Locking Attachment Plate (Synthes) or (5) a titanium alloy Non-Contact Bridging system (Zimmer). Specimens were tested to failure in either axial or torsional quasistatic loading modes (n = 3) after 20 moderate-load preconditioning cycles. Stiffness, maximum force, and failure mechanism were determined. Bicortical constructs resisted higher (by an average of at least 27%) maximum forces than the other 3 constructs in torsional loading (P [...] steel construct in axial loading. Proximal fixation stability is likely improved with the use of bicortical locking screws as compared with traditional unicortical screws and cable techniques. In this study with a limited sample size, we found the addition of cerclage cables to unicortical screws may not offer much improvement in biomechanical stability of unstable B1 fractures.

  5. A computer program for performance prediction of tripropellant rocket engines with tangential slot injection

    Science.gov (United States)

    Dang, Anthony; Nickerson, Gary R.

    1987-01-01

    For the development of a Heavy Lift Launch Vehicle (HLLV) several engines with different operating cycles and using LOX/Hydrocarbon propellants are presently being examined. Some concepts utilize hydrogen for thrust chamber wall cooling followed by a gas generator turbine drive cycle with subsequent dumping of H2/O2 combustion products into the nozzle downstream of the throat. In the Space Transportation Booster Engine (STBE) selection process the specific impulse will be one of the optimization criteria; however, the current performance prediction programs do not have the capability to include a third propellant in this process, nor to account for the effect of dumping the gas-generator product tangentially inside the nozzle. The purpose is to describe a computer program for accurately predicting the performance of such an engine. The code consists of two modules; one for the inviscid performance, and the other for the viscous loss. For the first module, the two-dimensional kinetics program (TDK) was modified to account for tripropellant chemistry, and for the effect of tangential slot injection. For the viscous loss, the Mass Addition Boundary Layer program (MABL) was modified to include the effects of the boundary layer-shear layer interaction, and tripropellant chemistry. Calculations were made for a real engine and compared with available data.

  6. Interaction of the interplanetary shock and tangential discontinuity in the solar wind

    Science.gov (United States)

    Goncharov, Oleksandr; Koval, Andriy; Safrankova, Jana; Nemecek, Zdenek; Prech, Lubomir; Szabo, Adam; Zastenker, Georgy N.

    2017-04-01

    Collisionless shocks play a significant role in the solar wind interaction with the Earth. Fast forward interplanetary (IP) shocks driven by coronal mass ejections or by the interaction of fast and slow solar wind streams can be encountered in interplanetary space, while the bow shock is a standing fast reverse shock formed by the interaction of the supersonic solar wind with Earth's magnetic field. Both types of shocks are responsible for transforming part of the energy of the directed solar wind motion into plasma heating and into the acceleration of reflected particles to high energies. It is well known that the interaction of tangential discontinuities with the bow shock can create hot flow anomalies, but interactions between IP shocks and tangential discontinuities in the solar wind have been studied to a lesser extent due to a lack of observations. The fortunate positioning of many spacecraft (Wind, ACE, DSCOVR, THEMIS, Spektr-R) on June 22, 2015 allowed detailed observations of an IP shock modified by this interaction. We present an analysis of the event, supported by MHD modeling, that reveals the basic features of the observed IP shock ramp splitting. A good match between modeling and observations was found for DSCOVR and Spektr-R, located above the ecliptic plane, whereas the timing of observations below this plane demonstrates the difficulty of modeling highly inclined discontinuities.

  7. Effects of Tangential Edge Constraints on the Postbuckling Behavior of Flat and Curved Panels Subjected to Thermal and Mechanical Loads

    Science.gov (United States)

    Lin, W.; Librescu, L.; Nemeth, M. P.; Starnes, J. H. , Jr.

    1994-01-01

    A parametric study of the effects of tangential edge constraints on the postbuckling response of flat and shallow curved panels subjected to thermal and mechanical loads is presented. The mechanical loads investigated are uniform compressive edge loads and transverse lateral pressure. The temperature fields considered are associated with spatially nonuniform heating over the panels, and a linear through-the-thickness temperature gradient. The structural model is based on a higher-order transverse-shear-deformation theory of shallow shells that incorporates the effects of geometric nonlinearities, initial geometric imperfections, and tangential edge motion constraints. Results are presented for three-layer sandwich panels made from transversely isotropic materials. Simply supported panels are considered in which the tangential motion of the unloaded edges is either unrestrained, partially restrained, or fully restrained. These results focus on the effects of the tangential edge restraint on the postbuckling response. The results of this study indicate that tangentially restraining the edges of a curved panel can make the panel insensitive to initial geometric imperfections in some cases.

  8. Circulation and horizontal fluxes in the northern Adriatic Sea in the period June 1999-July 2002. Part I: geostrophic circulation and current measurement.

    Science.gov (United States)

    Grilli, Federica; Paschini, Elio; Precali, Robert; Russo, Aniello; Supić, Nastjenjka

    2005-12-15

    The dramatic increase in the occurrence of massive mucilage events in the northern Adriatic (NA) since their recent conspicuous reappearance in the late 1980s prompted a study of circulation and horizontal fluxes. Three transects with equidistant stations (10 km) were thus monitored monthly between June 1999 and July 2002. The geostrophic method was used to compute currents across the three transects from the CTD data, and dynamic heights provided a picture of the horizontal surface circulation. Current-meter records were used to adjust the reference surface and to validate the results for the southernmost and deepest (up to 70 m) transect (Senigallia-Susak Island). Geostrophic currents allowed estimation of monthly water fluxes across the transects. Different circulation regimes were observed in the NA, which may have affected mucilage events. When mucilage was absent (1999) or reduced (2001) in the western sector, the Western Adriatic Current (WAC, carrying water out of the NA) was found to be active, whilst the WAC was very weak or reversed when massive mucilage events occurred (2000 and 2002). The opposite behaviour was observed for the Istrian Coastal Counter-Current (ICCC, retaining fresh water in the NA), which was more intense during or after massive mucilage events and did not appear when mucilage was absent. Both WAC weakening and ICCC strengthening indicate a longer residence time of riverine waters in the NA, which favours mucilage development. In conclusion, the WAC and ICCC emerge as key elements in controlling massive mucilage phenomena in the NA.
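
    The geostrophic method referred to here reduces, for each station pair, to a velocity normal to the transect proportional to the horizontal difference in geopotential anomaly. A minimal sketch under assumed values (the 10 km station spacing is the paper's; the latitude and geopotential numbers are hypothetical):

        import numpy as np

        OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

        def geostrophic_velocity(phi_a, phi_b, dx, lat_deg):
            # Velocity normal to a station pair from the difference in
            # geopotential anomaly Phi (m^2/s^2) at two CTD stations a
            # distance dx apart: v = (Phi_b - Phi_a) / (f * dx), relative
            # to the reference level (adjusted with current-meter data
            # in the paper).
            f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
            return (phi_b - phi_a) / (f * dx)

        # Hypothetical values for two stations 10 km apart at 44 N:
        print(geostrophic_velocity(14.02, 14.08, 10e3, 44.0))  # ~0.06 m/s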

  9. Use of the quasi-geostrophic dynamical framework to reconstruct the 3-D ocean state in a high-resolution realistic simulation of North Atlantic.

    Science.gov (United States)

    Fresnay, Simon; Ponte, Aurélien

    2017-04-01

    The quasi-geostrophic (QG) framework has been, and will remain for years to come, a cornerstone method linking observations with estimates of the ocean circulation and state. We have used the QG framework here to reconstruct dynamical variables of the 3-D ocean in a state-of-the-art high-resolution (1/60 deg, 300 vertical levels) numerical simulation of the North Atlantic (NATL60). The work was carried out in 3 boxes of the simulation: Gulf Stream, Azores, and Reykjanes Ridge. In a first part, general diagnostics describing the eddying dynamics were performed; they show that the QG scaling holds in general at depths away from the mixed layer and bathymetric gradients. Correlations with surface observable variables (e.g. temperature, sea level) were computed, and estimates of quasi-geostrophic potential vorticity (QGPV) were reconstructed by means of regression laws. It is shown that the reconstruction of QGPV exhibits valuable skill over a restricted scale range, mainly when sea level is used as the regression variable. Additional discussion is given, based on the flow balanced with QGPV. This work is part of the DIMUP project, which aims to improve our ability to operationally estimate the ocean state.
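
    A minimal sketch of the kind of surface QG diagnostics such a regression starts from, assuming a uniform mid-latitude Coriolis parameter and a synthetic SSH field (neither taken from NATL60): the streamfunction ψ = gη/f and relative vorticity ζ = ∇²ψ.

        import numpy as np

        g, f = 9.81, 1.0e-4  # gravity, Coriolis parameter (assumed mid-latitude)

        def qg_surface_fields(ssh, dx, dy):
            # Surface QG diagnostics from sea level: psi = g*eta/f and
            # zeta = laplacian(psi), via centred finite differences.
            psi = g * ssh / f
            zeta = np.zeros_like(psi)
            zeta[1:-1, 1:-1] = (
                (psi[1:-1, 2:] - 2 * psi[1:-1, 1:-1] + psi[1:-1, :-2]) / dx**2
                + (psi[2:, 1:-1] - 2 * psi[1:-1, 1:-1] + psi[:-2, 1:-1]) / dy**2
            )
            return psi, zeta

        eta = 0.1 * np.random.default_rng(0).standard_normal((64, 64))  # SSH (m)
        psi, zeta = qg_surface_fields(eta, dx=1.5e3, dy=1.5e3)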

  10. New Assumptions to Guide SETI Research

    Science.gov (United States)

    Colombano, S. P.

    2018-01-01

    The recent Kepler discoveries of Earth-like planets offer the opportunity to focus our attention on detecting signs of life and technology in specific planetary systems, but I feel we need to become more flexible in our assumptions. The reason is that, while it is still reasonable and conservative to assume that life is most likely to have originated in conditions similar to ours, the vast time differences in potential evolutions render the likelihood of "matching" technologies very slim. In light of these challenges I propose a more "aggressive" approach to future SETI exploration in directions that until now have received little consideration.

  11. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

    Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom fails to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires the use of multipole electrostatics and polarizability in molecular modeling.
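
    The limiting assumption under review, monopole electrostatics, is captured by a few lines: one fixed point charge per atom centre and pairwise Coulomb sums. The charges and geometry below are hypothetical; the constant is the usual force-field conversion factor.

        import numpy as np

        COULOMB_K = 332.0637  # kcal*A/(mol*e^2), standard force-field constant

        def monopole_energy(coords, charges):
            # Pairwise Coulomb energy under the monopole assumption: one fixed
            # point charge at each atom centre.  coords in angstroms, charges in e.
            energy = 0.0
            for i in range(len(charges)):
                for j in range(i + 1, len(charges)):
                    r = np.linalg.norm(coords[i] - coords[j])
                    energy += COULOMB_K * charges[i] * charges[j] / r
            return energy  # kcal/mol

        # Hypothetical pair of unit dipoles, 3 A apart:
        xyz = np.array([[0.0, 0, 0], [0, 0, 1.0], [3.0, 0, 0], [3.0, 0, 1.0]])
        q = np.array([0.4, -0.4, 0.4, -0.4])
        print(monopole_energy(xyz, q))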

  12. Assumptions for the Annual Energy Outlook 1992

    International Nuclear Information System (INIS)

    1992-01-01

    This report serves as an auxiliary document to the Energy Information Administration (EIA) publication Annual Energy Outlook 1992 (AEO) (DOE/EIA-0383(92)), released in January 1992. The AEO forecasts were developed for five alternative cases and consist of energy supply, consumption, and price projections by major fuel and end-use sector, which are published at a national level of aggregation. The purpose of this report is to present important quantitative assumptions, including world oil prices and macroeconomic growth, underlying the AEO forecasts. The report has been prepared in response to external requests, as well as analyst requirements for background information on the AEO and studies based on the AEO forecasts.

  13. Tangential discontinuities in the solar wind: Correlated field and velocity changes in the Kelvin-Helmholtz Instability

    International Nuclear Information System (INIS)

    Neugebauer, M.; Alexander, C.J.; Schwenn, R.; Richter, A.K.

    1986-01-01

    Three-dimensional Helios plasma and field data are used to investigate the relative changes in direction of the velocity and magnetic field vectors across tangential discontinuities (TDs) in the solar wind at solar distances of 0.29-0.50 AU. It is found for tangential discontinuities with both Δv and ΔB/B large that Δv and ΔB are closely aligned with each other, in agreement with the unexpected results of previous studies of tangential discontinuities observed at 1 AU and beyond. It is shown that this effect probably results from the destruction by the Kelvin-Helmholtz instability of TDs for which Δv and ΔB are not aligned. The observed decrease in the number of interplanetary discontinuities with increasing solar distance may be associated with the growth of the Kelvin-Helmholtz instability with decreasing Alfvén speed.
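
    The destruction mechanism invoked here can be made concrete with the incompressible MHD Kelvin-Helmholtz criterion for a tangential discontinuity (Chandrasekhar's condition); field components along the wavevector stabilise, so TDs whose ΔB is not aligned with Δv are the most vulnerable. The numbers in the example are hypothetical inner-heliosphere values, not Helios data.

        import numpy as np

        MU0 = 4e-7 * np.pi

        def kh_unstable(k, v1, v2, B1, B2, rho1, rho2):
            # Unstable for wavevector k if
            # (k.(v1-v2))^2 > (rho1+rho2)/(mu0*rho1*rho2) * ((k.B1)^2 + (k.B2)^2)
            k = np.asarray(k, float)
            lhs = np.dot(k, np.asarray(v1, float) - np.asarray(v2, float)) ** 2
            rhs = (rho1 + rho2) / (MU0 * rho1 * rho2) * (
                np.dot(k, B1) ** 2 + np.dot(k, B2) ** 2)
            return lhs > rhs

        # Hypothetical SI values: 40 km/s shear, fields perpendicular to k.
        rho = 1.67e-27 * 30e6          # 30 protons/cm^3
        print(kh_unstable([1, 0, 0], [0, 0, 0], [-40e3, 0, 0],
                          [0, 10e-9, 0], [0, 8e-9, 0], rho, rho))  # True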

  14. Explorations in statistics: the assumption of normality.

    Science.gov (United States)

    Curran-Everett, Douglas

    2017-09-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This twelfth installment of Explorations in Statistics explores the assumption of normality, an assumption essential to the meaningful interpretation of a t test. Although the data themselves can be consistent with a normal distribution, they need not be. Instead, it is the theoretical distribution of the sample mean or the theoretical distribution of the difference between sample means that must be roughly normal. The most versatile approach to assess normality is to bootstrap the sample mean, the difference between sample means, or t itself. We can then assess whether the distributions of these bootstrap statistics are consistent with a normal distribution by studying their normal quantile plots. If we suspect that an inference we make from a t test may not be justified, because the theoretical distribution of the sample mean or of the difference between sample means is not normal, then we can use a permutation method to analyze our data. Copyright © 2017 the American Physiological Society.
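
    A minimal sketch of the recommended check, with synthetic skewed data standing in for real measurements: bootstrap the sample mean and inspect the normal quantile plot of the bootstrap distribution.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        sample = rng.exponential(scale=2.0, size=30)   # skewed raw data

        # Bootstrap the sample mean: resample with replacement many times.
        boot_means = np.array([
            rng.choice(sample, size=sample.size, replace=True).mean()
            for _ in range(10_000)
        ])

        # Normal quantile plot of the bootstrap distribution: if the points
        # follow a straight line, the theoretical distribution of the sample
        # mean is roughly normal and the t test interpretation is justified.
        (osm, osr), (slope, intercept, r) = stats.probplot(boot_means, dist="norm")
        print(f"quantile-plot correlation r = {r:.4f}")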

  15. Forecasting Renewable Energy Consumption under Zero Assumptions

    Directory of Open Access Journals (Sweden)

    Jie Ma

    2018-02-01

    Full Text Available Renewable energy, as an environmentally friendly and sustainable source of energy, is key to realizing the nationally determined contributions of the United States (US) to the December 2015 Paris agreement. Policymakers in the US rely on energy forecasts to draft and implement cost-minimizing, efficient, and realistic renewable and sustainable energy policies, but the inaccuracies in past projections are considerably high. The inaccuracies and inconsistencies in forecasts are due to the numerous factors considered, massive assumptions, and modeling flaws in the underlying models. Here, we propose and apply a machine learning forecasting algorithm devoid of massive independent variables and assumptions to model and forecast renewable energy consumption (REC) in the US. We employ the forecasting technique to make projections on REC from biomass (REC-BMs) and hydroelectric (HE-EC) sources for the 2009-2016 period. We find that, relative to reference-case projections in the Energy Information Administration's Annual Energy Outlook 2008, projections based on our proposed technique present an enormous improvement, up to ~138.26-fold on REC-BMs and ~24.67-fold on HE-EC, and that applying our technique saves the US ~2692.62 petajoules (PJ) on HE-EC and ~9695.09 PJ on REC-BMs over the 8-year forecast period. The achieved high accuracy is also replicable in other regions.
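
    The abstract does not specify the algorithm beyond its freedom from exogenous variables, so the sketch below uses Holt's linear-trend exponential smoothing purely as a stand-in univariate forecaster of the same flavour; the history values are hypothetical.

        import numpy as np

        def holt_forecast(y, alpha=0.5, beta=0.3, horizon=8):
            # Holt's linear-trend smoothing: a minimal forecaster that uses
            # only the consumption history itself -- a stand-in for the
            # (unspecified) assumption-free algorithm in the paper.
            level, trend = y[0], y[1] - y[0]
            for value in y[1:]:
                prev_level = level
                level = alpha * value + (1 - alpha) * (level + trend)
                trend = beta * (level - prev_level) + (1 - beta) * trend
            return np.array([level + h * trend for h in range(1, horizon + 1)])

        # Hypothetical annual REC history (PJ), 2001-2008:
        history = np.array([2900, 2950, 3020, 3100, 3150, 3230, 3310, 3400])
        print(holt_forecast(history))  # 2009-2016 projections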

  16. Kinetic equilibrium for an asymmetric tangential layer with rotation of the magnetic field

    Science.gov (United States)

    Belmont, Gérard; Dorville, Nicolas; Aunai, Nicolas; Rezeau, Laurence

    2015-04-01

    Finding kinetic equilibria for tangential current layers is a key issue for modeling plasma phenomena such as magnetic reconnection instabilities, for which theoretical and numerical studies have to start from steady-state current layers. Until 2012, all theoretical models (starting with the most famous, the "Harris" model) relied on distribution functions built as single-valued functions of the invariants of the particle trajectories. For a coplanar anti-symmetric magnetic field and in the absence of an electric field, these models could only describe symmetric variations of the plasma, precluding any modeling of "magnetopause-like" layers, which separate two plasmas of different densities and temperatures. Recently, the "BAS" model was presented (Belmont et al., 2012), in which multi-valued functions were taken into account. This new tool is necessary whenever the magnetic field reversal occurs on scales larger than the particle Larmor radii, and it therefore guarantees a logical transition to the MHD modeling of large scales. The BAS model thus provides a new asymmetric equilibrium. It was validated in a hybrid simulation by Aunai et al. (2013), and more recently in a fully kinetic simulation as well. For this original equilibrium to be computed, the magnetic field had to stay coplanar inside the layer. We present here an important generalization in which the magnetic field rotates inside the layer (although restricted to a 180° rotation hitherto). The tangential layers so obtained are thus closer to those encountered at the real magnetopause. This will be necessary, in the future, for comparing the theoretical profiles directly with the experimental ones for the various physical parameters. As before, the equilibrium is presently tested with a hybrid simulation. Belmont, G.; Aunai, N.; Smets, R., Kinetic equilibrium for an asymmetric tangential layer, Physics of Plasmas, Volume 19, Issue 2, 022108, 2012. Aunai, N.; Belmont, G.; Smets, R., First
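
    For orientation, the classical symmetric Harris equilibrium that the BAS model generalises can be written down in a few lines; the parameter values below are hypothetical magnetopause-like numbers, not the paper's.

        import numpy as np

        def harris_sheet(z, B0=20e-9, L=500e3, n0=1e7):
            # Classical Harris equilibrium: B_x(z) = B0*tanh(z/L) with density
            # n(z) = n0*sech^2(z/L), built from distribution functions that are
            # single-valued in the invariants of motion -- the symmetric case
            # that the multi-valued BAS construction generalises.
            Bx = B0 * np.tanh(z / L)
            n = n0 / np.cosh(z / L) ** 2
            return Bx, n

        z = np.linspace(-5, 5, 201) * 500e3   # +/- 5 half-thicknesses
        Bx, n = harris_sheet(z)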

  17. On Intermittent Turbulence Heating of the Solar Wind: Differences between Tangential and Rotational Discontinuities

    Science.gov (United States)

    Wang, Xin; Tu, Chuanyi; He, Jiansen; Marsch, Eckart; Wang, Linghua

    2013-08-01

    The intermittent structures in solar wind turbulence, studied by using measurements from the WIND spacecraft, are identified as being mostly rotational discontinuities (RDs) and rarely tangential discontinuities (TDs) based on the technique described by Smith. Only TD-associated current sheets (TCSs) are found to be accompanied with strong local heating of the solar wind plasma. Statistical results show that the TCSs have a distinct tendency to be associated with local enhancements of the proton temperature, density, and plasma beta, and a local decrease of magnetic field magnitude. Conversely, for RDs, our statistical results do not reveal convincing heating effects. These results confirm the notion that dissipation of solar wind turbulence can take place in intermittent or locally isolated small-scale regions which correspond to TCSs. The possibility of heating associated with RDs is discussed.
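
    A sketch of a Smith-type classification step, given the field on the two sides and a discontinuity normal (from, e.g., minimum variance analysis); the threshold values 0.4 and 0.2 are the commonly used ones and are assumed here rather than taken from the paper.

        import numpy as np

        def classify_discontinuity(B_up, B_down, normal,
                                   bn_thresh=0.4, db_thresh=0.2):
            # TDs: small normal field component, large change in field
            # magnitude; RDs: the opposite.  Both ratios are normalised by
            # the larger of the two field magnitudes.
            B_up, B_down = np.asarray(B_up, float), np.asarray(B_down, float)
            n = np.asarray(normal, float) / np.linalg.norm(normal)
            B_big = max(np.linalg.norm(B_up), np.linalg.norm(B_down))
            bn = abs(np.dot(0.5 * (B_up + B_down), n)) / B_big
            db = abs(np.linalg.norm(B_down) - np.linalg.norm(B_up)) / B_big
            if bn < bn_thresh and db >= db_thresh:
                return "TD"
            if bn >= bn_thresh and db < db_thresh:
                return "RD"
            return "either/neither"

        print(classify_discontinuity([5, 0, 0], [-3, 2, 1], [0, 0, 1]))  # "TD"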

  18. Lung and heart dose volume analyses with CT simulator in tangential field irradiation of breast cancer

    International Nuclear Information System (INIS)

    Das, Indra J.; Cheng, Elizabeth C.; Fowble, Barbara

    1997-01-01

    Objective: Radiation pneumonitis and cardiac effects are directly related to the irradiated lung and heart volumes in the treatment fields. The central lung distance (CLD) from a tangential breast radiograph is known to be a significant indicator of ipsilateral irradiated lung volume, based on empirically derived functions whose accuracy depends on the actual volume measured in the treatment position. A simple and accurate linear relationship with CLD, and a retrospective analysis of the pattern of lung and heart dose volumes, are presented with actual volume data from a CT simulator in the treatment of breast cancer. Materials and Methods: The heart and lung volumes in the tangential treatment fields were analyzed in 45 consecutive (22 left and 23 right breast) patients referred for CT simulation of the cone-down treatment. All patients in this study were immobilized and placed on an inclined breast board in the actual treatment setup. Both arms were stretched over the head uniformly to avoid collision with the scanner aperture. Radiopaque marks were placed on the medial and lateral borders of the tangential fields. All patients were scanned in spiral mode with a slice width and thickness of 3 mm. The lung and heart structures as well as the irradiated areas were delineated on each slice, and the respective volumes were accurately measured. The treatment beam parameters were recorded, and digitally reconstructed radiographs (DRRs) were generated for the CLD and analysis. Results: Table 1 shows the volume statistics of the patients in this study. There is a large variation in lung and heart volumes among patients. Due to differences in the shape of the right and left lungs, the percent irradiated volumes (PIV) differ. The PIV data have been shown to correlate with CLD via 2nd and 3rd degree polynomials; however, in this study a simple straight-line regression is used, as it provides better confidence than the higher-order polynomials. The regression lines for the left and right
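
    The straight-line regression favoured in the abstract amounts to a one-line fit; the (CLD, PIV) pairs below are hypothetical stand-ins for the patient data.

        import numpy as np

        # Hypothetical (CLD [cm], ipsilateral lung PIV [%]) pairs from DRRs;
        # the study's actual patient data are not reproduced here.
        cld = np.array([1.2, 1.8, 2.3, 2.9, 3.4])
        piv = np.array([4.0, 7.5, 10.1, 13.8, 16.0])

        # Simple straight-line fit, as preferred in the abstract over 2nd/3rd
        # degree polynomials for better confidence at the sample size used.
        slope, intercept = np.polyfit(cld, piv, deg=1)
        print(f"PIV ~ {slope:.2f} * CLD + {intercept:.2f}")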

  19. Smoluchowski thermostat: a realistic introduction of the tangential momentum accommodation coefficient.

    Science.gov (United States)

    Verbeek, Martijn G

    2010-04-01

    This work presents a simulation technique that can be used to compute the thermal interaction between a gas and a cylindrically shaped wall. The method is computationally simple and is based on the Maxwell-Smoluchowski thermal wall scenario often used for the slit pore geometry. A geometric argument is used to find the corresponding thermalization mechanism for the cylindrical confinement. The algorithm serves as a thermostat, which enables one to perform constant-temperature simulations. By means of simple numerical simulations, Smoluchowski's expression for the self-diffusivity Ds is then recovered in reduced units. The tangential momentum accommodation coefficient is interpreted as a coupling constant for the thermostat, similar to the one used for the ordinary Andersen thermostat but applied locally to the boundary-crossing particles.
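
    A minimal sketch of the underlying Maxwell-Smoluchowski wall update for a planar wall with normal along z (the paper's contribution is the geometric adaptation to a cylinder, not reproduced here); the accommodation coefficient alpha plays the role of the thermostat coupling constant.

        import numpy as np

        rng = np.random.default_rng(0)

        def smoluchowski_wall(v, T_wall, mass, alpha, kB=1.380649e-23):
            # With probability alpha (the tangential momentum accommodation
            # coefficient) the velocity is resampled from the wall's Maxwell
            # distribution; otherwise the particle reflects specularly.
            if rng.random() < alpha:
                s = np.sqrt(kB * T_wall / mass)
                v_new = rng.normal(0.0, s, size=3)         # tangential components
                v_new[2] = s * np.sqrt(-2.0 * np.log(rng.random()))  # re-entry flux
                return v_new                                # diffuse, thermalising
            v = np.asarray(v, float).copy()
            v[2] = -v[2]                                    # specular bounce
            return v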

  20. Improved confinement induced by tangential CT injection in STOR-M

    International Nuclear Information System (INIS)

    Xiao, C.; Ding, W.X.; McColl, D.R.; White, D.; Hirose, A.

    2001-01-01

    H-mode-like discharges have been induced by tangential compact torus (CT) injection into the STOR-M tokamak. The improved confinement phase is characterized by an increase in the electron density, a significant reduction in the Hα radiation level, steepening of the edge density profile, suppression of m=2 Mirnov oscillations, and suppression of the floating potential fluctuations. These features are similar to those associated with the H-modes induced by edge turbulent heating or by electrode/limiter biasing in STOR-M. In contrast to the edge-turbulent-heating-induced H-mode in STOR-M, the floating potential at the plasma edge and SOL increases during the CT-injection-induced improved confinement phase. Interaction between the CT and the edge tokamak plasma may be responsible for triggering the H-mode. (author)

  1. CFD analysis of temperature imbalance in superheater/reheater region of tangentially coal-fired boiler

    Science.gov (United States)

    Zainudin, A. F.; Hasini, H.; Fadhil, S. S. A.

    2017-10-01

    This paper presents a CFD analysis of the flow, velocity and temperature distribution in a 700 MW tangentially coal-fired boiler operating in Malaysia. The main objective of the analysis is to gain insights on the occurrences in the boiler so as to understand the inherent steam temperature imbalance problem. The results show that the root cause of the problem comes from the residual swirl in the horizontal pass. The deflection of the residual swirl due to the sudden reduction and expansion of the flow cross-sectional area causes velocity deviation between the left and right side of the boiler. This consequently results in flue gas temperature imbalance which has often caused tube leaks in the superheater/reheater region. Therefore, eliminating the residual swirl or restraining it from being diverted might help to alleviate the problem.

  2. Optimization of verification photographs using the so-called tangential field technique

    International Nuclear Information System (INIS)

    Proske, H.; Merte, H.; Kratz, H.

    1991-01-01

    When irradiating under high-voltage conditions, verification photographs prove difficult to take if the gantry position is not aligned to 0° or 180°, since the patient is being irradiated diagonally. Under these conditions it is extremely difficult to align the X-ray cartridge perpendicular to the central beam of the therapeutic radiation. This results in, among other things, misprojections, so that definite dimensions of portrayed organ structures become practically impossible to determine. This paper describes how we have solved these problems on our high-voltage units (tele-gamma cobalt unit and linear accelerator). By using simple accessories, determination of the dimensions of organ structures, as shown on the verification photographs, is made possible. We illustrate our method using the so-called tangential field technique when irradiating mamma carcinoma. (orig.)

  3. Structural instability of carbon nanotubes embedded in viscoelastic medium and subjected to distributed tangential load

    International Nuclear Information System (INIS)

    Kazemilari, Mohammad Ali; Ghavanloo, Esmaeal; Fazelzadeh, S. Ahmad

    2013-01-01

    In this paper, the nonlocal Euler-Bernoulli beam model is used to predict the static and dynamic structural instability of carbon nanotubes (CNTs) subjected to a distributed tangential compressive load. The CNT is considered to be embedded in a Kelvin-Voigt viscoelastic medium. The equation of motion and boundary conditions are obtained using the extended Hamilton's principle, and the extended Galerkin method is applied in order to transform the resulting equations into a general eigenvalue problem. The derived equations are validated by comparing the results of the new derivations with existing solutions in the literature. The effects of several experimentally interesting boundary conditions on the stability characteristics of the CNT are considered. Moreover, the influences of the small-scale parameter and the material properties of the surrounding viscoelastic medium on the stability boundaries are examined.
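
    After Galerkin discretisation, the stability question reduces to the eigenvalues of M q'' + C q' + K q = 0, with a nonsymmetric K contributed by the follower-type tangential load. The sketch below shows the standard linearisation; the matrices are illustrative placeholders, not the paper's.

        import numpy as np

        def stability_eigenvalues(M, C, K):
            # Substitute q = q0*exp(lam*t) and linearise the quadratic
            # eigenproblem to first order.  Instability (static divergence
            # or flutter) when any Re(lam) > 0.
            n = M.shape[0]
            A = np.block([[np.zeros((n, n)), np.eye(n)],
                          [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
            return np.linalg.eigvals(A)

        # Toy 2-mode system with a nonconservative (nonsymmetric) stiffness:
        M = np.eye(2)
        C = 0.05 * np.eye(2)                      # Kelvin-Voigt medium damping
        K = np.array([[1.0, 2.0], [-2.0, 4.0]])   # follower-load coupling
        lam = stability_eigenvalues(M, C, K)
        print("unstable:", np.any(lam.real > 1e-9))  # -> True (flutter)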

  4. A tangentially viewing VUV TV system for the DIII-D divertor

    International Nuclear Information System (INIS)

    Nilson, D.G.; Ellis, R.; Fenstermacher, M.E.; Brewis, G.; Jalufka, N.

    1998-07-01

    A video camera system capable of imaging VUV emission in the 120-160 nm wavelength range, from the entire divertor region in the DIII-D tokamak, was designed. The new system has a tangential view of the divertor similar to an existing tangential camera system which has produced two-dimensional maps of visible line emission (400-800 nm) from deuterium and carbon in the divertor region. However, the overwhelming fraction of the power radiated by these elements is emitted by resonance transitions in the ultraviolet, namely the C IV line at 155.0 nm and the Ly-α line at 121.6 nm. To image the ultraviolet light with an angular view including the inner wall and outer bias ring in DIII-D, a 6-element optical system (f/8.9) was designed using a combination of reflective and refractive optics. This system provides a spatial resolution of 1.2 cm in the object plane. An intermediate UV image formed in a secondary vacuum is converted to the visible by means of a phosphor plate and detected with a conventional CID camera (30 ms framing rate). A single MgF2 lens serves as the vacuum interface between the primary and secondary vacuums; a second lens must be inserted in the secondary vacuum to correct the focus at 155 nm. Using the same tomographic inversion method employed for the visible TV, the poloidal distribution of the UV divertor light is reconstructed. The grain size of the phosphor plate and the optical system aberrations limit the best-focus spot size to 60 μm at the CID plane. The optical system is designed to withstand 350 °C vessel bakeout, 2 T magnetic fields, and disruption-induced accelerations of the vessel.

  5. Challenging the assumptions for thermal sensation scales

    DEFF Research Database (Denmark)

    Schweiker, Marcel; Fuchs, Xaver; Becker, Susanne

    2016-01-01

    Scales are widely used to assess the personal experience of thermal conditions in built environments. Most commonly, thermal sensation is assessed, mainly to determine whether a particular thermal condition is comfortable for individuals. A seven-point thermal sensation scale has been used extensively, which is suitable for describing a one-dimensional relationship between physical parameters of indoor environments and subjective thermal sensation. However, human thermal comfort is not merely a physiological but also a psychological phenomenon. Thus, it should be investigated how scales for its assessment could benefit from a multidimensional conceptualization. The common assumptions related to the usage of thermal sensation scales are challenged, empirically supported by two analyses. These analyses show that the relationship between temperature and subjective thermal sensation is non

  6. STUDIES OF NGC 6720 WITH CALIBRATED HST/WFC3 EMISSION-LINE FILTER IMAGES. III. TANGENTIAL MOTIONS USING ASTRODRIZZLE IMAGES

    Energy Technology Data Exchange (ETDEWEB)

    O' Dell, C. R. [Department of Physics and Astronomy, Vanderbilt University, Box 1807-B, Nashville, TN 37235 (United States); Ferland, G. J. [Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506 (United States); Henney, W. J. [Centro de Radioastronomia y Astrofisica, Universidad Nacional Autonoma de Mexico, Apartado Postal 3-72, 58090 Morelia, Michaoacan (Mexico); Peimbert, M., E-mail: cr.odell@vanderbilt.edu [Instituto de Astronomia, Universidad Nacional Autonoma de Mexico, Apdo, Postal 70-264, 04510 Mexico D.F. (Mexico)

    2013-06-01

    We have been able to compare with astrometric precision AstroDrizzle-processed images of NGC 6720 (the Ring Nebula) made using two cameras on the Hubble Space Telescope. The time difference between the observations was 12.925 yr. This large time base allowed the determination of tangential velocities of features within this classic planetary nebula. Individual features were measured in [N II] images, as were the dark knots seen in silhouette against background nebular [O III] emission. An image magnification and matching technique was also used to test the accuracy of the usual assumption of homologous expansion. We found that homologous expansion does apply, but the rate of expansion is greater along the major axis of the nebula, which is intrinsically larger than the minor axis. We find that the dark knots expand more slowly than the nebular gas, that the distance to the nebula is 720 pc ±30%, and that the dynamic age of the Ring Nebula is about 4000 yr. The dynamic age is in agreement with the position of the central star on theoretical curves for stars collapsing from the peak of the asymptotic giant branch to being white dwarfs.
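
    The two standard relations behind such measurements, with hypothetical sample numbers chosen only to be consistent with the quoted distance and dynamic age:

        def tangential_velocity_kms(mu_arcsec_per_yr, distance_pc):
            # v_t [km/s] = 4.74 * proper motion [arcsec/yr] * distance [pc]
            return 4.74 * mu_arcsec_per_yr * distance_pc

        def dynamic_age_yr(offset_arcsec, mu_arcsec_per_yr):
            # Homologous expansion: age = angular offset / proper motion rate
            return offset_arcsec / mu_arcsec_per_yr

        mu = 0.009                                   # arcsec/yr (hypothetical)
        print(tangential_velocity_kms(mu, 720.0))    # ~31 km/s
        print(dynamic_age_yr(35.0, mu))              # ~3900 yr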

  7. Transsexual parenthood and new role assumptions.

    Science.gov (United States)

    Faccio, Elena; Bordin, Elena; Cipolletta, Sabrina

    2013-01-01

    This study explores the parental role of transsexuals and compares this to common assumptions about transsexuality and parentage. We conducted semi-structured interviews with 14 male-to-female transsexuals and 14 men, half parents and half non-parents, in order to explore four thematic areas: self-representation of the parental role, the description of the transsexual as a parent, the common representations of transsexuals as a parent, and male and female parental stereotypes. We conducted thematic and lexical analyses of the interviews using Taltac2 software. The results indicate that social representations of transsexuality and parenthood have a strong influence on processes of self-representation. Transsexual parents accurately understood conventional male and female parental prototypes and saw themselves as competent, responsible parents. They constructed their role based on affection toward the child rather than on the complementary role of their wives. In contrast, men's descriptions of transsexual parental roles were simpler and the descriptions of their parental role coincided with their personal experiences. These results suggest that the transsexual journey toward parenthood involves a high degree of re-adjustment, because their parental role does not coincide with a conventional one.

  8. Towards New Probabilistic Assumptions in Business Intelligence

    Directory of Open Access Journals (Sweden)

    Schumann Andrew

    2015-01-01

    Full Text Available One of the main assumptions of mathematical tools in science is represented by the idea of measurability and additivity of reality. For discovering the physical universe, additive measures such as mass, force, energy, temperature, etc. are used. Economics and conventional business intelligence try to continue this empiricist tradition, and in statistical and econometric tools they appeal only to the measurable aspects of reality. However, many important variables of economic systems cannot be observable and additive in principle. These variables can be called symbolic values or symbolic meanings and studied within symbolic interactionism, the theory developed from the work of George Herbert Mead and Herbert Blumer. In statistical and econometric tools of business intelligence we accept only phenomena with causal connections measured by additive measures. In this paper we show that in the social world we deal with symbolic interactions which can be studied by non-additive labels (symbolic meanings or symbolic values). For accommodating the variety of such phenomena we should avoid additivity of basic labels and construct a new probabilistic method in business intelligence based on non-Archimedean probabilities.

  9. Assumptions for the Annual Energy Outlook 1993

    International Nuclear Information System (INIS)

    1993-01-01

    This report is an auxiliary document to the Annual Energy Outlook 1993 (AEO) (DOE/EIA-0383(93)). It presents a detailed discussion of the assumptions underlying the forecasts in the AEO. The energy modeling system is an economic equilibrium system, with component demand modules representing end-use energy consumption by major end-use sector. Another set of modules represents petroleum, natural gas, coal, and electricity supply patterns and pricing. A separate module generates annual forecasts of important macroeconomic and industrial output variables. Interactions among these components of energy markets generate projections of prices and quantities for which energy supply equals energy demand. This equilibrium modeling system is referred to as the Intermediate Future Forecasting System (IFFS). The supply models in IFFS for oil, coal, natural gas, and electricity determine supply and price for each fuel depending upon consumption levels, while the demand models determine consumption depending upon end-use price. IFFS solves for market equilibrium for each fuel by balancing supply and demand to produce an energy balance in each forecast year
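
    The balancing step described above can be illustrated by a one-market toy: iterate on price until supply equals demand. The real IFFS iterates across linked fuel and macroeconomic modules rather than a single curve pair.

        def market_equilibrium(supply, demand, p_lo=1.0, p_hi=100.0, tol=1e-6):
            # Bisection on price until supply(p) = demand(p): a minimal
            # illustration of the per-fuel, per-year balancing that an
            # equilibrium modeling system performs.
            while p_hi - p_lo > tol:
                p = 0.5 * (p_lo + p_hi)
                if supply(p) > demand(p):
                    p_hi = p      # excess supply: price falls
                else:
                    p_lo = p      # excess demand: price rises
            return 0.5 * (p_lo + p_hi)

        # Hypothetical curves: supply rises, demand falls with price.
        price = market_equilibrium(lambda p: 2.0 * p, lambda p: 120.0 - p)
        print(price)  # 40.0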

  10. Dosimetric comparison of treatment planning systems in irradiation of breast with tangential fields

    International Nuclear Information System (INIS)

    Cheng, C.-W.; Das, Indra J.; Tang, Walter; Chang Sha; Tsai, J.-S.; Ceberg, Crister; Gaspie, Barbara de; Singh, Rajinder; Fein, Douglas A.; Fowble, Barbara

    1997-01-01

    Purpose: The objectives of this study are: (1) to investigate the dosimetric differences of the different treatment planning systems (TPS) in breast irradiation with tangential fields, and (2) to study the effect of beam characteristics on dose distributions in tangential breast irradiation with 6 MV linear accelerators from different manufacturers. Methods and Materials: Nine commercial and two university-based TPS are evaluated in this study. The computed tomographic scan of three representative patients, labeled as 'small', 'medium' and 'large' based on their respective chest wall separations in the central axis plane (CAX) were used. For each patient, the tangential fields were set up in each TPS. The CAX distribution was optimized separately with lung correction, for each TPS based on the same set of optimization conditions. The isodose distributions in two other off-axis planes, one 6 cm cephalic and the other 6 cm caudal to the CAX plane were also computed. To investigate the effect of beam characteristics on dose distributions, a three-dimensional TPS was used to calculate the isodose distributions for three different linear accelerators, the Varian Clinac 6/100, the Siemens MD2 and the Philips SL/7 for the three patients. In addition, dose distributions obtained with 6 MV X-rays from two different accelerators, the Varian Clinac 6/100 and the Varian 2100C, were compared. Results: For all TPS, the dose distributions in all three planes agreed qualitatively to within ± 5% for the 'small' and the 'medium' patients. For the 'large' patient, all TPS agreed to within ± 4% on the CAX plane. The isodose distributions in the caudal plane differed by ± 5% among all TPS. In the cephalic plane in which the patient separation is much larger than that in the CAX plane, six TPS correctly calculated the dose distribution showing a cold spot in the center of the breast contour. The other five TPS showed that the center of the breast received adequate dose. Isodose

  11. Philosophy of Technology Assumptions in Educational Technology Leadership

    Science.gov (United States)

    Webster, Mark David

    2017-01-01

    A qualitative study using grounded theory methods was conducted to (a) examine what philosophy of technology assumptions are present in the thinking of K-12 technology leaders, (b) investigate how the assumptions may influence technology decision making, and (c) explore whether technological determinist assumptions are present. Subjects involved…

  12. 41 CFR 60-3.9 - No assumption of validity.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true No assumption of validity... assumption of validity. A. Unacceptable substitutes for evidence of validity. Under no circumstances will the... of its validity be accepted in lieu of evidence of validity. Specifically ruled out are: assumptions...

  13. The zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Etienne, R.S.; Alonso, D.; McKane, A.J.

    2007-01-01

    The neutral theory of biodiversity as put forward by Hubbell in his 2001 monograph has received much criticism for its unrealistic simplifying assumptions. These are the assumptions of functional equivalence among different species (neutrality), the assumption of point mutation speciation, and the

  14. Modelling tangential discontinuities at the Magnetopause with the new Energy Conserving Moment Implicit Method

    Science.gov (United States)

    Boella, Elisabetta; Micera, Alfredo; Gonzalez-Herrero, Diego; Innocenti, Maria Elena; Lapenta, Giovanni

    2017-10-01

    Kinetic modeling of heliospheric plasmas is computationally very challenging due to the simultaneous presence of micro- and macroscopic scales, which are often interconnected. As a consequence, simulations are expensive and hard to deploy within the existing Particle-In-Cell techniques, be they explicit, implicit, or semi-implicit. Very recently we developed a new semi-implicit algorithm which is exactly energy-conserving and, as such, stable and accurate over a wide range of temporal and spatial resolutions. In this work, we describe the main steps that led to this breakthrough and report the implementation of the method in a new massively parallel code, called ECsim. The new approach is then employed to investigate tangential discontinuities (TDs) at the magnetopause. Two- and three-dimensional simulations of TDs are carried out over MHD time scales, retaining a kinetic description for both electrons and ions with a realistic charge-to-mass ratio. The formation of a high-energy tail on the Maxwellian electron distribution function is observed on the Earth side. This leads to a crescent-shaped distribution in the plane perpendicular to the magnetic field, in agreement with recent observations from the Magnetospheric Multiscale (MMS) mission.

  15. Cavitation control on a 2D hydrofoil through a continuous tangential injection of liquid: Experimental study

    Science.gov (United States)

    Timoshevskiy, M. V.; Zapryagaev, I. I.; Pervunin, K. S.; Markovich, D. M.

    2016-10-01

    In this paper, the possibility of active control of a cavitating flow over a 2D hydrofoil that replicates a scaled-down model of a high-pressure hydroturbine guide vane (GV) was tested. The flow manipulation was implemented by continuous tangential liquid injection at different flow rates through a spanwise slot in the foil surface. In the experiments, the hydrofoil was placed in the test channel at an attack angle of 9°. Different cavitation conditions were reached by varying the cavitation number and injection velocity. In order to study the time dynamics and spatial patterns of partial cavities, high-speed imaging was employed. A PIV method was used to measure the mean and fluctuating velocity fields over the hydrofoil. Hydroacoustic measurements were carried out by means of a pressure transducer to identify spectral characteristics of the cavitating flow. It was found that the present control technique is able to modify the partial cavity pattern (or even totally suppress cavitation) in the case of stable sheet cavitation, and to change the amplitude of pressure pulsations in unsteady regimes. The injection technique also makes it possible to significantly influence the spatial distributions of the mean velocity and its turbulent fluctuations over the GV section for non-cavitating flow and sheet cavitation.

  16. Modeling of tangential synthetic jet actuators used for pitching control on an airfoil

    Science.gov (United States)

    Lopez, Omar; Moser, Robert

    2008-11-01

    Pitching moment control in an airfoil can be achieved by trapping concentrations of vorticity close to the trailing edge. Experimental work has shown that synthetic jet actuators can be used to manipulate and control this trapped vorticity. Two different approaches are used to model the action of tangential-blowing synthetic jet actuators mounted near the trailing edge of the airfoil: a detailed model and Reynolds stress synthetic jet (RSSJ) model. The detailed model resolves the synthetic jet dynamics in time while the RSSJ model tries to capture the major effects of the synthetic jet by modeling the changes in the Reynolds stress induced by the actuator, based on experimental PIV data and numerical results from the detailed model. Both models along with the CFD computations in which they are embedded are validated against experimental data. The synthetic jet models have been developed to simulate closed loop flow control of the pitching and plunging of the airfoil, and to this end the RSSJ model is particularly useful since it reduces (by an order of magnitude) the cost of simulating the long-term evolution of the system under control.

  17. Purification of infectious adenovirus in two hours by ultracentrifugation and tangential flow filtration

    International Nuclear Information System (INIS)

    Ugai, Hideyo; Yamasaki, Takahito; Hirose, Megumi; Inabe, Kumiko; Kujime, Yukari; Terashima, Miho; Liu, Bingbing; Tang, Hong; Zhao, Mujun; Murata, Takehide; Kimura, Makoto; Pan, Jianzhi; Obata, Yuichi; Hamada, Hirofumi; Yokoyama, Kazunari K.

    2005-01-01

    Adenoviruses are excellent vectors for gene transfer and are used extensively for high-level expression of the products of transgenes in living cells. The development of simple and rapid methods for the purification of stable infectious recombinant adenoviruses (rAds) remains a challenge. We report here a method for the purification of infectious adenovirus type 5 (Ad5) that involves ultracentrifugation on a cesium chloride gradient at 604,000g for 15 min at 4 °C and tangential flow filtration. The entire procedure requires less than two hours, and infectious Ad5 can be recovered at levels higher than 64% of the number of plaque-forming units (pfu) in the initial crude preparation of viruses. We have obtained titers of infectious purified Ad5 of 1.35 × 10^10 pfu/ml and a ratio of particle titer to infectious titer of seven. The method described here allows the rapid purification of rAds for studies of gene function in vivo and in vitro, as well as the rapid purification of Ad5.

  18. The fall of the Northern Unicorn: tangential motions in the Galactic anticentre with SDSS and Gaia

    Science.gov (United States)

    de Boer, T. J. L.; Belokurov, V.; Koposov, S. E.

    2018-01-01

    We present the first detailed study of the behaviour of the stellar proper motion across the entire Galactic anticentre area visible in the Sloan Digital Sky Survey (SDSS) data. We use recalibrated SDSS astrometry in combination with positions from Gaia DR1 to provide tangential motion measurements with a systematic uncertainty <5 km s⁻¹ for the Main Sequence stars at the distance of the Monoceros Ring. We demonstrate that Monoceros members rotate around the Galaxy with azimuthal speeds of ∼230 km s⁻¹, only slightly lower than that of the Sun. Additionally, both vertical and azimuthal components of their motion are shown to vary considerably but gradually as a function of Galactic longitude and latitude. The stellar overdensity in the anticentre region can be split into two components: the narrow, stream-like ACS and the smooth Ring. According to our analysis, these two structures show very similar but clearly distinct kinematic trends, which can be summarized as follows: the amplitude of the velocity variation in vϕ and vz in the ACS is higher compared to the Ring, whose velocity gradients appear to be flatter. Currently, no available model can explain the entirety of the data in this area of the sky. However, the new accurate kinematic map introduced here should provide strong constraints on the genesis of the Monoceros Ring and the associated substructure.

  19. SIMULATION OF FRICTIONAL DISSIPATION UNDER BIAXIAL TANGENTIAL LOADING WITH THE METHOD OF DIMENSIONALITY REDUCTION

    Directory of Open Access Journals (Sweden)

    Andrey V. Dimaki

    2017-08-01

    Full Text Available The paper is concerned with the contact between elastic bodies subjected to a constant normal load and a varying tangential loading in two directions of the contact plane. For uni-axial in-plane loading, the Cattaneo-Mindlin superposition principle can be applied even if the normal load is not constant but varies as well. However, this is generally not the case if the contact is periodically loaded in two perpendicular in-plane directions. The applicability of the Cattaneo-Mindlin superposition principle guarantees the applicability of the method of dimensionality reduction (MDR), which in the case of uni-axial in-plane loading has the same accuracy as the Cattaneo-Mindlin theory. In the present paper we investigate whether it is possible to generalize the procedure used in the MDR to bi-axial in-plane loading. By comparing the MDR results with a complete three-dimensional numerical solution, we arrive at the conclusion that an exact mapping is not possible. However, the inaccuracy of the MDR solution is of the same order of magnitude as the inaccuracy of the Cattaneo-Mindlin theory itself. This means that the MDR can also be used as a good approximation for bi-axial in-plane loading.
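
    A sketch of the MDR machinery for the uni-axial case, where the mapping is exact: a parabolic indenter becomes a 1D bed of independent springs, and per-spring Coulomb saturation reproduces Cattaneo-Mindlin partial slip. All parameter values are illustrative.

        import numpy as np

        def mdr_tangential_force(d, u_path, R=1e-2, E_star=1e8, G_star=8e7,
                                 mu=0.3, n=601, half_width=3e-4):
            # 1D bed of independent springs (normal stiffness E*dx, tangential
            # G*dx) indented by the equivalent profile g(x) = x^2/R to depth d.
            # Under the prescribed tangential displacement history u_path each
            # spring sticks until Coulomb's condition |fx| = mu*fz makes it slip.
            x = np.linspace(-half_width, half_width, n)
            dx = x[1] - x[0]
            uz = np.maximum(d - x**2 / R, 0.0)          # normal spring compression
            fz = E_star * dx * uz                       # normal spring forces
            slip_limit = mu * fz / (G_star * dx)        # max tangential stretch
            ux = np.zeros(n)                            # tangential spring stretch
            u_prev, Fx = 0.0, []
            for u in u_path:
                ux += (u - u_prev) * (uz > 0)           # stick: follow the body
                ux = np.clip(ux, -slip_limit, slip_limit)  # slip where Coulomb caps
                Fx.append(G_star * dx * ux.sum())
                u_prev = u
            return np.array(Fx)

        # Load up, reverse, reload: traces the frictional hysteresis loop.
        u_hist = np.concatenate([np.linspace(0, 2e-6, 50),
                                 np.linspace(2e-6, -2e-6, 100)])
        Fx = mdr_tangential_force(d=1e-6, u_path=u_hist)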

  20. Contact models of repaired articular surfaces: influence of loading conditions and the superficial tangential zone.

    Science.gov (United States)

    Owen, John R; Wayne, Jennifer S

    2011-07-01

    The superficial tangential zone (STZ) plays a significant role in normal articular cartilage's ability to support loads and retain fluids. To date, tissue engineering efforts have not replicated normal STZ function in cartilage repairs. This finite element study examined the STZ's role in normal and repaired articular surfaces under different contact conditions. Contact area and pressure distributions were allowed to change with time, tension-compression nonlinearity modeled collagen behavior in the STZ, and nonlinear geometry was incorporated to accommodate finite deformation. Responses to loading via impermeable and permeable rigid surfaces were compared to loading via normal cartilage, a more physiologic condition, anticipating the two rigid loading surfaces would bracket that of normal. For models loaded by normal cartilage, an STZ placed over the inferior repair region reduced the short-term axial compression of the articular surface by 15%, when compared to a repair without an STZ. Covering the repair with a normal STZ shifted the flow patterns and strain levels back toward that of normal cartilage. Additionally, reductions in von Mises stress (21%) and an increase in fluid pressure (13%) occurred in repair tissue under the STZ. This continues to show that STZ properties of sufficient quality are likely critical for the survival of transplanted constructs in vivo. However, response to loading via normal cartilage did not always fall within ranges predicted by the rigid surfaces. Use of more physiologic contact models is recommended for more accurate investigations into properties critical to the success of repair tissues.

  1. A tangentially viewing visible TV system for the DIII-D divertor

    International Nuclear Information System (INIS)

    Fenstermacher, M.E.; Meyer, W.H.; Wood, R.D.; Nilson, D.G.; Ellis, R.; Brooks, N.H.

    1997-01-01

    A video camera system has been installed on the DIII-D tokamak for two-dimensional spatial studies of line emission in the lower divertor region. The system views the divertor tangentially at approximately the height of the X point through an outer port. At the tangency plane, the entire divertor from the inner wall to outside the DIII-D bias ring is viewed with spatial resolution of ∼1 cm. The image contains information from ∼90 deg of toroidal angle. In a recent upgrade, remotely controllable filter changers were added which have produced images from nominally identical discharges using different spectral lines. Software was developed to calculate the response function matrix of the optical system using distributed computing techniques and assuming toroidal symmetry. Standard sparse matrix algorithms are then used to invert the three-dimensional images onto a poloidal plane. Spatial resolution of the inverted images is 2 cm; higher resolution simply increases the size of the response function matrix. Initial results from a series of experiments with multiple identical discharges show that the emission from CII and CIII, which appears along the inner scrape-off layer above and below the X point during ELMing H mode, moves outward and becomes localized near the X point in radiative divertor operation induced by deuterium injection. copyright 1997 American Institute of Physics
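
    The inversion step reduces to sparse least squares once the response matrix is in hand; in the sketch below a random sparse matrix stands in for the ray-traced response function, so only the solver pattern is meaningful.

        import numpy as np
        from scipy.sparse import random as sprandom
        from scipy.sparse.linalg import lsqr

        # With toroidal symmetry the camera image g (pixels) is related to the
        # poloidal-plane emissivity f by a sparse response matrix R, g = R f,
        # and a standard sparse least-squares solver recovers f.
        n_pixels, n_cells = 4000, 900         # image pixels, 2 cm poloidal grid
        R = sprandom(n_pixels, n_cells, density=0.01, random_state=0, format="csr")
        f_true = np.abs(np.random.default_rng(0).normal(size=n_cells))
        g = R @ f_true                         # synthetic camera image

        f_rec = lsqr(R, g, damp=1e-3)[0]       # damped least squares (Tikhonov)
        print(np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))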

  2. Expedient set-up of tangential breast fields with a simple gantry attachment.

    Science.gov (United States)

    Brezovich, I A; Meredith, R F; Weppelmann, B; Pareek, P N; Salter, M M

    1991-09-01

    A novel technique for setting up tangential fields is described. The technique uses a simple device (Breast Aligner) which attaches to the collimator of the treatment unit. The function of the Breast Aligner is similar to conventional front and back pointers except that the beam edge rather than central ray is defined. By delineating beam entrance and exit points at the posterior field edge, the device greatly simplifies and expedites set-up, and enhances precision of port alignment. Additional advantageous features include: (a) the ability to compensate for small inadvertent variations from the initial set-up position or for patient movement between the set-up of opposing ports, (b) the ability to visually check port alignment in the treatment position immediately before irradiation, and (c) decreased chance of human and equipment error by eliminating the need for measurements and calculations at the time of treatment. Our method can be used for SSD or SAD techniques and, with minor adjustment, is applicable for establishment of coplanar cephalad field borders as required at the junction of a supraclavicular field.

  3. Generating compensation designs for tangential breast irradiation with artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Gulliford, S. [Joint Department of Physics, Institute of Cancer Research and Royal Marsden Hospital NHS Trust, Surrey (United Kingdom)]. E-mail: sarahg@icr.ac.uk; Corne, D. [Department of Computer Science, School of Computer Science, Cybernetics and Electronic Engineering, University of Reading, Berkshire (United Kingdom); Rowbottom, C.; Webb, S. [Joint Department of Physics, Institute of Cancer Research and Royal Marsden Hospital NHS Trust, Surrey (United Kingdom)

    2002-01-21

    In this paper we discuss a study comparing an algorithm implemented clinically to design intensity-modulated fields with two artificial neural networks (ANNs) trained to design the same fields. The purpose of the algorithm is to produce compensation for tangential breast radiotherapy in order to improve dose homogeneity. This was achieved by creating intensity-modulated fields to supplement standard wedged fields. Portal image data were used to create thickness maps of the medial and lateral fields, which in turn were used to design the wedged and intensity-modulated fields. The ANNs were developed to design the intensity-modulated fields from the portal image data and corresponding fluence map alone. One used localized groups of portal image pixels related to the fluence map (method 2), and the other used a one-to-one mapping between spatially corresponding pixels (method 3). A dosimetric comparison of the methods was performed by calculating the overall dose distribution. The volume of tissue outside the dose range 95-105% was used to assess dose homogeneity. The average volume outside 95-105%, averaged over 80 cases, was shown to be 2.3% for the algorithm, whilst average values of 9.9% and 13.5% were obtained for methods 2 and 3, respectively. The results of this study demonstrate the ability of an ANN to learn the general shape of compensation required and explore the use of image-based ANNs in the design of intensity-modulated fields. (author)
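
    A sketch of the flavour of method 2, mapping a localized neighbourhood of portal-image pixels to each fluence pixel; the 3×3 window, network size, and synthetic data are assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        portal = rng.random((64, 64))                       # portal image (a.u.)

        def neighbourhoods(img, k=1):
            # Feature vector per pixel: the (2k+1)x(2k+1) window around it.
            pad = np.pad(img, k, mode="edge")
            return np.array([pad[i:i + 2*k + 1, j:j + 2*k + 1].ravel()
                             for i in range(img.shape[0])
                             for j in range(img.shape[1])])

        X = neighbourhoods(portal)                          # (4096, 9) features
        y = 1.0 - 0.5 * portal.ravel()                      # toy target fluence
        model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500,
                             random_state=0).fit(X, y)
        fluence = model.predict(X).reshape(portal.shape)    # designed field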

  4. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    Full Text Available The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix to deal with the large cone angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction which is determined by the CT parameters and the gantry-tilted angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction can provide more scanning flexibility in clinical CT scanning and is efficient in computation. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm can achieve better image quality for gantry-tilted CT image reconstruction.
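
    Conceptually, the proposed filtering step rotates each projection so the chosen direction lies along the detector rows, ramp-filters, and rotates back. The sketch below illustrates only this direction-selection idea, not the full weighting and backprojection of the paper's algorithm; the tilt angle is hypothetical.

        import numpy as np
        from scipy.ndimage import rotate

        def ramp_filter_rows(proj):
            # 1D ramp (|frequency|) filter applied along detector rows via FFT,
            # the filtering step of FDK-type algorithms.
            n = proj.shape[1]
            freqs = np.fft.fftfreq(n)
            return np.real(np.fft.ifft(np.fft.fft(proj, axis=1) * np.abs(freqs),
                                       axis=1))

        def filter_along_direction(proj, angle_deg):
            # Rotate the projection so the filtering direction lies along the
            # rows, ramp-filter, and rotate back.
            tilted = rotate(proj, angle_deg, reshape=False, order=1)
            filtered = ramp_filter_rows(tilted)
            return rotate(filtered, -angle_deg, reshape=False, order=1)

        proj = np.random.default_rng(0).random((128, 128))   # synthetic projection
        out = filter_along_direction(proj, angle_deg=15.0)   # hypothetical tilt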

  5. [Efficacies of treating large area third-degree burns by tangential excision and skin grafting for subcutaneous tissue wounds].

    Science.gov (United States)

    Song, Guodong; Jia, Jun; Ma, Yindong; Shi, Wen; Wang, Fang; Li, Peilong; Gao, Cong; Zuo, Haibin; Fan, Chunjie; Yang, Tao; Wu, Qiuhe; Shao, Yang

    2014-12-02

    To explore the efficacies of treating patients with large area third-degree burns by tangential excision and skin grafting for subcutaneous tissue wounds. From January 2002 to December 2013, the medical records were retrospectively reviewed for 31 consecutive adult patients with a third-degree burn area exceeding 70% who underwent tangential excision and skin grafting on subcutaneous tissue wounds (TESGSTW) for the first time within 7 days postburn at the Burn Center, Affiliated Jinan Central Hospital, Shandong University. For the 31 patients, wounds with relatively intact eschar underwent TESGSTW in stages. A tourniquet was not used on some extremities. The relevant clinical data, including patient condition on admission, causes of death, blood loss from the tangential excision wound, and surgical procedures and efficacies in the cured group, were analyzed. The average age, total burn area, and third-degree burn area of the 31 patients were (32.4 ± 12.8) years, (89.0 ± 6.2)%, and (80.4 ± 7.6)%, respectively. There were inhalation injury (n = 25, 80.6%) and early-stage shock before hospitalization (n = 22, 71.0%). Among the 18 cured patients (58.1%), 2 had a third-degree burn area ≥ 90%. Thirteen patients (41.9%) died, 10 of them at 4 to 19 days postburn. Burn area was a risk factor for burn mortality. Sepsis and multiple organ dysfunction syndrome (MODS) were the major causes of death. Four patients died from early-stage sepsis. Within 14 days postburn, the average blood loss volume per 1% tangential excision area in the non-tourniquet group was slightly higher than that in the tourniquet group, but the difference was not significant. For the 18 cured patients, TESGSTW was performed 41 times. For 14 patients (77.8%), TESGSTW was performed twice. The average time of the first tangential excision was (4.1 ± 0.6) days postburn, the time interval between the first two tangential excisions was (6.4 ± 2.0) days, the first tangential excision area was (33.8 ± 7.6)%, and the accumulated tangential excision area was (58

  6. Contralateral breast doses depending on treatment set-up positions for left-sided breast tangential irradiation

    International Nuclear Information System (INIS)

    Joo, Chan Seong; Park, Su Yeon; Kim, Jong Sik; Choi, Byeong Gi; Chung, Yoon Sun; Park, Won

    2015-01-01

    To evaluate contralateral breast doses in supine and prone positions for tangential irradiation techniques for left-sided breast cancer, we performed measurements of contralateral doses using a human phantom for three plans (conventional technique, field-in-field, and IMRT, with a prescription of 50 Gy/25 fx). For the measurement of contralateral doses we used glass dosimeters at 4 points on the human phantom surface (0 mm, 10 mm, 30 mm, 50 mm). For the position check at every measurement, we took portal images using the EPID and marked the incident points on the human phantom to check the constancy of the incident points. The contralateral doses in the supine position were slightly higher than those in the prone position. In the planning study, contralateral doses in the prone position increased by mean doses of 1.2% to 1.8% at each position, while those in the supine position showed mean dose decreases of 0.8% to 0.9%. The measurements using glass dosimeters showed dose increases (mean: 2.7%, maximum: 4% of the prescribed dose) in the prone position. In addition, the field-in-field and IMRT delivery techniques showed mean doses 3% higher than the conventional technique. We evaluated contralateral breast doses for the supine and prone positions in tangential irradiation. Although we used a humanoid phantom for the planning and measurement comparisons of set-up variation effects on contralateral dose, set-up constancy would likely be worse in a real patient. Therefore, careful selection of the patient set-up for breast tangential irradiation, especially for the left-sided breast, should be considered to avoid unwanted dose increases to the left lung and heart. In conclusion, intensive patient monitoring and improved patient set-up verification efforts are necessary when applying the prone position for tangential irradiation of left-sided breast cancer.

  7. Measurements of beam-ion confinement during tangential beam-driven instabilities in PBX [Princeton Beta Experiment]

    International Nuclear Information System (INIS)

    Heidbrink, W.W.; Kaita, R.; Takahashi, H.; Gammel, G.; Hammett, G.W.; Kaye, S.

    1987-01-01

    During tangential injection of neutral beams into low density tokamak plasmas with β > 1% in the Princeton Beta Experiment (PBX), instabilities are observed that degrade the confinement of beam ions. Neutron, charge-exchange, and diamagnetic loop measurements are examined in order to identify the mechanism or mechanisms responsible for the beam-ion transport. The data suggest a resonant interaction between the instabilities and the parallel energetic beam ions. Evidence for some nonresonant transport also exists

  8. Flow field and thermal characteristics in a model of a tangentially fired furnace under different conditions of burner tripping

    Science.gov (United States)

    Habib, M. A.; Ben-Mansour, R.; Antar, M. A.

    2005-08-01

    Tangentially fired furnaces are vortex-combustion units and are widely used in steam generators of industrial plants. The present study provides a numerical investigation of the problem of turbulent reacting flows in a model furnace of a tangentially fired boiler. The importance of this problem is mainly due to its relation to large boiler furnaces used in thermal power plants. In the present work, calculations of the flow field, temperature and species concentration-contour maps in a tangentially fired model furnace are provided. The safety of these furnaces requires that a burner be tripped (its fuel cut off) if the flame is extinguished. Therefore, the present work provides an investigation of the influence of the number of tripped burners on the characteristics of the flow and thermal fields. The details of the flow, thermal and combustion fields are obtained from the solution of the conservation equations of mass, momentum and energy and transport equations for scalar variables, in addition to the equations of the turbulence model. Available experimental measurements were used for validating the calculation procedure. The results show that the vortex created by the pressure gradient at the furnace center is only influenced when at least two burners are tripped. However, the temperature distributions are significantly distorted by tripping any of the burners. Regions of very high temperature close to the furnace walls appear as a result of tripping the fuel in one or two of the burners. Calculated heat fluxes along the furnace walls are presented.

  9. Continuous countercurrent tangential chromatography for mixed mode post-capture operations in monoclonal antibody purification.

    Science.gov (United States)

    Dutta, Amit K; Fedorenko, Dmitriy; Tan, Jasmine; Costanzo, Joseph A; Kahn, David S; Zydney, Andrew L; Shinkazh, Oleg

    2017-08-18

    Continuous Countercurrent Tangential Chromatography (CCTC) has been shown to offer significant advantages over column chromatography, including higher productivity, lower operational pressure, a disposable flow path, and lower resin use. Previous applications of CCTC have been limited to initial capture of monoclonal antibodies (mAb) from clarified cell culture harvest. In the present article, a CCTC system was designed and tested for a post-capture antibody purification step. Mixed mode cation exchange-hydrophobic interaction chromatography resins with two different particle sizes were used to reduce host cell protein (HCP), leached protein A, DNA, and aggregates from a mAb stream after a protein A operation. Product output from CCTC was obtained at a steady-state concentration, in sharp contrast to the periodic output of product in multi-column systems. The results show a productivity of up to 101 g of mAb/L of resin/h, which is 10× higher than in a batch column. A 5% yield increase (95% with CCTC vs. 90% in a batch column) resulted from optimizing elution pH within a narrow operational window (pH 4-4.5). Contaminant removal was found to be similar to conventional column performance. Data obtained with the smaller particle size resin showed faster binding kinetics, leading to reduced CCTC system volume and increased productivity. Buffer and water usage were modeled to show the potential for in-line mixing and buffer tank volume reduction. The experimental results were used to perform a scale-up exercise that predicts a compact CCTC flow path for 500 and 2000 L batches using commercially available membranes. These results demonstrate the potential of using CCTC for post-capture operations as an alternative to packed bed chromatography, and provide a framework for the design and development of an integrated continuous bioprocessing platform based on CCTC technology. Copyright © 2017 Elsevier B.V. All rights reserved.
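
    A quick unit check, not from the paper: the productivity figure above is simply mass of antibody processed per litre of resin per hour. A minimal Python sketch with made-up numbers (the 505 g / 0.5 L / 10 h combination is purely illustrative):

        # Productivity as quoted above: grams of mAb per litre of resin per hour.
        # All numbers are invented for illustration, not taken from the study.
        def productivity_g_per_l_per_h(mass_mab_g, resin_volume_l, hours):
            return mass_mab_g / (resin_volume_l * hours)

        print(productivity_g_per_l_per_h(505.0, 0.5, 10.0), "g/L/h")  # -> 101.0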

  10. VITRECTOMY FOR INTERMEDIATE AGE-RELATED MACULAR DEGENERATION ASSOCIATED WITH TANGENTIAL VITREOMACULAR TRACTION: A CLINICOPATHOLOGIC CORRELATION.

    Science.gov (United States)

    Ziada, Jean; Hagenau, Felix; Compera, Denise; Wolf, Armin; Scheler, Renate; Schaumberger, Markus M; Priglinger, Siegfried G; Schumann, Ricarda G

    2018-03-01

    To describe the morphologic characteristics of the vitreomacular interface in intermediate age-related macular degeneration associated with tangential traction due to premacular membrane formation, and to correlate them with optical coherence tomography (OCT) findings and clinical data. Premacular membrane specimens were removed sequentially with the internal limiting membrane from 27 eyes of 26 patients with intermediate age-related macular degeneration during standard vitrectomy. Specimens were processed for immunocytochemical staining of epiretinal cells and extracellular matrix components. Ultrastructural analysis was performed using transmission electron microscopy. Spectral domain optical coherence tomography images and patient charts were evaluated retrospectively. Immunocytochemistry revealed hyalocytes and myofibroblasts as the predominant cell types. Ultrastructural analysis demonstrated evidence of vitreoschisis in all eyes. Myofibroblasts with contractile properties were observed to span between folds of the internal limiting membrane and vitreous cortex collagen. Retinal pigment epithelial cells or inflammatory cells were not detected. Mean visual acuity (Snellen) showed significant improvement from 20/72 ± 20/36 to 20/41 ± 20/32 (P < 0.001) after a mean follow-up period of 19 months (median, 17 months). During this period, none of the eyes required anti-vascular endothelial growth factor therapy. Fibrocellular premacular proliferation in intermediate age-related macular degeneration predominantly consists of vitreous collagen, hyalocytes, and myofibroblasts with contractile properties. Vitreoschisis and vitreous-derived cells appear to play an important role in traction formation in this subgroup of eyes. In patients with intermediate age-related macular degeneration and a contractile premacular membrane, release of traction by vitrectomy with internal limiting membrane peeling results in significant functional and anatomical improvement.

  11. Detecting tangential dislocations on planar faults from traction free surface observations

    International Nuclear Information System (INIS)

    Ionescu, Ioan R; Volkov, Darko

    2009-01-01

    We propose in this paper robust reconstruction methods for tangential dislocations on planar faults. We assume that only surface observations are available and that a traction-free condition applies at that surface. This study is an extension to the full three dimensions of Ionescu and Volkov (2006 Inverse Problems 22 2103). We also explore in the present paper the possibility of detecting slow slip events (such as silent earthquakes, or earthquake nucleation phases) from GPS observations. Our study makes extensive use of an asymptotic estimate for the observed surface displacement. This estimate is first used to derive what we call the moments reconstruction method. It is then also used for finding necessary conditions for a surface displacement field to have been caused by a slip on a fault. These conditions lead to the introduction of two parameters: the activation factor and the confidence index. They can be computed from the surface observations in a robust fashion, and they indicate whether a measured displacement field is due to an active fault. We also derive a second, combined reconstruction technique blending least-squares minimization and the moments method. We carefully assess how our reconstruction method is affected by the sensitivity of the observation apparatus and by the step size of the grid of surface observation points. The maximum permissible step size for such a grid is computed for different values of fault depth and orientation. Finally we present numerical examples of reconstruction of faults. We demonstrate that our combined method is sharp, robust and computationally inexpensive. We also note that this method performs satisfactorily for shallow faults, despite the fact that our asymptotic formula deteriorates in that case

  12. Injector Element which Maintains a Constant Mean Spray Angle and Optimum Pressure Drop During Throttling by Varying the Geometry of Tangential Inlets

    Science.gov (United States)

    Trinh, Huu P. (Inventor); Myers, William Neill (Inventor)

    2014-01-01

    A method for determining the optimum inlet geometry of a liquid rocket engine swirl injector includes obtaining a throttleable level phase value, volume flow rate, chamber pressure, liquid propellant density, inlet injector pressure, desired target spray angle and desired target optimum delta pressure value between an inlet and a chamber for a plurality of engine stages. The tangential inlet area for each throttleable stage is calculated. The correlation between the tangential inlet areas and delta pressure values is used to calculate the spring displacement and variable inlet geometry. An injector designed using the method includes a plurality of geometrically calculated tangential inlets in an injection tube; an injection tube cap with a plurality of inlet slots slidably engages the injection tube. A pressure differential across the injector element causes the cap to slide along the injection tube and variably align the inlet slots with the tangential inlets.
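
    A rough sketch of the kind of stage-sizing calculation the patent describes, assuming (our assumption, not a relation stated in the patent) a standard incompressible orifice equation Q = Cd * A * sqrt(2 * dP / rho) for each throttle stage; the discharge coefficient, flow rates, density and pressure drop below are illustrative:

        import math

        # Tangential inlet area needed to pass a stage's volume flow at the
        # target inlet-to-chamber pressure drop, assuming orifice-like flow.
        def tangential_inlet_area(volume_flow_m3s, delta_p_pa, density_kgm3, cd=0.7):
            return volume_flow_m3s / (cd * math.sqrt(2.0 * delta_p_pa / density_kgm3))

        # Three hypothetical throttle stages at a fixed target pressure drop.
        stages = [(2.0e-3, 5.0e5), (1.2e-3, 5.0e5), (0.6e-3, 5.0e5)]  # (Q m^3/s, dP Pa)
        for q, dp in stages:
            a = tangential_inlet_area(q, dp, density_kgm3=800.0)
            print(f"Q={q:.1e} m^3/s -> total tangential inlet area {a:.3e} m^2")

    Solving this per stage gives the inlet area the sliding cap must expose at each throttle setting, which is the correlation the patent then maps to spring displacement.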

  13. Legal assumptions for private company claim for additional (supplementary) payment

    Directory of Open Access Journals (Sweden)

    Šogorov Stevan

    2011-01-01

    Full Text Available The subject matter of analysis in this article is the legal assumptions which must be met in order for a private company to claim additional payments. After introductory remarks, the discussion focuses on the existence of provisions regarding additional payments in the formation contract, or in a general resolution of the shareholders' meeting, as the starting point for the company's claim. The second assumption is a concrete resolution of the shareholders' meeting which creates individual obligations for additional payments. The third assumption is defined as definiteness regarding the sum of the payment and its due date. The sending of the claim by the relevant company body is set as the fourth legal assumption for the realization of the company's right to claim additional payments from a member of the private company.

  14. Systematic analysis of silver nanoparticle ionic dissolution by tangential flow filtration: toxicological implications.

    Science.gov (United States)

    Maurer, Elizabeth I; Sharma, Monita; Schlager, John J; Hussain, Saber M

    2014-11-01

    In the field of toxicology of nanomaterials, scientists have not clearly determined if the observed toxicological events are due to the nanoparticles (NPs) themselves, to the dissolution of ions released into the biophysiological environment, or to both phenomena in combination, based upon their bioregional and temporal occurrence during exposure. Consequently, research involving the toxicological analysis of silver NPs (Ag-NPs) has shifted towards assessment of 'nanosized' silver in comparison to its solvated 'ionic' counterpart. Current literature suggests that dissolution of ions from Ag-NPs may play a key role in toxicity; however, the present assessment methodology to separate ions from NPs still requires improvement before a definitive cause of toxicity can be determined. Recently, centrifugation-based techniques have been employed to obtain solvated ions from the NP solution, but this approach leads to NP agglomeration, making further toxicological analysis difficult. Additionally, extremely small NPs are retained in the supernatant even after ultracentrifugation, leading to incomplete separation of ions from their respective NPs. To address these complex toxicology issues we applied enhanced separation techniques with the aim of studying levels of ions originating from the Ag-NPs, using separation by a recirculating tangential flow filtration system. This system uses a unique diffusion-driven filtration method that retains large particles within the continuous flow path, while allowing the solution (ions) to pass through molecular filters by lateral diffusion separation. Use of this technique provides reproducible separation of NPs from their solvated ions, which permits further quantification using inductively coupled plasma mass spectrometry or comparative use in bioassay exposures to biological systems. In this study, we thoroughly characterised NPs in biologically relevant solutions to understand the dissolution of Ag-NPs (10 and

  15. 40 CFR 264.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... FACILITIES Financial Requirements § 264.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure, post-closure care, or... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  16. 40 CFR 261.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... Excluded Hazardous Secondary Materials § 261.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure or liability... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  17. 40 CFR 265.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ..., STORAGE, AND DISPOSAL FACILITIES Financial Requirements § 265.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the... 40 Protection of Environment 25 2010-07-01 2010-07-01 false State assumption of responsibility...

  18. 40 CFR 144.66 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) UNDERGROUND INJECTION CONTROL PROGRAM Financial Responsibility: Class I Hazardous Waste Injection Wells § 144.66 State assumption of responsibility. (a) If a State either assumes legal... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State assumption of responsibility...

  19. 40 CFR 267.150 - State assumption of responsibility.

    Science.gov (United States)

    2010-07-01

    ... STANDARDIZED PERMIT Financial Requirements § 267.150 State assumption of responsibility. (a) If a State either assumes legal responsibility for an owner's or operator's compliance with the closure care or liability... 40 Protection of Environment 26 2010-07-01 2010-07-01 false State assumption of responsibility...

  20. 40 CFR 761.2 - PCB concentration assumptions for use.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false PCB concentration assumptions for use..., AND USE PROHIBITIONS General § 761.2 PCB concentration assumptions for use. (a)(1) Any person may..., oil-filled cable, and rectifiers whose PCB concentration is not established contain PCBs at < 50 ppm...

  1. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    We propose a class of finite state systems of synchronizing distributed processes, where processes make assumptions at local states about the state of other processes in the system. This constrains the global states of the system to those where assumptions made by a process about another are compatible with the ...

  2. Basic assumptions in statistical analyses of data in biomedical ...

    African Journals Online (AJOL)

    If one or more assumptions are violated, an alternative procedure must be used to obtain valid results. This article aims at highlighting some basic assumptions in statistical analyses of data in biomedical sciences. Keywords: samples, independence, non-parametric, parametric, statistical analyses. Int. J. Biol. Chem. Sci. Vol.

  3. 29 CFR 1607.9 - No assumption of validity.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false No assumption of validity. 1607.9 Section 1607.9 Labor... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.9 No assumption of validity. A. Unacceptable substitutes for evidence of validity. Under no circumstances will the general reputation of a test or other...

  4. PFP issues/assumptions development and management planning guide

    International Nuclear Information System (INIS)

    SINCLAIR, J.C.

    1999-01-01

    The PFP Issues/Assumptions Development and Management Planning Guide presents the strategy and process used for the identification, allocation, and maintenance of an Issues/Assumptions Management List for the Plutonium Finishing Plant (PFP) integrated project baseline. Revisions to this document will include, as attachments, the most recent version of the Issues/Assumptions Management List, both open and current issues/assumptions (Appendix A), and closed or historical issues/assumptions (Appendix B). This document is intended to be a Project-owned management tool. As such, this document will periodically require revisions resulting from improvements of the information, processes, and techniques as now described. Revisions that suggest improved processes will only require PFP management approval

  5. SU-E-T-309: Tangential Modulated Arc Therapy: A Novel Technique for the Treatment of Superficial Disease

    Energy Technology Data Exchange (ETDEWEB)

    Hadsell, M; Chin, E; Li, R; Xing, L; Bush, K [Stanford University Cancer Center, Stanford, CA (United States)

    2014-06-01

    Purpose: We propose a new type of treatment that employs a modulated and sliding tangential photon field to provide superior coverage of superficial targets when compared to other commonly employed methods while drastically reducing dose to the underlying sensitive structures often present in these cases. Methods: Modulated treatment plans were formulated for a set of three representative cases. The first was a revised treatment of a scalp sarcoma, while the second was a treatment of a right posterior chest wall sarcoma. For these cases, asymmetric jaw placement, angular limitations, and central isocenter placements were used to force the optimization algorithm into finding solutions with beamlines that were not perpendicular to the body surface. The final case targeted the chest wall of a breast cancer patient, in which standard treatments were compared to the use of modulated fields with multiple isocenters along the chest wall. Results: When compared with unrestricted modulated arcs, the tangential arc scalp treatment reduced the max and mean doses delivered to the brain by 33 Gy (from 55 to 22 Gy) and 6 Gy (from 14 Gy to 8 Gy), respectively. In the right posterior chest wall case, the V10 in the ipsilateral lung was kept below 5% while retaining target coverage of over 97% at the prescription (Rx) dose of 45 Gy. For the breast case, the modulated plan achieved reductions in high dose to the ipsilateral lung and heart by a factor of 2–3 when compared to classic laterally opposed tangents and reduced the V5 by 40% when compared to standard modulated arcs. Conclusion: Tangential modulated arc therapy has outperformed the conventional modalities of treatment for superficial lesions used in our clinic. We hope that with the advent of digitally controlled linear accelerators, we can uncover further benefits of this new technique and extend its applicability to a wider section of the patient population.

  6. SU-E-T-309: Tangential Modulated Arc Therapy: A Novel Technique for the Treatment of Superficial Disease

    International Nuclear Information System (INIS)

    Hadsell, M; Chin, E; Li, R; Xing, L; Bush, K

    2014-01-01

    Purpose: We propose a new type of treatment that employs a modulated and sliding tangential photon field to provide superior coverage of superficial targets when compared to other commonly employed methods while drastically reducing dose to the underlying sensitive structures often present in these cases. Methods: Modulated treatment plans were formulated for a set of three representative cases. The first was a revised treatment of a scalp sarcoma, while the second was a treatment of a right posterior chest wall sarcoma. For these cases, asymmetric jaw placement, angular limitations, and central isocenter placements were used to force the optimization algorithm into finding solutions with beamlines that were not perpendicular to the body surface. The final case targeted the chest wall of a breast cancer patient, in which standard treatments were compared to the use of modulated fields with multiple isocenters along the chest wall. Results: When compared with unrestricted modulated arcs, the tangential arc scalp treatment reduced the max and mean doses delivered to the brain by 33 Gy (from 55 to 22 Gy) and 6 Gy (from 14 Gy to 8 Gy), respectively. In the right posterior chest wall case, the V10 in the ipsilateral lung was kept below 5% while retaining target coverage of over 97% at the prescription (Rx) dose of 45 Gy. For the breast case, the modulated plan achieved reductions in high dose to the ipsilateral lung and heart by a factor of 2–3 when compared to classic laterally opposed tangents and reduced the V5 by 40% when compared to standard modulated arcs. Conclusion: Tangential modulated arc therapy has outperformed the conventional modalities of treatment for superficial lesions used in our clinic. We hope that with the advent of digitally controlled linear accelerators, we can uncover further benefits of this new technique and extend its applicability to a wider section of the patient population

  7. Use of the Far Infrared Tangential Interferometer/Polarimeter diagnostic for the study of rf driven plasma waves on NSTX.

    Science.gov (United States)

    Kim, J; Lee, K C; Kaita, R; Phillips, C K; Domier, C W; Valeo, E; Luhmann, N C; Bonoli, P T; Park, H

    2010-10-01

    An rf detection system for waves in the 30 MHz range has been constructed for the Far Infrared Tangential Interferometer/Polarimeter on the National Spherical Torus Experiment (NSTX). It is aimed at monitoring high-frequency density fluctuations driven by 30 MHz high harmonic fast wave fields. The levels of density fluctuations at various radial chords and antenna phase angles can be estimated using the electric field calculated by the TORIC code and the linearized continuity equation for the electron density. In this paper, the experimental arrangement for the detection of the rf signal and preliminary simulation results are discussed.

  8. Mathematical simulation of heat and mass transfer in a cylindrical channel versus the tangential momentum accommodation coefficient

    Science.gov (United States)

    Germider, O. V.; Popov, V. N.

    2017-11-01

    The process of heat and mass transfer in a long cylindrical channel has been considered in terms of the mirror-diffuse model of the Maxwell boundary condition. The Williams equation is used as the basic equation of the process kinetics. A constant temperature gradient is maintained in the channel. The heat and mass fluxes through the cross section of the channel have been calculated as functions of the tangential momentum accommodation coefficient over a wide range of the Knudsen number. The heat flux profiles have been constructed. A comparison with relevant published data has been carried out.
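
    The mirror-diffuse (Maxwell) wall model mentioned above can be pictured with a small Monte Carlo sketch: an incident molecule is re-emitted diffusely from a wall Maxwellian with probability equal to the tangential momentum accommodation coefficient, and specularly otherwise. The gas parameters and coefficient value below are illustrative assumptions, not the paper's:

        import numpy as np

        rng = np.random.default_rng(0)
        K_B, MASS, T_WALL = 1.380649e-23, 6.63e-26, 300.0  # J/K, kg (argon-like), K

        def reflect(v, alpha=0.9):
            """Reflect v = (v_normal, v_tangential) at the wall; alpha is the
            accommodation coefficient of the Maxwell mirror-diffuse model."""
            if rng.random() < alpha:                           # diffuse: thermalize
                s = np.sqrt(K_B * T_WALL / MASS)
                vn = s * np.sqrt(-2.0 * np.log(rng.random()))  # flux-weighted normal speed
                vt = s * rng.normal()                          # Maxwellian tangential speed
                return vn, vt
            return -v[0], v[1]                                 # specular: flip normal only

        vts = [reflect((-300.0, 200.0))[1] for _ in range(100000)]
        print("mean tangential velocity after one wall collision:", np.mean(vts))

    With alpha = 0.9 about one tenth of the incident tangential momentum survives a wall collision on average; this surviving fraction is what couples the accommodation coefficient to the channel's heat and mass fluxes.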

  9. Numerical Analysis of a New Pressure Sensor for Measuring Normal and Tangential Stresses in the Roll Gap

    DEFF Research Database (Denmark)

    Presz, Wojtek P.; Wanheim, Tarras

    2003-01-01

    The paper is in Polish. Original title: "Analiza numeryczna nowego czujnika do pomiaru nacisków i naprężeń stycznych w procesie walcowania" (Numerical analysis of a new sensor for measuring normal pressures and tangential stresses in the rolling process). A new strain gauge sensor for measuring normal and tangential stresses in the contact arc of a rolling process has been designed and constructed. The complicated load history of the sensor results in complicated deformation patterns, and consequently the calibration procedure of the sensor should cover a wide range of loading cases and would thus be very difficult and time-consuming to carry out. As an alternative to this, a FEM simulative experiment has

  10. Assumptions and Policy Decisions for Vital Area Identification Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myungsu; Bae, Yeon-Kyoung; Lee, Youngseung [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    U.S. Nuclear Regulatory Commission and IAEA guidance indicate that certain assumptions and policy questions should be addressed in a Vital Area Identification (VAI) process. Korea Hydro and Nuclear Power conducted a VAI based on the current Design Basis Threat and engineering judgement to identify APR1400 vital areas. Some of the assumptions were inherited from Probabilistic Safety Assessment (PSA), as the sabotage logic model was based on the PSA logic tree and equipment location data. This paper illustrates some important assumptions and policy decisions for the APR1400 VAI analysis. Assumptions and policy decisions can be overlooked at the beginning of a VAI; however, they should be carefully reviewed and discussed among engineers, plant operators, and regulators. Through the APR1400 VAI process, some of the policy concerns and assumptions for analysis were applied based on document research and expert panel discussions. It was also found that more assumptions need to be defined in further studies for other types of nuclear power plants. One of these assumptions is mission time, which was inherited from PSA.

  11. MONITORED GEOLOGIC REPOSITORY LIFE CYCLE COST ESTIMATE ASSUMPTIONS DOCUMENT

    International Nuclear Information System (INIS)

    R.E. Sweeney

    2001-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost (LCC) estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  12. Monitored Geologic Repository Life Cycle Cost Estimate Assumptions Document

    International Nuclear Information System (INIS)

    Sweeney, R.

    2000-01-01

    The purpose of this assumptions document is to provide general scope, strategy, technical basis, schedule and cost assumptions for the Monitored Geologic Repository (MGR) life cycle cost estimate and schedule update incorporating information from the Viability Assessment (VA), License Application Design Selection (LADS), 1999 Update to the Total System Life Cycle Cost (TSLCC) estimate and from other related and updated information. This document is intended to generally follow the assumptions outlined in the previous MGR cost estimates and as further prescribed by DOE guidance

  13. Impact of different breathing conditions on the dose to surrounding normal structures in tangential field breast radiotherapy

    International Nuclear Information System (INIS)

    Prabhakar, Ramachandran; Tharmar, Ganesh; Julka, Pramod K.; Rath, Goura K.; Joshi, Rakesh C.; Bansal, Anil K.; Bisht, R.K.; Gopishankar, N.; Pant, G.S.; Thulkar, S.

    2007-01-01

    Cardiac toxicity is an important concern in tangential field breast radiotherapy. In this study, the impact of three different breathing conditions on the dose to surrounding normal structures such as the heart, ipsilateral lung, liver and contralateral breast has been assessed. Thirteen patients with early breast cancer who underwent conservative surgery (nine left-sided and four right-sided breast cancer patients) were selected for this study. Spiral CT scans were performed for all three breathing conditions, viz., deep inspiration breath-hold (DIBH), normal breathing phase (NB) and deep expiration breath-hold (DEBH). Conventional tangential fields were placed on the 3D-CT dataset, and parameters such as V30 (volume covered by dose >30 Gy) for heart, V20 (volume covered by dose >20 Gy) for ipsilateral lung and V50 (volume receiving >50% of the prescription dose) for heart and liver were studied. The average reduction in cardiac dose due to DIBH was 64% (range: 26.5-100%) and 74% (range: 37-100%) as compared to NB and DEBH, respectively. For right breast cancer, DIBH resulted in excellent liver sparing. Our results indicate that in patients with breast cancer, delivering radiation under deep inspiration breath-hold can considerably reduce the dose to the surrounding normal structures, particularly the heart and liver. (author)
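
    For readers unfamiliar with the VX notation above, a minimal sketch (not the authors' code; the voxel doses are invented) of how such dose-volume metrics are computed from equal-volume voxel doses:

        import numpy as np

        def v_dose(organ_dose_gy, threshold_gy):
            """Percent of organ volume receiving more than threshold_gy,
            assuming equal-volume voxels."""
            return 100.0 * np.mean(np.asarray(organ_dose_gy) > threshold_gy)

        heart = np.array([2.0, 5.0, 18.0, 31.0, 40.0, 12.0])  # made-up voxel doses (Gy)
        print(f"Heart V30 = {v_dose(heart, 30.0):.1f}%")       # fraction above 30 Gy
        # The V50 used above is relative: threshold = 50% of the prescription dose.
        print(f"Heart V50%Rx = {v_dose(heart, 0.5 * 50.0):.1f}%")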

  14. Tangential Flow Ultrafiltration Allows Purification and Concentration of Lauric Acid-/Albumin-Coated Particles for Improved Magnetic Treatment.

    Science.gov (United States)

    Zaloga, Jan; Stapf, Marcus; Nowak, Johannes; Pöttler, Marina; Friedrich, Ralf P; Tietze, Rainer; Lyer, Stefan; Lee, Geoffrey; Odenbach, Stefan; Hilger, Ingrid; Alexiou, Christoph

    2015-08-14

    Superparamagnetic iron oxide nanoparticles (SPIONs) are frequently used for drug targeting, hyperthermia and other biomedical purposes. Recently, we have reported the synthesis of lauric acid-/albumin-coated iron oxide nanoparticles SEON(LA-BSA), which were synthesized using excess albumin. For optimization of magnetic treatment applications, SPION suspensions need to be purified of excess surfactant and concentrated. Conventional methods for the purification and concentration of such ferrofluids often involve high shear stress and low purification rates for macromolecules, like albumin. In this work, removal of albumin by low shear stress tangential ultrafiltration and its influence on SEON(LA-BSA) particles was studied. Hydrodynamic size, surface properties and, consequently, colloidal stability of the nanoparticles remained unchanged by filtration or concentration up to four-fold (v/v). Thereby, the saturation magnetization of the suspension can be increased from 446.5 A/m up to 1667.9 A/m. In vitro analysis revealed that cellular uptake of SEON(LA-BSA) changed only marginally. The specific absorption rate (SAR) was not greatly affected by concentration. In contrast, the maximum temperature Tmax in magnetic hyperthermia is greatly enhanced from 44.4 °C up to 64.9 °C by the concentration of the particles up to 16.9 mg/mL total iron. Taken together, tangential ultrafiltration is feasible for purifying and concentrating complex hybrid coated SPION suspensions without negatively influencing specific particle characteristics. This enhances their potential for magnetic treatment.

  15. Purification of monoclonal antibodies from clarified cell culture fluid using Protein A capture continuous countercurrent tangential chromatography

    Science.gov (United States)

    Dutta, Amit K.; Tran, Travis; Napadensky, Boris; Teella, Achyuta; Brookhart, Gary; Ropp, Philip A.; Zhang, Ada W.; Tustian, Andrew D.; Zydney, Andrew L.; Shinkazh, Oleg

    2015-01-01

    Recent studies using simple model systems have demonstrated that Continuous Countercurrent Tangential Chromatography (CCTC) has the potential to overcome many of the limitations of conventional Protein A chromatography using packed columns. The objective of this work was to optimize and implement a CCTC system for monoclonal antibody purification from clarified Chinese Hamster Ovary (CHO) cell culture fluid using a commercial Protein A resin. Several improvements were introduced to the previous CCTC system including the use of retentate pumps to maintain stable resin concentrations in the flowing slurry, the elimination of a slurry holding tank to improve productivity, and the introduction of an “after binder” to the binding step to increase antibody recovery. A kinetic binding model was developed to estimate the required residence times in the multi-stage binding step to optimize yield and productivity. Data were obtained by purifying two commercial antibodies from two different manufacturers, one with low titer (~0.67 g/L) and one with high titer (~6.9 g/L), demonstrating the versatility of the CCTC system. Host cell protein removal, antibody yields and purities were similar to those obtained with conventional column chromatography; however, the CCTC system showed much higher productivity. These results clearly demonstrate the capabilities of continuous countercurrent tangential chromatography for the commercial purification of monoclonal antibody products. PMID:25747172
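
    The kinetic binding model itself is not reproduced in the abstract, but the style of sizing calculation can be sketched under a strong simplification of our own: pseudo-first-order uptake far from resin saturation, so that antibody escaping one well-mixed stage is c_out = c_in * exp(-k * tau) and N identical stages give yield Y = 1 - exp(-N * k * tau). The rate constant and targets below are illustrative:

        import math

        # Residence time per stage needed to reach a target capture yield with
        # n identical binding stages, under the first-order simplification above.
        def residence_time_per_stage(target_yield, n_stages, k_per_min):
            return -math.log(1.0 - target_yield) / (n_stages * k_per_min)

        for n in (2, 3, 4):
            tau = residence_time_per_stage(0.95, n, k_per_min=1.5)  # k is invented
            print(f"{n} stages -> ~{tau:.2f} min per stage for 95% capture")

    More stages at a fixed rate constant shorten the residence time each stage needs for the same overall capture, which is the trade-off such a model lets one optimize.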

  16. Public Service Co. of Colorado's NOx reduction program for pulverized coal tangentially fired 165 and 370MW utility boilers

    International Nuclear Information System (INIS)

    Hawley, R.R.; Collette, R.J.; Grusha, J.

    1990-01-01

    Public Service Co. of Colorado has made a voluntary corporate commitment to reduce NOx emissions by 20% from their major boilers in the Denver Metro Area before the end of 1991. Their two largest units in the Metro Area were chosen for retrofit with in-furnace low NOx technology - Valmont No. 5 and Cherokee No. 4. Both of these units are tangential coal fired boilers manufactured by ABB Combustion Engineering. As of this writing, Valmont No. 5 has been completed and is discussed herein. Cherokee No. 4 is scheduled to complete its Performance Guarantee testing in December of 1990. The topics of this paper include the commitment to NOx reduction, unit description, project schedule, overview of tangential firing system, pulverized coal NOx formation, low NOx concentric firing system, contribution of overfire air for NOx control, contribution of offset air nozzle tips for NOx control, contribution of flame attachment coal nozzle tips for NOx control, installation experience, performance and testing results

  17. Evaluation of a standard breast tangent technique: a dose-volume analysis of tangential irradiation using three-dimensional tools

    International Nuclear Information System (INIS)

    Krasin, Matthew; McCall, Anne; King, Stephanie; Olson, Mary; Emami, Bahman

    2000-01-01

    Purpose: A thorough dose-volume analysis of a standard tangential radiation technique has not been published. We evaluated the adequacy of a tangential radiation technique in delivering dose to the breast and regional lymphatics, as well as dose delivered to underlying critical structures. Methods and Materials: Treatment plans of 25 consecutive women with breast cancer undergoing lumpectomy and adjuvant breast radiotherapy were studied. Patients underwent two-dimensional (2D) treatment planning followed by treatment with standard breast tangents. These 2D plans were reconstructed without modification on our three-dimensional treatment planning system and analyzed with regard to dose-volume parameters. Results: Adequate coverage of the breast (defined as 95% of the target receiving at least 95% of the prescribed dose) was achieved in 16 of 25 patients, with all patients having at least 85% of the breast volume treated to 95% of the prescribed dose. Only 1 patient (4%) had adequate coverage of the Level I axilla, and no patient had adequate coverage of the Level II axilla, Level III axilla, or the internal mammary lymph nodes. Conclusion: Three-dimensional treatment planning is superior in quantification of the dose received by the breast, regional lymphatics, and critical structures. The standard breast tangent technique delivers an adequate dose to the breast but does not therapeutically treat the regional lymph nodes in the majority of patients. If coverage of the axilla or internal mammary lymph nodes is desired, alternate beam arrangements or treatment fields will be necessary

  18. Purification of monoclonal antibodies from clarified cell culture fluid using Protein A capture continuous countercurrent tangential chromatography.

    Science.gov (United States)

    Dutta, Amit K; Tran, Travis; Napadensky, Boris; Teella, Achyuta; Brookhart, Gary; Ropp, Philip A; Zhang, Ada W; Tustian, Andrew D; Zydney, Andrew L; Shinkazh, Oleg

    2015-11-10

    Recent studies using simple model systems have demonstrated that continuous countercurrent tangential chromatography (CCTC) has the potential to overcome many of the limitations of conventional Protein A chromatography using packed columns. The objective of this work was to optimize and implement a CCTC system for monoclonal antibody purification from clarified Chinese Hamster Ovary (CHO) cell culture fluid using a commercial Protein A resin. Several improvements were introduced to the previous CCTC system including the use of retentate pumps to maintain stable resin concentrations in the flowing slurry, the elimination of a slurry holding tank to improve productivity, and the introduction of an "after binder" to the binding step to increase antibody recovery. A kinetic binding model was developed to estimate the required residence times in the multi-stage binding step to optimize yield and productivity. Data were obtained by purifying two commercial antibodies from two different manufacturers, one with low titer (∼0.67 g/L) and one with high titer (∼6.9 g/L), demonstrating the versatility of the CCTC system. Host cell protein removal, antibody yields and purities were similar to those obtained with conventional column chromatography; however, the CCTC system showed much higher productivity. These results clearly demonstrate the capabilities of continuous countercurrent tangential chromatography for the commercial purification of monoclonal antibody products. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. A framework for the organizational assumptions underlying safety culture

    International Nuclear Information System (INIS)

    Packer, Charles

    2002-01-01

    The safety culture of the nuclear organization can be addressed at the three levels of culture proposed by Edgar Schein. The industry literature provides a great deal of insight at the artefact and espoused value levels, although as yet it remains somewhat disorganized. There is, however, an overall lack of understanding of the assumption level of safety culture. This paper describes a possible framework for conceptualizing the assumption level, suggesting that safety culture is grounded in unconscious beliefs about the nature of the safety problem, its solution and how to organize to achieve the solution. Using this framework, the organization can begin to uncover the assumptions at play in its normal operation, decisions and events and, if necessary, engage in a process to shift them towards assumptions more supportive of a strong safety culture. (author)

  20. Different Random Distributions Research on Logistic-Based Sample Assumption

    Directory of Open Access Journals (Sweden)

    Jing Pan

    2014-01-01

    Full Text Available A logistic-based sample assumption is proposed in this paper, with research on different random distributions through this system. It provides an assumption system for logistic-based samples, including its sample space structure. Moreover, the influence of different input random distributions has been studied through this logistic-based sample assumption system. In this paper, three different random distributions (normal, uniform, and beta) are used for testing. The experimental simulations illustrate the relationship between inputs and outputs under the different random distributions. Thereafter, numerical analysis shows that the distribution of the outputs depends on that of the inputs to some extent, and that this assumption system is not an independent-increment process but is quasi-stationary.
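
    One plausible reading of the experiment's shape (an assumption on our part, since the paper's exact system is not spelled out here) is inputs drawn from each distribution pushed through a logistic transform, with the output statistics then compared:

        import numpy as np

        rng = np.random.default_rng(42)

        def logistic(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Three input distributions, roughly matched to the range [-3, 3].
        inputs = {
            "normal":  rng.normal(0.0, 1.0, 100000),
            "uniform": rng.uniform(-3.0, 3.0, 100000),
            "beta":    6.0 * rng.beta(2.0, 5.0, 100000) - 3.0,  # rescaled beta
        }
        for name, x in inputs.items():
            y = logistic(x)
            print(f"{name:8s} output mean={y.mean():.3f} std={y.std():.3f}")

    The output distributions differ in spread and skew, mirroring the observation that the distribution of the outputs depends on that of the inputs to some extent.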

  1. Supporting calculations and assumptions for use in WESF safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hey, B.E.

    1997-03-07

    This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.

  2. Operation Cottage: A Cautionary Tale of Assumption and Perceptual Bias

    Science.gov (United States)

    2015-01-01

    but they can also set a lethal trap for unsuspecting mission planners, decisionmakers, and intelligence analysts. Assumptions are extremely...the planning process, but the planning staff must not become so wedded to their assumptions that they reject or overlook information that is not in...operations specialist who had served as principal planner for the Attu invasion. Major General Charles Corlett was to command the landing force, an

  3. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    DISCOURSES AND THEORETICAL ASSUMPTIONS IN IT PROJECT PORTFOLIO MANAGEMENT: A REVIEW OF THE LITERATURE In recent years, increasing interest has been directed at IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant...... to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations’ IT PPM practices. Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases......: (1) IT PPM as the top management marketplace, (2) IT PPM as the cause of social dilemmas at the lower organizational levels, (3) IT PPM as polity between different organizational interests, (4) IT PPM as power relations that suppress creativity and diversity. Our metaphors can be used by practitioners

  4. Discourses and Theoretical Assumptions in IT Project Portfolio Management

    DEFF Research Database (Denmark)

    Hansen, Lars Kristian; Kræmmergaard, Pernille

    2014-01-01

    articles across various research disciplines. We find and classify a stock of 107 relevant articles into four scientific discourses: the normative, the interpretive, the critical, and the dialogical discourses, as formulated by Deetz (1996). We find that the normative discourse dominates the IT PPM...... to articulate and discuss underlying and conflicting assumptions in IT PPM, serving as a basis for adjusting organizations’ IT PPM practices. Keywords: IT project portfolio management or IT PPM, literature review, scientific discourses, underlying assumptions, unintended consequences, epistemological biases......DISCOURSES AND THEORETICAL ASSUMPTIONS IN IT PROJECT PORTFOLIO MANAGEMENT: A REVIEW OF THE LITERATURE In recent years, increasing interest has been directed at IT project portfolio management (IT PPM). Considering IT PPM an interdisciplinary practice, we conduct a concept-based literature review of relevant...

  5. On the Necessary and Sufficient Assumptions for UC Computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Nielsen, Jesper Buus; Orlandi, Claudio

    2010-01-01

    -transfer protocol for the stand-alone model. Since a KRA where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for the KRA model: any non-trivial KRA is sufficient for UC computation. •  We show......We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the uniform random string model and the key-registration authority model (KRA), and provide new results...... for all of them. Perhaps most interestingly we show that: •  For even the minimal meaningful KRA, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC securely compute any poly-time functionality if there exists a passive secure oblivious...

  6. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Full Text Available Classical Respondent-Driven Sampling (RDS estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.
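
    The effect under study can be reproduced with a toy simulation on an invented population: at small sampling fractions, draws with and without replacement behave almost identically, and the gap (visible here in the standard error of a simple mean estimator) only opens as the fraction grows. This sketch illustrates the general finite-population effect, not the RDS estimators themselves:

        import numpy as np

        rng = np.random.default_rng(1)
        population = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # a skewed trait

        for frac in (0.05, 0.2, 0.4, 0.6):
            n = int(frac * population.size)
            with_rep = [rng.choice(population, n, replace=True).mean() for _ in range(2000)]
            no_rep = [rng.choice(population, n, replace=False).mean() for _ in range(2000)]
            print(f"fraction {frac:.0%}: SE with replacement {np.std(with_rep):.3f}, "
                  f"without {np.std(no_rep):.3f}")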

  7. Evaluating The Markov Assumption For Web Usage Mining

    DEFF Research Database (Denmark)

    Jespersen, S.; Pedersen, Torben Bach; Thorhauge, J.

    2003-01-01

    Web usage mining concerns the discovery of common browsing patterns, i.e., pages requested in sequence, from web logs. To cope with the enormous amounts of data, several aggregated structures based on statistical models of web surfing have appeared, e.g., the Hypertext Probabilistic Grammar (HPG......) model~\\cite{borges99data}. These techniques typically rely on the \\textit{Markov assumption with history depth} $n$, i.e., it is assumed that the next requested page is only dependent on the last $n$ pages visited. This is not always valid, i.e. false browsing patterns may be discovered. However, to our...... knowledge there has been no systematic study of the validity of the Markov assumption wrt.\\ web usage mining and the resulting quality of the mined browsing patterns. In this paper we systematically investigate the quality of browsing patterns mined from structures based on the Markov assumption. Formal...

  8. Evolution of Requirements and Assumptions for Future Exploration Missions

    Science.gov (United States)

    Anderson, Molly; Sargusingh, Miriam; Perry, Jay

    2017-01-01

    NASA programs are maturing technologies, systems, and architectures to enable future exploration missions. To increase fidelity as technologies mature, developers must make assumptions that represent the requirements of a future program. Multiple efforts have begun to define these requirements, including team-internal assumptions, planning for system integration in early demonstrations, and discussions between international partners planning future collaborations. For many detailed life support system requirements, existing NASA documents set limits of acceptable values, but a future vehicle may be constrained in other ways and select a limited range of conditions. Other requirements are effectively set by interfaces or operations, and may be different for the same technology depending on whether the hardware is a demonstration system on the International Space Station or a critical component of a future vehicle. This paper highlights key assumptions representing potential life support requirements and explanations of the driving scenarios, constraints, or other issues behind them.

  9. Numerical investigation of full scale coal combustion model of tangentially fired boiler with the effect of mill ducting

    Science.gov (United States)

    Achim, Daniela; Naser, J.; Morsi, Y. S.; Pascoe, S.

    2009-11-01

    In this paper a full-scale combustion model incorporating the upstream mill ducting of a large tangentially fired boiler with flue gas recirculation was examined numerically. Lagrangian particle tracking was used to determine the coal particle paths, and the Eddy Dissipation Model was used for the analysis of the gas-phase combustion. Moreover, volatiles and gaseous char products given off by the coal particles were modelled by Arrhenius single-phase reactions, and a transport equation was solved for each material given off by the particles. Thermal, prompt, fuel and reburn NOx models with presumed probability density functions were used to model NOx production, and the discrete transfer radiation model was used to model radiative heat transfer. Generally, the findings indicated reasonable agreement with observed qualitative and quantitative data on incident heat flux at the walls. The model developed here could be used for a range of applications in furnace design and optimisation of gas emissions from coal-fired boiler plants.
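
    One ingredient of such models, Lagrangian particle tracking, can be sketched as integrating a coal particle's motion under Stokes drag in a prescribed gas field. The swirling velocity field, particle properties and time step below are illustrative assumptions, not the paper's setup:

        import numpy as np

        RHO_P, D_P, MU = 1300.0, 80e-6, 3.0e-5  # particle density (kg/m^3), diameter (m), gas viscosity (Pa s)
        TAU_P = RHO_P * D_P**2 / (18.0 * MU)    # Stokes relaxation time (s)

        def gas_velocity(x):
            """Toy tangential (swirling) gas field around the furnace axis."""
            r = np.array([-x[1], x[0]])
            return 5.0 * r / (np.linalg.norm(r) + 1e-9)

        x, v = np.array([1.0, 0.0]), np.array([0.0, 0.0])
        dt = 1e-3                                       # small relative to TAU_P
        for _ in range(2000):                           # explicit Euler over 2 s
            v = v + dt * (gas_velocity(x) - v) / TAU_P  # drag toward the gas velocity
            x = x + dt * v
        print("particle position after 2 s:", x, "| speed:", np.linalg.norm(v))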

  10. Large-scale purification of pharmaceutical-grade plasmid DNA using tangential flow filtration and multi-step chromatography.

    Science.gov (United States)

    Sun, Bo; Yu, XiangHui; Yin, Yuhe; Liu, Xintao; Wu, Yongge; Chen, Yan; Zhang, Xizhen; Jiang, Chunlai; Kong, Wei

    2013-09-01

    The demand for pharmaceutical-grade plasmid DNA in vaccine applications and gene therapy has been increasing in recent years. In the present study, a process consisting of alkaline lysis, tangential flow filtration, purification by anion exchange chromatography, hydrophobic interaction chromatography and size exclusion chromatography was developed. The final product met the requirements for pharmaceutical-grade plasmid DNA. The chromosomal DNA content was RNA was not detectable by agarose gel electrophoresis. Moreover, the protein content was bacterial cell paste. The overall yield of the final plasmid DNA reached 48%. Therefore, we have established a rapid and efficient production process for pharmaceutical-grade plasmid DNA. Copyright © 2013 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  11. Changing Assumptions and Progressive Change in Theories of Strategic Organization

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Hallberg, Niklas L.

    2017-01-01

    A commonly held view is that strategic organization theories progress as a result of a Popperian process of bold conjectures and systematic refutations. However, our field also witnesses vibrant debates or disputes about the specific assumptions that our theories rely on, and although these debates...... are often decoupled from the results of empirical testing, changes in assumptions seem closely intertwined with theoretical progress. Using the case of the resource-based view, we suggest that progressive change in theories of strategic organization may come about as a result of scholarly debate and dispute...

  12. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    , propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today......’s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...
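
    The failure mode being targeted is easy to reproduce on synthetic data: under the attribute-value independence assumption, a conjunctive selectivity is estimated as the product of single-column selectivities, which collapses for correlated columns. The classic make/model example, with invented columns:

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100000
        make = rng.integers(0, 10, n)              # e.g. car make id
        model = make * 10 + rng.integers(0, 2, n)  # model strongly determined by make

        sel_make = np.mean(make == 3)
        sel_model = np.mean(model == 30)
        independent_est = sel_make * sel_model           # what independent summaries predict
        true_sel = np.mean((make == 3) & (model == 30))  # what the data actually gives
        print(f"independence estimate {independent_est:.4f} vs true selectivity {true_sel:.4f}")

    Here the true selectivity is roughly ten times the independence estimate, the kind of error that, propagated through a join tree, yields the severely sub-optimal plans mentioned above.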

  13. Models for waste life cycle assessment: Review of technical assumptions

    DEFF Research Database (Denmark)

    Gentil, Emmanuel; Damgaard, Anders; Hauschild, Michael Zwicky

    2010-01-01

    waste LCA models. This review infers that some of the differences in waste LCA models are inherent to the time they were developed. It is expected that models developed later benefit from past modelling assumptions, knowledge and issues. Models developed in different countries furthermore rely

  14. Does Artificial Neural Network Support Connectivism's Assumptions?

    Science.gov (United States)

    AlDahdouh, Alaa A.

    2017-01-01

    Connectivism was presented as a learning theory for the digital age and connectivists claim that recent developments in Artificial Intelligence (AI) and, more specifically, Artificial Neural Network (ANN) support their assumptions of knowledge connectivity. Yet, very little has been done to investigate this brave allegation. Does the advancement…

  15. Exploring five common assumptions on Attention Deficit Hyperactivity Disorder

    NARCIS (Netherlands)

    Batstra, Laura; Nieweg, Edo H.; Hadders-Algra, Mijna

    The number of children diagnosed with attention deficit hyperactivity disorder (ADHD) and treated with medication is steadily increasing. The aim of this paper was to critically discuss five debatable assumptions on ADHD that may explain these trends to some extent. These are that ADHD (i) causes

  16. Judgment: Deductive Logic and Assumption Recognition: Grades 7-12.

    Science.gov (United States)

    Instructional Objectives Exchange, Los Angeles, CA.

    This collection of objectives and related measures deals with one side of judgment: deductive logic and assumption recognition. They are suggestive of students' ability to make judgments based on logical analysis rather than comprehensive indices of overall capacity for judgment. They include Conditional Reasoning Index, Class Reasoning Index,…

  17. Child Development Knowledge and Teacher Preparation: Confronting Assumptions.

    Science.gov (United States)

    Katz, Lilian G.

    This paper questions the widely held assumption that acquiring knowledge of child development is an essential part of teacher preparation and teaching competence, especially among teachers of young children. After discussing the influence of culture, parenting style, and teaching style on developmental expectations and outcomes, the paper asserts…

  18. Observing gravitational-wave transient GW150914 with minimal assumptions

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwa, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. C.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brocki, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderon Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; DeRosa, R. T.; Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. 
-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. R.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritsche, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; de Haas, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinder, I.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijhunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Laguna, P.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, R.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. 
M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mende, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Page, J.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prolchorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. 
F.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shithriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlhruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, R. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-01-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be

  19. Origins and Traditions in Comparative Education: Challenging Some Assumptions

    Science.gov (United States)

    Manzon, Maria

    2018-01-01

    This article questions some of our assumptions about the history of comparative education. It explores new scholarship on key actors and ways of knowing in the field. Building on the theory of the social constructedness of the field of comparative education, the paper elucidates how power shapes our scholarly histories and identities.

  20. Questioning Engelhardt's assumptions in Bioethics and Secular Humanism.

    Science.gov (United States)

    Ahmadi Nasab Emran, Shahram

    2016-06-01

    In Bioethics and Secular Humanism: The Search for a Common Morality, Tristram Engelhardt examines various possibilities of finding common ground for moral discourse among people from different traditions and concludes that they are futile. In this paper I will argue that many of the assumptions on which Engelhardt bases his conclusion about the impossibility of a content-full secular bioethics are problematic. By starting with the notion of moral strangers, there is, by definition, no possibility of a content-full moral discourse among moral strangers. There is thus circularity in starting the inquiry with a definition of moral strangers (one which implies that they do not share enough moral background or commitment to an authority to reach a moral agreement) and then concluding that content-full morality is impossible among moral strangers. I argue that the assumption that traditions are solid, immutable structures insulating people across their boundaries is problematic. Another questionable assumption in Engelhardt's work is the idea that religious and philosophical traditions provide content-full moralities. As for the cardinal assumption in Engelhardt's review of the various alternatives for a content-full moral discourse among moral strangers, I analyze his foundationalist account of moral reasoning and knowledge and indicate the possibility of other ways of moral knowledge besides the foundationalist one. I then examine Engelhardt's view concerning the futility of attempts at justifying a content-full secular bioethics, and indicate how these assumptions have shaped Engelhardt's critique of the alternatives for the possibility of a content-full secular bioethics.

  1. Relaxing the zero-sum assumption in neutral biodiversity theory

    NARCIS (Netherlands)

    Haegeman, Bart; Etienne, Rampal S.

    2008-01-01

    The zero-sum assumption is one of the ingredients of the standard neutral model of biodiversity by Hubbell. It states that the community is saturated all the time, which in this model means that the total number of individuals in the community is constant over time, and therefore introduces a

  2. Distributed automata in an assumption-commitment framework

    Indian Academy of Sciences (India)

    We model examples like reliable bit transmission and sequence transmission protocols in this framework and discuss how assumption-commitment structure facilitates compositional design of such protocols. We prove a decomposition theorem which states that every protocol specified globally as a finite state system can ...

  3. Seven Assumptions of a Solution-Focused Conversational Leader.

    Science.gov (United States)

    Paull, Robert C.; McGrevin, Carol Z.

    1996-01-01

    Effective psychologists and school leaders know how to manage conversations to help clients or stakeholders move toward solutions. This article presents the assumptions of solution-focused brief therapy in a school leadership context. Key components are focusing on solutions, finding exceptions, identifying changes, starting small, listening to…

  4. The Metatheoretical Assumptions of Literacy Engagement: A Preliminary Centennial History

    Science.gov (United States)

    Hruby, George G.; Burns, Leslie D.; Botzakis, Stergios; Groenke, Susan L.; Hall, Leigh A.; Laughter, Judson; Allington, Richard L.

    2016-01-01

    In this review of literacy education research in North America over the past century, the authors examined the historical succession of theoretical frameworks on students' active participation in their own literacy learning, and in particular the metatheoretical assumptions that justify those frameworks. The authors used "motivation" and…

  5. Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative

    Science.gov (United States)

    Ahmed, Abdelhamid

    2008-01-01

    The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…

  6. Posttraumatic Growth and Shattered World Assumptions Among Ex-POWs

    DEFF Research Database (Denmark)

    Lahav, Y.; Bellin, Elisheva S.; Solomon, Z.

    2016-01-01

    world assumptions (WAs) and that the co-occurrence of high PTG and negative WAs among trauma survivors reflects reconstruction of an integrative belief system. The present study aimed to test these claims by investigating, for the first time, the mediating role of dissociation in the relation between...

  7. Deep Borehole Field Test Requirements and Controlled Assumptions.

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. Acknowledgements: This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.

  8. 7 CFR 1980.476 - Transfer and assumptions.

    Science.gov (United States)

    2010-01-01

    ...) PROGRAM REGULATIONS (CONTINUED) GENERAL Business and Industrial Loan Program § 1980.476 Transfer and... give to secure the debt, will be adequate to secure the balance of the total guaranteed loan owed, plus... assumption provisions if the guaranteed loan debt balance is within his/her individual loan approval...

  9. Full-wave analysis using a tangential vector finite-element formulation of arbitrary cross-section transmission lines for millimeter and microwave applications

    Science.gov (United States)

    Helal, M.; Legier, J. F.; Pribetich, P.; Kennis, P.

    1994-06-01

    A tangential vector finite-element formulation is implemented to deal with arbitrary cross section and metallic strip shape. Classical planar transmission lines as well as nonconventional cross-section waveguides such as the new microshield line are treated. Effects on propagation characteristics for these lines are studied when the metallization shape is approximated by a lossy trapezoid area.

  10. Bion, basic assumptions, and violence: a corrective reappraisal.

    Science.gov (United States)

    Roth, Bennett

    2013-10-01

    Group psychoanalytic theory rests on many of the same psychoanalytic assumptions as individual psychoanalytic theory but has been slow in developing its own language and unique understanding of conflict within the group, as many group phenomena are not the same as individual psychic events. Regressive fantasies and alliances within and to the group are determined by group composition and the interaction of fantasies among members and leader. Bion's useful but incomplete early abstract formulation of psychic regression in groups was the initial attempt to move beyond Freud's largely sociological view. This paper explores some of the origins of Bion's neglect of murderous violence in groups as a result of his own experiences in the first European war. In the following, I present evidence for the existence of a violent basic assumption and offer evidence as to Bion's avoidance of murderous and violent acts.

  11. The sufficiency assumption of the reasoned approach to action

    Directory of Open Access Journals (Sweden)

    David Trafimow

    2015-12-01

    The reasoned action approach to understanding and predicting behavior includes the sufficiency assumption. Although variables not included in the theory may influence behavior, these variables work through the variables in the theory. Once the reasoned action variables are included in an analysis, the inclusion of other variables will not increase the variance accounted for in behavioral intentions or behavior. Reasoned action researchers are very concerned with testing whether new variables account for additional variance (or how much variance the traditional variables account for), to see whether they are important, in general or with respect to the specific behaviors under investigation. But this approach tacitly assumes that accounting for variance is highly relevant to understanding the production of variance, which is what is really at issue. Based on the variance law, I question this assumption.
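
    The sufficiency assumption lends itself to a simple simulation check. The sketch below is illustrative only; the predictor names and effect sizes are invented, not taken from the paper. It generates data in which an external variable influences intention solely through a reasoned-action variable, so that adding it to the regression barely increases the variance accounted for.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000

        # Hypothetical reasoned-action predictors of behavioral intention.
        attitude = rng.normal(size=n)
        norm = rng.normal(size=n)
        # An "external" variable whose influence runs entirely through attitude.
        external = 0.8 * attitude + rng.normal(scale=0.6, size=n)
        intention = 0.6 * attitude + 0.3 * norm + rng.normal(scale=0.5, size=n)

        def r_squared(X, y):
            """R^2 of an ordinary least-squares fit with an intercept."""
            X1 = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return 1 - (y - X1 @ beta).var() / y.var()

        base = r_squared(np.column_stack([attitude, norm]), intention)
        full = r_squared(np.column_stack([attitude, norm, external]), intention)
        # Under sufficiency the increment is ~0, even though `external` is
        # correlated with intention on its own.
        print(f"R^2 base = {base:.3f}, with external variable = {full:.3f}")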

  12. Data-driven smooth tests of the proportional hazards assumption

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2007-01-01

    Roč. 13, č. 1 (2007), s. 1-16 ISSN 1380-7870 R&D Projects: GA AV ČR(CZ) IAA101120604; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * Neyman's smooth test * proportional hazards assumption * Schwarz's selection rule Subject RIV: BA - General Mathematics Impact factor: 0.491, year: 2007

  13. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
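
    The attenuation-versus-instrumental-variable point can be made concrete with a toy errors-in-variables simulation (a deliberately simplified, cross-sectional stand-in for the longitudinal models discussed; all names and values are invented). A second biomarker of the same latent exposure serves as the instrument.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000

        # Latent exposure and two biomarkers measuring it with independent error.
        latent = rng.normal(size=n)
        w1 = latent + rng.normal(size=n)                   # error-prone regressor
        w2 = latent + rng.normal(size=n)                   # second measure, used as instrument
        y = 1.0 * latent + rng.normal(scale=0.5, size=n)   # true health-effect slope = 1

        # Naive OLS of y on w1 is attenuated by the measurement error.
        beta_ols = np.cov(w1, y)[0, 1] / np.var(w1)
        # IV estimator: instrument w1 with w2; independent errors cancel in the covariances.
        beta_iv = np.cov(w2, y)[0, 1] / np.cov(w2, w1)[0, 1]

        print(f"OLS (attenuated): {beta_ols:.2f}   IV: {beta_iv:.2f}   truth: 1.00")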

  14. About tests of the "simplifying" assumption for conditional copulas

    OpenAIRE

    Derumigny, Alexis; Fermanian, Jean-David

    2016-01-01

    We discuss the so-called “simplifying assumption” of conditional copulas in a general framework. We introduce several tests of the latter assumption for non- and semiparametric copula models. Some related test procedures based on conditioning subsets instead of point-wise events are proposed. The limiting distributions of such test statistics under the null are approximated by several bootstrap schemes, most of them being new. We prove the validity of a particular semiparametric bootstrap sch...

  15. Assumptions behind size-based ecosystem models are realistic

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Blanchard, Julia L.; Fulton, Elizabeth A.

    2016-01-01

    A recent publication about balanced harvesting (Froese et al., ICES Journal of Marine Science; doi:10.1093/icesjms/fsv122) contains several erroneous statements about size-spectrum models. We refute the statements by showing that the assumptions pertaining to size-spectrum models discussed by Froese et al. [...] that there is indeed a constructive role for a wide suite of ecosystem models to evaluate fishing strategies in an ecosystem context...

  16. Bank stress testing under different balance sheet assumptions

    OpenAIRE

    Busch, Ramona; Drescher, Christian; Memmel, Christoph

    2017-01-01

    Using unique supervisory survey data on the impact of a hypothetical interest rate shock on German banks, we analyse price and quantity effects on banks' net interest margin components under different balance sheet assumptions. In the first year, the cross-sectional variation of banks' simulated price effect is nearly eight times as large as the one of the simulated quantity effect. After five years, however, the importance of both effects converges. Large banks adjust their balance sheets mo...

  17. Tangential Volumetric Modulated Radiotherapy - A New Technique for Large Scalp Lesions with a Case Study in Lentigo Maligna

    Directory of Open Access Journals (Sweden)

    E. Daniel Santos

    2015-06-01

    Introduction: Dose homogeneity within, and dose conformity to, the target volume can be a challenge to achieve when treating large-area scalp lesions. Traditionally, high-dose-rate (HDR) brachytherapy (BT) scalp moulds have been considered the ultimate conformal therapy. We have developed a new technique, Tangential Volumetric Modulated Arc Therapy (TVMAT), that treats with the beam tangential to the surface of the scalp. In the TVMAT plan the collimating jaws protect dose-sensitive tissue in close proximity to the planning target volume (PTV). Not all of the PTV is within the beam aperture as defined by the jaws during all the beam-on time. We report the successful treatment of one patient. Methods: A patient with biopsy-proven extensive lentigo maligna on the scalp was simulated and three plans were created: one using an HDR brachytherapy surface mould, another using a conventional VMAT technique, and a third using our new TVMAT technique. The patient was prescribed 55 Gy in 25 fractions. Plans were optimised so that PTV V100% = 100%. Plans were compared using dose-volume histogram (DVH) analysis and homogeneity and conformity indices. Results: BT, VMAT and TVMAT PTV median coverage was 105.51%, 103.46% and 103.62%, with homogeneity indices of 0.33, 0.07 and 0.07 and conformity indices of 0.30, 0.69 and 0.83, respectively. The median dose to the left hippocampus was 11.8 Gy, 9.0 Gy and 0.6 Gy, and the median dose to the right hippocampus was 12.6 Gy, 9.4 Gy and 0.7 Gy for BT, VMAT and TVMAT, respectively. Overall, TVMAT delivered the lowest doses to the surrounding organs; BT delivered the highest. Conclusions: TVMAT was superior to VMAT, which was in turn superior to BT, in PTV coverage, conformity and homogeneity and in sparing of the surrounding organs at risk. The patient was successfully treated to full dose with TVMAT. TVMAT was verified as the best among the three techniques in a second patient.
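
    The abstract does not state which index definitions were used, so the sketch below computes two common ones (assumed, not taken from the paper): an ICRU-style homogeneity index and the Paddick conformity index, from voxel dose arrays.

        import numpy as np

        def homogeneity_index(ptv_dose, prescription):
            """HI = (D2% - D98%) / prescription; lower is more homogeneous."""
            d2 = np.percentile(ptv_dose, 98)   # dose to the hottest 2% of the PTV
            d98 = np.percentile(ptv_dose, 2)   # dose covering 98% of the PTV
            return (d2 - d98) / prescription

        def paddick_conformity(ptv_dose, body_dose, prescription):
            """CI = TV_PIV^2 / (TV * PIV); 1.0 is perfect conformity."""
            tv = ptv_dose.size                          # PTV voxel count
            tv_piv = (ptv_dose >= prescription).sum()   # PTV voxels at prescription dose
            piv = (body_dose >= prescription).sum()     # all voxels at prescription dose
            return tv_piv ** 2 / (tv * piv)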

  18. Numerical investigation on the flow, combustion, and NOx emission characteristics in a 660 MWe tangentially fired ultra-supercritical boiler

    Directory of Open Access Journals (Sweden)

    Wenjing Sun

    2016-02-01

    A three-dimensional numerical simulation was carried out to study the pulverized-coal combustion process in a tangentially fired ultra-supercritical boiler. The realizable k-ε model for the gas phase coupled with a discrete phase model for coal particles, the P-1 radiation model for radiation, a two-competing-rates model for devolatilization, and a kinetics/diffusion-limited model for the combustion process are considered. The characteristics of the flow field, particle motion, temperature distribution, species components, and NOx emissions were numerically investigated. The good agreement between measurements and predictions implies that the applied simulation models are appropriate for modeling commercial-scale coal boilers. An ideal turbulent flow and particle trajectory can be observed in this unconventional pulverized-coal furnace. With the application of over-fire air and additional air, lean-oxygen combustion takes place near the burner sets region, and a higher temperature at the furnace exit is acquired for better heat transfer. Within the limits of the secondary air, a steadier combustion process is achieved as well as a reduction of NOx. Furthermore, the influences of the secondary air, over-fire air, and additional air on the NOx emissions are obtained. The numerical results reveal that NOx formation attenuates with a decrease in the secondary air ratio (γ2nd) and in the ratio of additional air to over-fire air (γAA/γOFA) within the studied limits.

  19. Extended semi-analytical model for the prediction of flow and concentration fields in a tangentially-fired furnace

    Directory of Open Access Journals (Sweden)

    Lotfiani Amin

    2013-01-01

    Tangentially-fired furnaces (TFF) are one of the modified types of furnaces which have become more attractive in the field of industrial firing systems in recent years. Multi-zone thermodynamic models can be used to study the effect of different parameters on the operation of TFF readily and economically. A flow and mixing sub-model is a necessity in multi-zone models. In the present work, the semi-analytical model previously established by the authors for the prediction of the behavior of coaxial turbulent gaseous jets is extended for use in a single-chamber TFF with square horizontal cross-sections, and to form the flow and mixing sub-model of a future multi-zone model for the simulation of this TFF. A computer program is developed to implement the new extended model. Computational fluid dynamics (CFD) simulations are carried out to validate the results of the new model. In order to verify the CFD solution procedure, a turbulent round jet injected into a cross flow is simulated. The calculated jet trajectory and velocity profile are compared with other experimental and numerical data, and good agreement is observed. Results show that the present model can provide very fast and reasonable predictions of the flow and concentration fields in the TFF of interest.

  20. Tangential vs. defined radiotherapy in early breast cancer treatment without axillary lymph node dissection. A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Nitsche, Mirko [Zentrum fuer Strahlentherapie und Radioonkologie, Bremen (Germany); Universitaet Kiel, Klinik fuer Strahlentherapie, Karl-Lennert-Krebscentrum, Kiel (Germany); Temme, Nils; Foerster, Manuela; Reible, Michael [Zentrum fuer Strahlentherapie und Radioonkologie, Bremen (Germany); Hermann, Robert Michael [Zentrum fuer Strahlentherapie und Radioonkologie, Bremen (Germany); Medizinische Hochschule Hannover, Abteilung Strahlentherapie und Spezielle Onkologie, Hannover (Germany)

    2014-08-15

    Recent studies have demonstrated low regional recurrence rates in early-stage breast cancer omitting axillary lymph node dissection (ALND) in patients who have positive nodes in sentinel lymph node dissection (SLND). This finding has triggered an active discussion about the effect of radiotherapy within this approach. The purpose of this study was to analyze the dose distribution in the axilla in standard tangential radiotherapy (SRT) for breast cancer and the effects on normal tissue exposure when anatomic level I-III axillary lymph node areas are included in the tangential radiotherapy field configuration. We prospectively analyzed the dosimetric treatment plans of 51 consecutive women with early-stage breast cancer undergoing radiotherapy. We compared and analyzed the SRT and the defined radiotherapy (DRT) methods for each patient. The clinical target volume (CTV) of SRT included the breast tissue without specific contouring of lymph node areas, whereas the CTV of DRT included the level I-III lymph node areas. We evaluated the dose given in SRT covering the axillary lymph node areas of level I-III as contoured in DRT. The mean V_D95% of the entire level I-III lymph node area in SRT was 50.28% (range 37.31-63.24%), V_D45Gy was 70.1% (54.8-85.4%), and V_D40Gy was 83.5% (72.3-94.8%). Significant differences were observed in lung and heart exposure between SRT and DRT. The V_20Gy and V_30Gy of the right and left lung were significantly higher in DRT than in SRT (p < 0.001). The mean heart dose was significantly lower in SRT (3.93 vs. 4.72 Gy, p = 0.005). We demonstrated a relevant dose exposure of the axilla in SRT that should substantially reduce local recurrences. Furthermore, we demonstrated a significant increase in lung and heart exposure when including the axillary lymph node regions in the tangential radiotherapy field set-up. (orig.)

  1. The Field Radiated by a Ring Quasi-Array of an Infinite Number of Tangential or Radial Dipoles

    DEFF Research Database (Denmark)

    Knudsen, H. L.

    1953-01-01

    A homogeneous ring array of axial dipoles will radiate a vertically polarized field that concentrates to an increasing degree around the horizontal plane with increasing increment of the current phase per revolution. There is reason to believe that by using a corresponding antenna system with tangential or radial dipoles, a field may be obtained that has a similar useful structure to the above-mentioned ring array, but which, in contrast to the latter, is essentially horizontally polarized. In this paper a systematic investigation has been made of the field from such an antenna system [...] of the antenna systems treated here converges towards infinity, too, i.e., supergain occurs. Based on the theory of supergain, an approximate expression has been derived for the minimum value of the radius of the antenna system which it is possible to use in practice.
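
    The qualitative behaviour described here can be reproduced with a discrete N-element approximation of the ring (a sketch under assumed conventions: isotropic radiators, radius in wavelengths, excitation phase increasing by 2*pi*m per revolution; the dipole element patterns of the paper are omitted).

        import numpy as np

        def ring_array_factor(theta, phi, n_elems=16, radius_wl=1.0, m=1):
            """Normalized array factor of a phased ring array of isotropic radiators."""
            k = 2 * np.pi                                    # wavenumber, 1/wavelength units
            phi_n = 2 * np.pi * np.arange(n_elems) / n_elems # element positions on the ring
            # Path-length phase of each element plus the imposed m*phi_n excitation phase.
            psi = k * radius_wl * np.sin(theta) * np.cos(phi - phi_n) + m * phi_n
            return np.abs(np.exp(1j * psi).sum()) / n_elems

        # The pattern concentrates around the horizontal plane (theta = pi/2):
        print(ring_array_factor(np.pi / 2, 0.0), ring_array_factor(np.pi / 6, 0.0))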

  2. Modern Hypofractionation Schedules for Tangential Whole Breast Irradiation Decrease the Fraction Size-corrected Dose to the Heart

    DEFF Research Database (Denmark)

    Appelt, Ane L; Vogelius, Ivan R; Bentzen, Søren M

    2013-01-01

    [...] fraction size-corrected dose to the heart for four evidence-based hypofractionation regimens. Materials and methods: Dose plans for 60 left-sided breast cancer patients were analysed. All patients were planned with tangential fields for whole breast irradiation. Dose distributions were corrected to the equivalent dose in 2 Gy fractions (EQD2) using the linear-quadratic model for five different fractionation schedules (50 Gy/25 fractions and four hypofractionation regimens) and for a range of alpha/beta values (0-5 Gy). The mean EQD2 to the heart (D_mean,EQD2) and the volume receiving 40 Gy (V_40Gy,EQD2), both as calculated from the EQD2 dose distributions, were compared between schedules. Results: For alpha/beta = 3 Gy, V_40Gy,EQD2 favours hypofractionation for 40 Gy/15 fractions, 39 Gy/13 fractions and 42.5 Gy/16 fractions, but not for 41.6 Gy/13 fractions. All of the hypofractionation schedules result in lower [...]
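
    The fraction-size correction referred to here is the standard linear-quadratic conversion; a minimal sketch (the schedule below is one of the four regimens named in the abstract, and alpha/beta = 3 Gy is the late-effects value it discusses):

        def eqd2(total_dose, dose_per_fraction, alpha_beta):
            """Equivalent dose in 2 Gy fractions under the linear-quadratic model."""
            return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

        # 40 Gy in 15 fractions (2.67 Gy/fraction) at alpha/beta = 3 Gy:
        print(round(eqd2(40.0, 40.0 / 15, 3.0), 1))  # -> 45.3 Gy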

  3. THE COMPLEX OF ASSUMPTION CATHEDRAL OF THE ASTRAKHAN KREMLIN

    Directory of Open Access Journals (Sweden)

    Savenkova Aleksandra Igorevna

    2016-08-01

    This article is devoted to an architectural and historical analysis of the constructions forming the complex of the Assumption Cathedral of the Astrakhan Kremlin, which has not previously been considered as a subject of special research. Based on archival sources, photographic materials, publications and on-site investigations of the monuments, the article traces the creation history of the complete architectural complex, sustained in the single style of the Muscovite baroque and unique in its composite construction, and offers an interpretation of it in the all-Russian architectural context. Typological features of the individual constructions come to light. The Prechistinsky bell tower has an untypical architectural solution: a 'hexagonal structure on octagonal and quadrangular structures'. The way of connecting the building of the Cathedral and the chambers by a passage was characteristic of monastic constructions and was exceedingly rare in kremlins, farmsteads and ensembles of city cathedrals. The composite scheme of the Assumption Cathedral includes the Lobnoye Mesto ('the Place of Execution') located on an axis from the west; it is connected with the main building by a quarter-turn stair with a landing. The only prototype of the structure is the Lobnoye Mesto on Red Square in Moscow. The article considers the version that the Place of Execution emerged on the basis of an earlier existing construction, a tower called 'the Peal', which is repeatedly mentioned in written sources in connection with S. Razin's revolt. The metropolitan Sampson, trying to preserve the standing of the Astrakhan metropolitanate, built the Assumption Cathedral and the Place of Execution in direct appeal to a capital prototype, to emphasize continuity and close connection with Moscow.

  4. HYPROLOG: A New Logic Programming Language with Assumptions and Abduction

    DEFF Research Database (Denmark)

    Christiansen, Henning; Dahl, Veronica

    2005-01-01

    [...] The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraint solvers that may be available. Assumptions and abduction are especially useful for language processing, and we show how HYPROLOG works seamlessly together with the grammar notation provided by the underlying Prolog system. An operational semantics is given which complies with standard declarative semantics for the 'pure' sublanguages, while for the full HYPROLOG language it must be taken as a definition. The implementation is straightforward and seems to provide [...]

  5. Radiation hormesis and the linear-no-threshold assumption

    CERN Document Server

    Sanders, Charles L

    2009-01-01

    Current radiation protection standards are based upon the application of the linear no-threshold (LNT) assumption, which considers that even very low doses of ionizing radiation can cause cancer. The radiation hormesis hypothesis, by contrast, proposes that low-dose ionizing radiation is beneficial. In this book, the author examines all facets of radiation hormesis in detail, including the history of the concept and mechanisms, and presents comprehensive, up-to-date reviews for major cancer types. It is explained how low-dose radiation can in fact decrease all-cause and all-cancer mortality an

  6. The extended evolutionary synthesis: its structure, assumptions and predictions.

    Science.gov (United States)

    Laland, Kevin N; Uller, Tobias; Feldman, Marcus W; Sterelny, Kim; Müller, Gerd B; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-08-22

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the 'extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism-environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. © 2015 The Author(s).

  7. Halo-Independent Direct Detection Analyses Without Mass Assumptions

    CERN Document Server

    Anderson, Adam J.; Kahn, Yonatan; McCullough, Matthew

    2015-10-06

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the $m_\chi$-$\sigma_n$ plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the $v_{min}$-$\tilde{g}$ plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from $v_{min}$ to nuclear recoil momentum ($p_R$), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call $\tilde{h}(p_R)$. The entire family of conventional halo-independent $\tilde{g}(v_{min})$ plots for all DM masses can be directly found from the single $\tilde{h}(p_R)$ plot through a simple re...
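
    For elastic scattering, the change of variables used here follows from standard kinematics (sketched from the usual definitions; the notation is assumed, not quoted from the paper): $p_R = \sqrt{2 m_N E_R}$ and $v_{min} = p_R / (2\mu_{\chi N})$, with the reduced mass $\mu_{\chi N} = m_\chi m_N / (m_\chi + m_N)$. A single curve in $p_R$ therefore encodes the $v_{min}$-space information for every dark matter mass at once.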

  8. Basic concepts and assumptions behind the new ICRP recommendations

    International Nuclear Information System (INIS)

    Lindell, B.

    1979-01-01

    A review is given of some of the basic concepts and assumptions behind the current recommendations by the International Commission on Radiological Protection in ICRP Publications 26 and 28, which form the basis for the revision of the Basic Safety Standards jointly undertaken by IAEA, ILO, NEA and WHO. Special attention is given to the assumption of a linear, non-threshold dose-response relationship for stochastic radiation effects such as cancer and hereditary harm. The three basic principles of protection are discussed: justification of practice, optimization of protection and individual risk limitation. In the new ICRP recommendations particular emphasis is given to the principle of keeping all radiation doses as low as is reasonably achievable. A consequence of this is that the ICRP dose limits are now given as boundary conditions for the justification and optimization procedures rather than as values that should be used for purposes of planning and design. The fractional increase in total risk at various ages after continuous exposure near the dose limits is given as an illustration. The need for taking other sources, present and future, into account when applying the dose limits leads to the use of the commitment concept. This is briefly discussed as well as the new quantity, the effective dose equivalent, introduced by ICRP. (author)

  9. The contour method cutting assumption: error minimization and correction

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Kastengren, Alan L [ANL

    2010-01-01

    The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented and the important parameters are quantified. Experimental procedures for minimizing these errors are presented, as is an iterative finite element procedure to correct for them. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to put in a known profile of residual stresses.

  10. The extended evolutionary synthesis: its structure, assumptions and predictions

    Science.gov (United States)

    Laland, Kevin N.; Uller, Tobias; Feldman, Marcus W.; Sterelny, Kim; Müller, Gerd B.; Moczek, Armin; Jablonka, Eva; Odling-Smee, John

    2015-01-01

    Scientific activities take place within the structured sets of ideas and assumptions that define a field and its practices. The conceptual framework of evolutionary biology emerged with the Modern Synthesis in the early twentieth century and has since expanded into a highly successful research program to explore the processes of diversification and adaptation. Nonetheless, the ability of that framework satisfactorily to accommodate the rapid advances in developmental biology, genomics and ecology has been questioned. We review some of these arguments, focusing on literatures (evo-devo, developmental plasticity, inclusive inheritance and niche construction) whose implications for evolution can be interpreted in two ways—one that preserves the internal structure of contemporary evolutionary theory and one that points towards an alternative conceptual framework. The latter, which we label the ‘extended evolutionary synthesis' (EES), retains the fundaments of evolutionary theory, but differs in its emphasis on the role of constructive processes in development and evolution, and reciprocal portrayals of causation. In the EES, developmental processes, operating through developmental bias, inclusive inheritance and niche construction, share responsibility for the direction and rate of evolution, the origin of character variation and organism–environment complementarity. We spell out the structure, core assumptions and novel predictions of the EES, and show how it can be deployed to stimulate and advance research in those fields that study or use evolutionary biology. PMID:26246559

  11. DDH-like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2011-01-01

    We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of F_q 'in the exponent' of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring R_f = F_q[X]/(f) [...] DDH, is easy in bilinear groups. This motivates our suggestion of a different type of assumption, the d-vector DDH problems (VDDH), which are based on f(X) = X^d, but with a twist to avoid the problems with reducible polynomials. We show in the generic group model that VDDH is hard in bilinear groups and that in fact the problems become harder with increasing d and hence form an infinite hierarchy. We show that hardness of VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption, a strong form of leakage resilience. This can be seen [...]
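
    For readers unfamiliar with the baseline assumption, a toy DDH instance looks as follows (parameters far too small to be secure; real constructions use large prime-order subgroups, and the extension-ring variants of the paper instead encode ring elements in the exponent coordinate-wise).

        import random

        p = 2**61 - 1                      # toy Mersenne-prime modulus, NOT secure
        g = 3
        a = random.randrange(2, p - 1)
        b = random.randrange(2, p - 1)
        c = random.randrange(2, p - 1)

        real = (pow(g, a, p), pow(g, b, p), pow(g, a * b, p))  # DDH tuple
        rand = (pow(g, a, p), pow(g, b, p), pow(g, c, p))      # random tuple
        # The DDH assumption states that no efficient algorithm can distinguish
        # `real` from `rand` given only g, g^a and g^b.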

  12. DDH-Like Assumptions Based on Extension Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Kiltz, Eike

    2012-01-01

    We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of $\mathbb{F}_{q}$ 'in the exponent' of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring $R_f = \mathbb{F}_q[X]/(f)$ [...] and security proof, but get better security; moreover, the amortized complexity (e.g., computation per encrypted bit) is the same as when using DDH. We also show that d-DDH, just like DDH, is easy in bilinear groups. We therefore suggest a different type of assumption, the d-vector DDH problems (d-VDDH), which are based on f(X) = X^d, but with a twist to avoid problems with reducible polynomials. We show in the generic group model that d-VDDH is hard in bilinear groups and that the problems become harder with increasing d. We show that hardness of d-VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions [...]

  13. Estimation of cold extremes and the identical distribution assumption

    Science.gov (United States)

    Parey, Sylvie

    2016-04-01

    Extreme, generally not observed, values of meteorological (or other) hazards are estimated from observed time series by applying the statistical extreme value theory. This theory rests on the essential assumption that the events are independent and identically distributed. The assumption is generally not verified for meteorological hazards, firstly because these phenomena are seasonal, and secondly because climate change may induce temporal trends. These issues can be dealt with by selecting the season of occurrence or by handling trends in the extreme distribution parameters, for example. When recently updating extreme cold temperatures, we faced several rather new difficulties: the threshold choice when applying the Peak Over Threshold (POT) approach happened to be exceptionally difficult, and when applying block maxima, different block sizes could lead to significantly different return levels. A more detailed analysis of the exceedances of different cold thresholds showed that, as the threshold becomes more extreme, the exceedances are not identically distributed across the years. This behaviour could be related to the preferred phase of the North Atlantic Oscillation (NAO) during each winter, and the return level estimation has then been based on a sub-sampling between negative and positive NAO winters. The approach and the return level estimation from the sub-samples will be illustrated with an example.
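
    A minimal peaks-over-threshold sketch of the return-level computation discussed here (synthetic data and an assumed 98th-percentile threshold; the whole point of the abstract is that this threshold choice can be delicate):

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(2)
        daily = rng.gumbel(size=40 * 365)            # surrogate 40-year daily series

        u = np.quantile(daily, 0.98)                 # POT threshold (assumed)
        exc = daily[daily > u] - u
        xi, _, sigma = genpareto.fit(exc, floc=0.0)  # GPD shape and scale
        rate = exc.size / 40.0                       # mean exceedances per year

        def return_level(T):
            """T-year return level; for xi -> 0 this tends to u + sigma*log(rate*T)."""
            m = rate * T                             # expected exceedances in T years
            return u + sigma / xi * (m ** xi - 1.0)

        print(f"100-year level: {return_level(100):.2f}")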

  14. Economic Growth Assumptions in Climate and Energy Policy

    Directory of Open Access Journals (Sweden)

    Nir Y. Krakauer

    2014-03-01

    The assumption that the economic growth seen in recent decades will continue has dominated the discussion of future greenhouse gas emissions and the mitigation of and adaptation to climate change. Given that long-term economic growth is uncertain, the impacts of a wide range of growth trajectories should be considered. In particular, slower economic growth would imply that future generations will be relatively less able to invest in emissions controls or adapt to the detrimental impacts of climate change. Taking into consideration the possibility of economic slowdown therefore heightens the urgency of reducing greenhouse gas emissions now by moving to renewable energy sources, even if this incurs short-term economic cost. I quantify this counterintuitive impact of economic growth assumptions on present-day policy decisions in a simple global economy-climate model, the Dynamic Integrated model of Climate and the Economy (DICE). In DICE, slow future growth increases the economically optimal present-day carbon tax rate and the utility of taxing carbon emissions, although the magnitude of the increase is sensitive to model parameters, including the rate of social time preference and the elasticity of the marginal utility of consumption. Future scenario development should specifically include low-growth scenarios, and the possibility of low-growth economic trajectories should be taken into account in climate policy analyses.
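
    The stated sensitivity to the time-preference rate and the consumption elasticity runs through the Ramsey discounting rule used in DICE-type models: $r = \rho + \eta g$, where $r$ is the consumption discount rate, $\rho$ the rate of social time preference, $\eta$ the elasticity of the marginal utility of consumption, and $g$ per-capita consumption growth. Slower growth $g$ lowers $r$, which raises the present value of future climate damages and hence the optimal present-day carbon tax.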

  15. Bogen's Critique of Linear-No-Threshold Default Assumptions.

    Science.gov (United States)

    Crump, Kenny S

    2017-10-01

    In an article recently published in this journal, Bogen (1) concluded that an NRC committee's recommendations that default linear, no-threshold (LNT) assumptions be applied to dose-response assessment for noncarcinogens and nonlinear mode-of-action carcinogens are not justified. Bogen criticized two arguments used by the committee for LNT: when any new dose adds to a background dose that explains background levels of risk (additivity to background, or AB), or when there is substantial interindividual heterogeneity in susceptibility (SIH) in the exposed human population. Bogen showed by examples that SIH can be false. Herein, a general proof is outlined that confirms Bogen's claim. However, it is also noted that SIH leads to a non-threshold population distribution even if individual distributions all have thresholds, and that small changes to SIH assumptions can result in LNT. Bogen criticizes AB because it only applies when there is additivity to background, but offers no help in deciding when or how often AB holds. Bogen does not contradict the fact that AB can lead to LNT, but notes that, even if low-dose linearity results, the response at higher doses may not be useful in predicting the amount of low-dose linearity. Although this is theoretically true, it seems reasonable to assume that generally there is some quantitative relationship between the low-dose slope and the slope suggested at higher doses. Several incorrect or misleading statements by Bogen are noted. © 2016 Society for Risk Analysis.
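
    The additivity-to-background argument is, at bottom, a first-order Taylor expansion (a sketch of the standard reasoning, not of Bogen's or the committee's exact derivations): if an incremental dose $d$ adds to a background dose $d_0$ that already produces risk, then $R(d_0 + d) \approx R(d_0) + R'(d_0)\,d$, so the incremental risk is linear in $d$ with slope $R'(d_0)$ whenever $R$ is smooth with $R'(d_0) > 0$, regardless of the shape of $R$ at higher doses.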

  16. Comparing Dorsal Tangential and Lateral Views of the Wrist for Detecting Dorsal Screw Penetration after Volar Plating of Distal Radius Fractures

    OpenAIRE

    Giugale, Juan M.; Fourman, Mitchell S.; Bielicka, Deidre L.; Fowler, John R.

    2017-01-01

    Background. The dorsal tangential (DT) view has been shown to improve the detection of dorsal screw perforation during volar distal radius fracture fixation. Here we performed a cadaveric imaging survey study to evaluate whether the DT view was uniformly beneficial for all screws. Methods. Standardized placement of fixed-angle volar distal radius plates was performed on two cadavers. Fluoroscopic images depicting variable screw perforation of each of the four screw holes on the plate were generate...

  17. Postmastectomy radiotherapy of the chest wall. Comparison of electron-rotation technique and common tangential photon fields

    International Nuclear Information System (INIS)

    Hehr, T.; Classen, J.; Huth, M.; Durst, I.; Bamberg, M.; Budach, W.; Christ, G.

    2004-01-01

    electron-rotation technique (LRC 92%) or with the photon-based technique (LRC 89%; p = 0.9). A subgroup analysis of tumors resected with 'close margins' showed a higher LRF rate of 25% after electron-beam-rotation irradiation (n = 180) compared to an LRF of 13% with tangential opposed 6-MV photon fields (n = 107; p < 0.05). Large primary tumors of ≥ 5 cm developed LRF in 29% of patients treated with electron-beam-rotation irradiation and in 17% of patients with photon-based irradiation (p = 0.1). Conclusion: in locally advanced breast cancer, the LRC after postmastectomy irradiation with both techniques is comparable with published data from randomized studies. The tangential opposed photon field technique seems to be beneficial after marginal resection (histopathologic 'close margins') of the primary tumor. (orig.)

  18. Quantitative assessment of irradiated lung volume and lung mass in breast cancer patients treated with tangential fields in combination with deep inspiration breath hold (DIBH)

    International Nuclear Information System (INIS)

    Kapp, Karin Sigrid; Zurl, Brigitte; Stranzl, Heidi; Winkler, Peter

    2010-01-01

    Purpose: To compare the amount of irradiated lung tissue volume and mass in patients with breast cancer treated with an optimized tangential-field technique with and without a deep inspiration breath-hold (DIBH) technique, and its impact on the normal-tissue complication probability (NTCP). Material and methods: Computed tomography datasets of 60 patients in normal breathing (NB) and subsequently in DIBH were compared. With a Real-Time Position Management Respiratory Gating System (RPM), anteroposterior movement of the chest wall was monitored and lower and upper thresholds were defined. The ipsilateral lung and a restricted tangential region of the lung were delineated and the mean and maximum doses calculated. Irradiated lung tissue mass was computed based on density values. NTCP for the lung was calculated using a modified Lyman-Kutcher-Burman (LKB) model. Results: The mean dose to the ipsilateral lung in DIBH versus NB was significantly reduced, by 15%. The calculated lung mass in the restricted area receiving ≤ 20 Gy (M_20) was reduced by 17% in DIBH, but was associated with an increase in volume. NTCP showed an improvement of 20% in DIBH. Individual breathing amplitude proved to be uncorrelated with NTCP. Conclusion: The delineation of a restricted area enables lung mass calculation in patients treated with tangential fields. DIBH reduces the ipsilateral lung dose by inflation, so that less tissue remains in the irradiated region, and its efficiency is supported by a decrease of NTCP. (orig.)
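
    A minimal sketch of the (unmodified) LKB model named above, evaluated from a differential dose-volume histogram; the parameter values are illustrative placeholders, not those fitted in the study.

        import numpy as np
        from math import erf, sqrt

        def lkb_ntcp(doses, volumes, n=0.87, m=0.18, td50=24.5):
            """NTCP = Phi((gEUD - TD50) / (m * TD50)), gEUD = (sum v_i * D_i^(1/n))^n."""
            v = np.asarray(volumes, dtype=float)
            v = v / v.sum()                                   # fractional volumes
            geud = (v * np.asarray(doses) ** (1.0 / n)).sum() ** n
            t = (geud - td50) / (m * td50)
            return 0.5 * (1.0 + erf(t / sqrt(2.0)))           # standard normal CDF

        # Example: half the lung at 5 Gy, half at 18 Gy.
        print(round(lkb_ntcp([5.0, 18.0], [0.5, 0.5]), 3))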

  19. Two-dimensional static manipulation tasks: does force coordination depend on change of the tangential force direction?

    Science.gov (United States)

    Uygur, Mehmet; Jin, Xin; Knezevic, Olivera; Jaric, Slobodan

    2012-10-01

    Coordination of the grip force (GF) with a tangential force (TF, often referred to as load force) exerted along a certain line in space (i.e., one-dimensional tasks) during object manipulation has proved to be both high and based on feed-forward neural control mechanisms. However, GF-TF coordination deteriorates when the TF of a one-dimensional task consecutively switches its direction (bidirectional task). In the present study, we aimed to explore GF-TF coordination in the generally neglected multi-dimensional manipulations. We hypothesized that the coordination would depend on the number of unidirectional and bidirectional orthogonal components of a two-dimensional TF exertion. Fourteen subjects traced various circular TF patterns and their orthogonal diameters shown on a computer screen by exerting a static TF. As expected, the unidirectional tasks revealed higher GF-TF coordination than the bidirectional ones (e.g., higher GF-TF correlations and GF gains, and lower GF/TF ratio). Regarding the circular tasks, most of the data were in line with the hypothesis, revealing higher coordination associated with a higher number of unidirectional components. Of particular importance could be that the circular tasks also revealed prominent time lags of GF with respect to TF, suggesting involvement of feedback mechanisms. We conclude that force coordination in bidirectional static manipulations could be affected by changes in TF direction along either of its orthogonal components. The time lags observed from the circular tasks could be a consequence of the activity of sensory afferents, rather than of the visual feedback provided or the task complexity.
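
    The time-lag finding above is, in essence, a cross-correlation measurement between two synchronized force signals. A minimal sketch of such an estimate on synthetic data (the sampling rate, signal shapes, and the 60 ms lag below are invented for illustration):

        # Sketch: estimating the lag of grip force (GF) relative to tangential
        # force (TF) by cross-correlating the de-meaned signals. Data are synthetic.
        import numpy as np

        fs = 200.0                                    # sampling rate (Hz), assumed
        t = np.arange(0, 10, 1 / fs)
        tf = 5 + 3 * np.sin(2 * np.pi * 0.5 * t)      # synthetic TF (N)
        true_lag = 0.06                               # GF trails TF by 60 ms (invented)
        gf = 8 + 2.4 * np.sin(2 * np.pi * 0.5 * (t - true_lag)) \
             + 0.05 * np.random.randn(t.size)

        a, b = gf - gf.mean(), tf - tf.mean()
        xcorr = np.correlate(a, b, mode="full")
        lags = np.arange(-t.size + 1, t.size) / fs
        lag = lags[np.argmax(xcorr)]                  # positive: GF lags behind TF
        gain = np.polyfit(tf, gf, 1)[0]               # GF gain: slope of GF on TF
        print(f"estimated lag = {lag * 1000:.0f} ms, GF gain = {gain:.2f}")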

  20. Forward-planned, multiple-segment, tangential fields with concomitant boost in the treatment of breast cancer

    International Nuclear Information System (INIS)

    Mayo, Charles; Lo, Y.C.; Fitzgerald, Thomas J.; Urie, Marcia

    2004-01-01

    We report on the utility of forward-planned, 3-dimensional (3D), multiple-segment tangential fields for radiation treatment of patients with breast cancer. The technique accurately targets breast tissue and the tumor bed and reduces dose inhomogeneity in the target. By decreasing excess dose to the skin and lung, a concomitant boost to the tumor bed can be delivered during the initial treatment, thereby decreasing the overall treatment time by one week. More than 120 breast cancer patients have been treated with this breast conservation technique in our clinic. For each patient, a 3D treatment plan based upon breast and tumor bed volumes delineated on computed tomography (CT) was developed. Segmented tangent fields were iteratively created to reduce 'hot spots' produced by traditional tangents. The tumor bed received a concomitant boost with additional conformal photon beams. The final tumor bed boost was delivered either with conformal photon beams or conventional electron beams. All patients received 45 Gy to the breast target, plus an additional 5 Gy to the surgical excision site, bringing the total dose to 50 Gy to the boost target volume in 25 fractions. The final boost to the excision site brought the total target dose to 60 Gy. With a minimum follow-up of 4 months and a median follow-up of 11 months, all patients have excellent cosmetic results. There has been minimal breast edema and minimal skin change. There have been no local relapses to date. Forward planning of multi-segment fields is facilitated by 3D planning and multileaf collimation. The treatment technique offers improvement in target dose homogeneity and the ability to deliver a concomitant boost to the excision site with confidence. The technique also offers physics and therapy staff the advantage of developing familiarity with multiple-segment fields, as a precursor to intensity-modulated radiation therapy (IMRT) techniques.

  1. 78 FR 42009 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-07-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... assumptions--for paying plan benefits under terminating single-employer plans covered by title IV of the... assumptions are intended to reflect current conditions in the financial and annuity markets. Assumptions under...

  2. 78 FR 11093 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-02-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... assumptions--for paying plan benefits under terminating single-employer plans covered by title IV of the... assumptions are intended to reflect current conditions in the financial and annuity markets. Assumptions under...

  3. New media in strategy – mapping assumptions in the field

    DEFF Research Database (Denmark)

    Gulbrandsen, Ib Tunby; Plesner, Ursula; Raviola, Elena

    2018-01-01

    There is plenty of empirical evidence for claiming that new media make a difference for how strategy is conceived and executed. Furthermore, there is a rapidly growing body of literature that engages with this theme and offers recommendations regarding the appropriate strategic actions in relation to new media. By contrast, there is relatively little attention to the assumptions behind strategic thinking in relation to new media. This article reviews the most influential strategy journals, asking how new media are conceptualized. It is shown that strategy scholars have a tendency to place themselves in either a deterministic or a voluntaristic camp with regard to technology. Strategy is portrayed as either determined by new media or a matter of rationally using them. Additionally, most articles portray the organization as a neatly delineated entity, where new media are relevant either...

  4. Commentary: profiling by appearance and assumption: beyond race and ethnicity.

    Science.gov (United States)

    Sapién, Robert E

    2010-04-01

    In this issue, Acquaviva and Mintz highlight issues regarding racial profiling in medicine and how it is perpetuated through medical education: Physicians are taught to make subjective determinations of race and/or ethnicity in case presentations, and such assumptions may affect patient care. The author of this commentary believes that the discussion should be broadened to include profiling on the basis of general appearance. The author reports personal experiences as someone who has profiled and been profiled by appearance, sometimes by skin color, sometimes by other physical attributes. In the two cases detailed here, patient care could have been affected had the author not become aware of his practices in such situations. The author advocates raising awareness of profiling in the broader sense through training.

  5. HYPROLOG: A New Logic Programming Language with Assumptions and Abduction

    DEFF Research Database (Denmark)

    Christiansen, Henning; Dahl, Veronica

    2005-01-01

    The language shows a novel flexibility in the interaction between the different paradigms, including all additional built-in predicates and constraint solvers that may be available. Assumptions and abduction are especially useful for language processing, and we can show how HYPROLOG works seamlessly together with the grammar notation provided by the underlying Prolog system. An operational semantics is given which complies with standard declarative semantics for the "pure" sublanguages, while for the full HYPROLOG language it must be taken as definition. The implementation is straightforward and seems to provide, for abduction, the most efficient of known implementations; the price, however, is a limited use of negations. The main difference with respect to previous implementations of abduction is that we avoid any level of metainterpretation by having Prolog execute the deductive steps directly and by treating abducibles (and...

  6. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  7. Deconstructing Community for Conservation: Why Simple Assumptions are Not Sufficient.

    Science.gov (United States)

    Waylen, Kerry Ann; Fischer, Anke; McGowan, Philip J K; Milner-Gulland, E J

    2013-01-01

    Many conservation policies advocate engagement with local people, but conservation practice has sometimes been criticised for a simplistic understanding of communities and social context. To counter this, this paper explores social structuring and its influences on conservation-related behaviours at the site of a conservation intervention near Pipar forest, within the Seti Khola valley, Nepal. Qualitative and quantitative data from questionnaires and Rapid Rural Appraisal demonstrate how links between groups directly and indirectly influence behaviours of conservation relevance (including existing and potential resource-use and proconservation activities). For low-status groups the harvesting of resources can be driven by others' preference for wild foods, whilst perceptions of elite benefit-capture may cause reluctance to engage with future conservation interventions. The findings reiterate the need to avoid relying on simple assumptions about 'community' in conservation, and particularly the relevance of understanding relationships between groups, in order to understand natural resource use and implications for conservation.

  8. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
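
    For orientation, an LCOE of the kind evaluated here combines the annualized capital cost with O&M and fuel costs per unit of generation. A minimal sketch with invented input values (not taken from any of the six data sets):

        # Sketch of a simple levelized cost of energy (LCOE) calculation:
        # annualized capital cost plus fixed/variable O&M and fuel, per MWh.
        # All input values are illustrative, not from the reviewed data sets.

        def crf(rate, years):
            # capital recovery factor: annualizes an up-front cost over plant life
            return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

        def lcoe(capex_per_kw, fixed_om_per_kw_yr, var_om_per_mwh,
                 fuel_per_mmbtu, heat_rate_btu_per_kwh, cap_factor,
                 rate=0.07, life_yr=30):
            mwh_per_kw_yr = 8760 * cap_factor / 1000           # annual MWh per kW
            capital = capex_per_kw * crf(rate, life_yr) / mwh_per_kw_yr
            fixed_om = fixed_om_per_kw_yr / mwh_per_kw_yr
            fuel = fuel_per_mmbtu * heat_rate_btu_per_kwh / 1000  # $/MWh
            return capital + fixed_om + var_om_per_mwh + fuel

        # e.g., a gas combined-cycle-like plant (placeholder numbers)
        print(f"LCOE = {lcoe(1000, 15, 3.5, 4.0, 7000, 0.85):.1f} $/MWh")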

  9. Nonlinear dynamics in work groups with Bion's basic assumptions.

    Science.gov (United States)

    Dal Forno, Arianna; Merlone, Ugo

    2013-04-01

    According to several authors, Bion's contribution has been a landmark in the thought and conceptualization of the unconscious functioning of human beings in groups. We provide a mathematical model of group behavior in which heterogeneous members may behave as if they shared, to different degrees, what in Bion's theory is a common basic assumption. Our formalization combines both individual characteristics and group dynamics. With this formalization we analyze the group dynamics as the result of the individual dynamics of the members and prove that, under some conditions, each individual reproduces the group dynamics on a different scale. In particular, we provide an example in which the chaotic behavior of the group is reflected in each member.
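
    The abstract does not reproduce the model's equations. Purely as an illustrative assumption (not the authors' formalization), the qualitative claim that each member mirrors a chaotic group dynamic at an individual scale can be sketched with a toy map:

        # Toy illustration (not the authors' model): a chaotic group-level state G
        # evolves by a logistic map, and each member i expresses it scaled by a
        # personal adherence coefficient c_i to the shared basic assumption.
        r = 3.9                        # logistic parameter in the chaotic regime
        c = [0.3, 0.6, 0.9]            # heterogeneous adherence of three members
        G = 0.42                       # initial group state (arbitrary)
        history = {i: [] for i in range(len(c))}
        for _ in range(50):
            G = r * G * (1.0 - G)      # group dynamics
            for i, ci in enumerate(c):
                history[i].append(ci * G)   # member i mirrors G at its own scale
        # each member's trajectory is the group trajectory rescaled by c_i
        print([round(h, 3) for h in history[0][:5]])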

  10. Unconditionally Secure and Universally Composable Commitments from Physical Assumptions

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Scafuro, Alessandra

    2013-01-01

    We present a constant-round unconditional black-box compiler that transforms any ideal (i.e., statistically-hiding and statistically-binding) straight-line extractable commitment scheme into an extractable and equivocal commitment scheme, therefore yielding UC-security [9]. We exemplify the usefulness of our compiler by providing two (constant-round) instantiations of ideal straight-line extractable commitment based on (malicious) PUFs [36] and stateless tamper-proof hardware tokens [26], therefore achieving the first unconditionally UC-secure commitment with malicious PUFs and stateless tokens, respectively. Our constructions are secure for adversaries creating arbitrarily malicious stateful PUFs/tokens. Previous results with malicious PUFs used either computational assumptions to achieve UC-secure commitments or were unconditionally secure but only in the indistinguishability sense [36]. Similarly...

  11. Assumptions of Customer Knowledge Enablement in the Open Innovation Process

    Directory of Open Access Journals (Sweden)

    Jokubauskienė Raminta

    2017-08-01

    In the scientific literature, open innovation is one of the most effective means to innovate and gain a competitive advantage. In practice, there is a variety of open innovation activities, but, nevertheless, customers stand as the cornerstone in this area, since customers' knowledge is one of the most important sources of new knowledge and ideas. When evaluating the context in which open innovation and customer knowledge enablement interact, it is necessary to take into account the importance of customer knowledge management. It is increasingly highlighted that customers' knowledge management facilitates the creation of innovations. However, other factors that influence open innovation, and, at the same time, customers' knowledge management, should also be examined. This article presents a theoretical model which reveals the assumptions of the open innovation process and their impact on the firm's performance.

  12. Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

    International Nuclear Information System (INIS)

    Bresson, Emmanuel; Chevassut, Olivier; Pointcheval, David

    2002-01-01

    Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, and each holding public-private keys, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong-corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model
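
    For intuition only, the up-flow/broadcast skeleton underlying group Diffie-Hellman can be sketched as below; this toy omits the authentication and the dynamic join/leave operations that are the paper's actual subject, and uses a demonstration modulus far too small for real security:

        # Toy unauthenticated group Diffie-Hellman (GDH-style up-flow).
        # The paper's protocol adds authentication and dynamic membership.
        import secrets

        p = 2**127 - 1     # demo modulus (a Mersenne prime); NOT secure in practice
        g = 5

        def upflow_step(flow, x):
            # Raise every "missing one exponent" value to x, keep the old
            # cumulative value (the one now missing x), append the new cumulative.
            return [pow(u, x, p) for u in flow[:-1]] + [flow[-1], pow(flow[-1], x, p)]

        n = 4
        xs = [secrets.randbelow(p - 3) + 2 for _ in range(n)]   # private exponents
        flow = [g]
        for x in xs:
            flow = upflow_step(flow, x)

        broadcast = flow[:-1]   # broadcast[i] = g^{product of all exponents except xs[i]}
        keys = [pow(broadcast[i], xs[i], p) for i in range(n)]
        assert all(k == flow[-1] for k in keys)   # everyone derives g^{x1 x2 ... xn}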

  13. Breakdown of Hydrostatic Assumption in Tidal Channel with Scour Holes

    Directory of Open Access Journals (Sweden)

    Chunyan Li

    2016-10-01

    The hydrostatic condition is a common assumption for tidal and subtidal motions in oceans and estuaries. Theories with this assumption have been largely successful. However, there is no definite criterion separating the hydrostatic from the non-hydrostatic regimes in real applications, because real problems often have multiple scales. With the increased refinement of high-resolution numerical models encompassing smaller and smaller spatial scales, the need for non-hydrostatic models is increasing. To evaluate the vertical motion over bathymetric changes in tidal channels and assess the validity of the hydrostatic approximation, we conducted observations using a vessel-based acoustic Doppler current profiler (ADCP). Observations were made along a straight channel 18 times over two scour holes, 25 m deep and separated by 330 m, in and out of an otherwise flat 8 m deep tidal pass leading to Lake Pontchartrain, over a time period of 8 hours covering part of the diurnal tidal cycle. In 11 of the 18 passages over the scour holes, strong upwelling and downwelling resulted in the breakdown of the hydrostatic condition. The maximum observed vertical velocity was ~0.35 m/s, a high value for a tidal channel, and the estimated vertical acceleration reached a high value of 1.76 × 10⁻² m/s². Analysis demonstrated that the barotropic non-hydrostatic acceleration was dominant. The non-hydrostatic flow was caused by the strong vertical motion forced over the steep slopes of the scour holes. This demonstrates that in such a system, bathymetric variation can lead to the breakdown of hydrostatic conditions. Models with hydrostatic restrictions will not be able to correctly capture the dynamics of such a system with significant bathymetric variations, particularly during strong tidal currents.

  14. Halo-independent direct detection analyses without mass assumptions

    International Nuclear Information System (INIS)

    Anderson, Adam J.; Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2015-01-01

    Results from direct detection experiments are typically interpreted by employing an assumption about the dark matter velocity distribution, with results presented in the m_χ−σ_n plane. Recently, methods which are independent of the DM halo velocity distribution have been developed which present results in the v_min−g-tilde plane, but these in turn require an assumption on the dark matter mass. Here we present an extension of these halo-independent methods for dark matter direct detection which does not require a fiducial choice of the dark matter mass. With a change of variables from v_min to nuclear recoil momentum (p_R), the full halo-independent content of an experimental result for any dark matter mass can be condensed into a single plot as a function of a new halo integral variable, which we call h-tilde(p_R). The entire family of conventional halo-independent g-tilde(v_min) plots for all DM masses is directly found from the single h-tilde(p_R) plot through a simple rescaling of axes. By considering results in h-tilde(p_R) space, one can determine whether two experiments are inconsistent for all masses and all physically possible halos, or for what range of dark matter masses the results are inconsistent for all halos, without the necessity of multiple g-tilde(v_min) plots for different DM masses. We conduct a sample analysis comparing the CDMS II Si events to the null results from LUX, XENON10, and SuperCDMS using our method, and discuss how the results can be strengthened by imposing the physically reasonable requirement of a finite halo escape velocity.
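
    The change of variables rests on standard elastic scattering kinematics (the relations below are standard, though the paper's precise notation may differ). A nuclear recoil of energy $E_R$ off a nucleus of mass $m_N$ carries momentum $p_R$, and the minimum DM speed able to produce it is

        $$p_R = \sqrt{2 m_N E_R}, \qquad v_{\min} = \frac{p_R}{2\mu_{\chi N}}, \qquad \mu_{\chi N} = \frac{m_\chi m_N}{m_\chi + m_N},$$

    so a result plotted against $p_R$ maps onto the $v_{\min}$ axis for any chosen $m_\chi$ through the mass-dependent rescaling by $1/(2\mu_{\chi N})$, which is exactly the "simple rescaling of axes" referred to above.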

  15. Flashback analysis in tangential swirl burners; Analisis de reflujo de flama en combustores tangenciales de flujo giratorio

    Energy Technology Data Exchange (ETDEWEB)

    Valera-Medina, A. [CIATEQ A.C., Centro de Tecnologia Avanzada, Queretaro (Mexico)]. E-mail: agustin.valera@ciateq.mx; Syred, N. Abdulsada, M. [United Kingdom Cardiff University (United Kingdom)]. E-mails: syredn@cf.ac.uk; abdulsadam@cf.ac.uk

    2011-10-15

    Premixed lean combustion is widely used in combustion processes due to the benefits of good flame stability and blow-off limits coupled with low NOx emissions. However, the use of novel fuels and complex flows has increased concern about flashback, especially for the use of syngas and highly hydrogen-enriched blends. This paper therefore describes a combined practical and numerical approach to studying the phenomenon in order to reduce the effect of flashback in a pilot-scale 100 kW tangential swirl burner. Natural gas is used to establish the baseline results and the effects of changing different parameters. The flashback phenomenon is studied with the use of high-speed photography. The use of a central fuel injector demonstrates substantial benefits in terms of flashback resistance, eliminating coherent structures that may appear in the flow channels. The critical boundary velocity gradient is used for characterization, both via the original Lewis and von Elbe formula and via analysis using CFD and investigation of boundary-layer conditions at the flame front.
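
    The critical boundary velocity gradient criterion admits a compact numeric sketch: under laminar Poiseuille assumptions the wall velocity gradient of a round duct is 4Q/(πR³), and flashback is expected when it falls below a critical value of order S_L/d_p. All numbers below are placeholders, not the burner's actual operating data:

        # Sketch of the critical boundary velocity gradient criterion
        # (Lewis & von Elbe): flashback expected when g_wall < g_crit ~ S_L / d_p.
        import math

        Q = 0.002      # volumetric flow rate (m^3/s), assumed
        R = 0.015      # duct radius (m), assumed
        S_L = 0.38     # laminar flame speed (m/s), ~stoichiometric methane-air
        d_p = 2.0e-3   # penetration distance (m), assumed (order of quenching distance)

        g_wall = 4 * Q / (math.pi * R ** 3)   # Poiseuille wall gradient, 4Q/(pi R^3)
        g_crit = S_L / d_p                    # critical gradient estimate

        print(f"g_wall = {g_wall:.0f} 1/s, g_crit = {g_crit:.0f} 1/s")
        print("flashback expected" if g_wall < g_crit else "flashback resisted")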

  16. Weak convergence of Jacobian determinants under asymmetric assumptions

    Directory of Open Access Journals (Sweden)

    Teresa Alberico

    2012-05-01

    Let $\Omega$ be a sufficiently smooth bounded open set in $\mathbb{R}^2$ and let $f_k=(u_k,v_k)$ and $f=(u,v)$ be mappings belonging to the Sobolev space $W^{1,2}(\Omega,\mathbb{R}^2)$. We prove that if the sequence of Jacobians $J_{f_k}$ converges to a measure $\mu$ in the sense of measures, and if one allows different assumptions on the two components of $f_k$ and $f$, e.g. $$u_k \rightharpoonup u \ \text{weakly in } W^{1,2}(\Omega), \qquad v_k \rightharpoonup v \ \text{weakly in } W^{1,q}(\Omega)$$ for some $q\in(1,2)$, then $$d\mu = J_f\,dz.$$ Moreover, we show that this result is optimal in the sense that the conclusion fails for $q=1$. On the other hand, we prove that the conclusion remains valid also in the case $q=1$, but it is then necessary to require that $u_k$ converges weakly to $u$ in a Zygmund-Sobolev space with a slightly higher degree of regularity than $W^{1,2}(\Omega)$, precisely $$u_k \rightharpoonup u \ \text{weakly in } W^{1,L^2\log^\alpha L}(\Omega)$$ for some $\alpha>1$.

  17. On some unwarranted tacit assumptions in cognitive neuroscience.

    Science.gov (United States)

    Mausfeld, Rainer

    2012-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input-output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings.

  18. On Some Unwarranted Tacit Assumptions in Cognitive Neuroscience†

    Science.gov (United States)

    Mausfeld, Rainer

    2011-01-01

    The cognitive neurosciences are based on the idea that the level of neurons or neural networks constitutes a privileged level of analysis for the explanation of mental phenomena. This paper brings to mind several arguments to the effect that this presumption is ill-conceived and unwarranted in light of what is currently understood about the physical principles underlying mental achievements. It then scrutinizes the question why such conceptions are nevertheless currently prevailing in many areas of psychology. The paper argues that corresponding conceptions are rooted in four different aspects of our common-sense conception of mental phenomena and their explanation, which are illegitimately transferred to scientific enquiry. These four aspects pertain to the notion of explanation, to conceptions about which mental phenomena are singled out for enquiry, to an inductivist epistemology, and, in the wake of behavioristic conceptions, to a bias favoring investigations of input–output relations at the expense of enquiries into internal principles. To the extent that the cognitive neurosciences methodologically adhere to these tacit assumptions, they are prone to turn into a largely a-theoretical and data-driven endeavor while at the same time enhancing the prospects for receiving widespread public appreciation of their empirical findings. PMID:22435062

  19. Are Gaussian spectra a viable perceptual assumption in color appearance?

    Science.gov (United States)

    Mizokami, Yoko; Webster, Michael A

    2012-02-01

    Natural illuminant and reflectance spectra can be roughly approximated by a linear model with as few as three basis functions, and this has suggested that the visual system might construct a linear representation of the spectra by estimating the weights of these functions. However, such models do not accommodate nonlinearities in color appearance, such as the Abney effect. Previously, we found that these nonlinearities are qualitatively consistent with a perceptual inference that stimulus spectra are instead roughly Gaussian, with the hue tied to the inferred centroid of the spectrum [J. Vision 6(9), 12 (2006)]. Here, we examined to what extent a Gaussian inference provides a sufficient approximation of natural color signals. Reflectance and illuminant spectra from a wide set of databases were analyzed to test how well the curves could be fit by either a simple Gaussian with three parameters (amplitude, peak wavelength, and standard deviation) versus the first three principal component analysis components of standard linear models. The resulting Gaussian fits were comparable to linear models with the same degrees of freedom, suggesting that the Gaussian model could provide a plausible perceptual assumption about stimulus spectra for a trichromatic visual system. © 2012 Optical Society of America
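
    A minimal sketch of the Gaussian fitting described above, applied to a synthetic reflectance spectrum (the spectrum, wavelength grid, and starting values are invented; the linear-model baseline would instead project the spectrum onto the first three principal components of a spectral database):

        # Sketch: fit a 3-parameter Gaussian (amplitude, peak wavelength, width)
        # to a reflectance spectrum. Data are synthetic.
        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(wl, amp, mu, sigma):
            return amp * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

        wl = np.linspace(400, 700, 61)   # visible range (nm)
        reflectance = gaussian(wl, 0.7, 560, 60) + 0.02 * np.random.randn(wl.size)

        (amp, mu, sigma), _ = curve_fit(gaussian, wl, reflectance, p0=[0.5, 550, 50])
        rms = np.sqrt(np.mean((reflectance - gaussian(wl, amp, mu, sigma)) ** 2))
        print(f"peak = {mu:.0f} nm, width = {sigma:.0f} nm, RMS error = {rms:.3f}")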

  20. PKreport: report generation for checking population pharmacokinetic model assumptions

    Directory of Open Access Journals (Sweden)

    Li Jun

    2011-05-01

    Background: Graphics play an important and unique role in population pharmacokinetic (PopPK) model building by exploring hidden structure among data before modeling, evaluating model fit, and validating results after modeling. Results: The work described in this paper is a new R package called PKreport, which is able to generate a collection of plots and statistics for testing model assumptions, visualizing data and diagnosing models. The metric system is utilized as the currency for communicating between data sets and the package to generate special-purpose plots. It provides ways to match output from diverse software such as NONMEM, Monolix, the R nlme package, etc. The package is implemented with an S4 class hierarchy, and offers an efficient way to access the output from NONMEM 7. The final reports take advantage of the web browser as user interface to manage and visualize plots. Conclusions: PKreport provides (1) a flexible and efficient R class to store and retrieve NONMEM 7 output; (2) automated plots for users to visualize data and models; (3) automatically generated R scripts that are used to create the plots; (4) an archive-oriented management tool for users to store, retrieve and modify figures; (5) high-quality graphs based on the R packages lattice and ggplot2. The general architecture, running environment and statistical methods can be readily extended with the R class hierarchy. PKreport is free to download at http://cran.r-project.org/web/packages/PKreport/index.html.

  1. Stream of consciousness: Quantum and biochemical assumptions regarding psychopathology.

    Science.gov (United States)

    Tonello, Lucio; Cocchi, Massimo; Gabrielli, Fabio; Tuszynski, Jack A

    2017-04-01

    The accepted paradigms of mainstream neuropsychiatry appear to be incompletely adequate and in various cases offer equivocal analyses. However, a growing number of new approaches are being proposed that suggest the emergence of paradigm shifts in this area. In particular, quantum theories of mind, brain and consciousness seem to offer a profound change to the current approaches. Unfortunately these quantum paradigms harbor at least two serious problems. First, they are simply models, theories, and assumptions, with no convincing experiments supporting their claims. Second, they deviate from contemporary mainstream views of psychiatric illness, and do so in revolutionary ways. We suggest a possible way to integrate experimental neuroscience with quantum models in order to address outstanding issues in psychopathology. A key role is played by the phenomenon called the "stream of consciousness", which can be linked to the so-called "Gamma Synchrony" (GS) clearly demonstrated in EEG data. In our novel proposal, a unipolar depressed patient could be seen as a subject with an altered stream of consciousness. In particular, some clues suggest that depression is linked to an "increased power" stream of consciousness. It is additionally suggested that such an approach to depression might be extended to psychopathology in general, with potential benefits to diagnostics and therapeutics in neuropsychiatry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Tangential View and Intraoperative Three-Dimensional Fluoroscopy for the Detection of Screw-Misplacements in Volar Plating of Distal Radius Fractures

    Science.gov (United States)

    Rausch, Sascha; Marintschev, Ivan; Graul, Isabel; Wilharm, Arne; Klos, Kajetan; Hofmann, Gunther O.; Florian Gras, Marc

    2015-01-01

    Background: Volar locking plate fixation has become the gold standard in the treatment of unstable distal radius fractures. Juxta-articular screws should be placed as close as possible to the subchondral zone, at an optimized length to buttress the articular surface and address the contralateral cortical bone. On the other hand, intra-articular screw misplacement will promote osteoarthritis, while penetration of the contralateral bone surface may result in tendon irritations and ruptures. The intraoperative control of fracture reduction and implant positioning is limited in the common postero-anterior and true lateral two-dimensional (2D) fluoroscopic views. Therefore, additional 2D fluoroscopic views in different projections and intraoperative three-dimensional (3D) fluoroscopy were recently reported. Nevertheless, their utility has raised controversy. Objectives: The following questions should be answered in this study: 1) Are the additional tangential view and intraoperative 3D fluoroscopy useful in clinical routine to detect persistent fracture dislocations and screw misplacements, to prevent revision surgery? 2) Which is the most dangerous plate hole for screw misplacement? Patients and Methods: A total of 48 patients (36 females and 13 males) with 49 unstable distal radius fractures (22 type 23-A, 2 type 23-B, and 25 type 23-C) were treated with a 2.4 mm variable-angle LCP Two-Column volar distal radius plate (Synthes GmbH, Oberdorf, Switzerland) during a 10-month period. After final fixation according to the manufacturer's technique guide and control of implant placement in the two common perpendicular 2D fluoroscopic images (postero-anterior and true lateral), an additional tangential view and an intraoperative 3D fluoroscopic scan were performed to control the anatomic fracture reduction and screw placement. Intraoperative revision rates due to screw misplacements (intra-articular or overlength) were evaluated. Additionally, the number of surgeons

  3. Cardiac and pulmonary dose reduction for tangentially irradiated breast cancer, utilizing deep inspiration breath-hold with audio-visual guidance, without compromising target coverage

    International Nuclear Information System (INIS)

    Vikstroem, Johan; Hjelstuen, Mari H.B.; Mjaaland, Ingvil; Dybvik, Kjell Ivar

    2011-01-01

    Background and purpose. Cardiac disease and pulmonary complications are documented risk factors in tangential breast irradiation. Respiratory gating radiotherapy provides a possibility to substantially reduce cardiopulmonary doses. This CT planning study quantifies the reduction of radiation doses to the heart and lung, using deep inspiration breath-hold (DIBH). Patients and methods. Seventeen patients with early breast cancer, referred for adjuvant radiotherapy, were included. For each patient two CT scans were acquired; the first during free breathing (FB) and the second during DIBH. The scans were monitored by the Varian RPM respiratory gating system. Audio coaching and visual feedback (audio-visual guidance) were used. The treatment planning of the two CT studies was performed with conformal tangential fields, focusing on good coverage (V95>98%) of the planning target volume (PTV). Dose-volume histograms were calculated and compared. Doses to the heart, left anterior descending (LAD) coronary artery, ipsilateral lung and the contralateral breast were assessed. Results. Compared to FB, the DIBH-plans obtained lower cardiac and pulmonary doses, with equal coverage of PTV. The average mean heart dose was reduced from 3.7 to 1.7 Gy and the number of patients with >5% heart volume receiving 25 Gy or more was reduced from four to one of the 17 patients. With DIBH the heart was completely out of the beam portals for ten patients, with FB this could not be achieved for any of the 17 patients. The average mean dose to the LAD coronary artery was reduced from 18.1 to 6.4 Gy. The average ipsilateral lung volume receiving more than 20 Gy was reduced from 12.2 to 10.0%. Conclusion. Respiratory gating with DIBH, utilizing audio-visual guidance, reduces cardiac and pulmonary doses for tangentially treated left sided breast cancer patients without compromising the target coverage

  4. Poster — Thur Eve — 67: Tangential Modulated Arc Therapy (TMAT): A Novel Technique using Megavoltage Photons for the Treatment of Superficial Disease

    International Nuclear Information System (INIS)

    Hadsell, M; Xing, L; Bush, K

    2014-01-01

    We propose a new type of treatment that employs a modulated tangential photon field to provide superior coverage of complex superficial targets when compared to other commonly employed methods, and to drastically reduce dose to the underlying sensitive structures often present in these cases. TMAT plans were formulated for a set of four representative cases: 1. scalp sarcoma, 2. posterior chest-wall sarcoma, 3. pleural mesothelioma with intact lung, 4. chest wall with deep inframammary nodes. For these cases, asymmetric jaw placement, angular limitations, and central isocenter placements were used to force optimization solutions with beam lines tangential to the body surface. When compared with unrestricted modulated arcs, the tangential arc scalp treatment reduced the maximum and mean doses delivered to the brain by 33 Gy (from 55 Gy to 22 Gy) and 6 Gy (from 14 Gy to 8 Gy), respectively. In the posterior chest-wall case, the V10 for the ipsilateral lung was kept below 5% while the 45 Gy target prescription coverage was retained at over 97%. For the breast chest-wall case, the TMAT plan reduced the high dose to the ipsilateral lung and heart by a factor of 2-3 when compared to classic, laterally opposed tangents, and reduced the V5 by 40% when compared to standard modulated arcs. TMAT has outperformed the conventional modalities of treatment for superficial lesions used in our clinic. We hope that with the advent of digitally controlled linear accelerators, we can uncover further benefits of this new technique and extend its applicability to a wider section of the patient population.

  5. Poster — Thur Eve — 67: Tangential Modulated Arc Therapy (TMAT): A Novel Technique using Megavoltage Photons for the Treatment of Superficial Disease

    Energy Technology Data Exchange (ETDEWEB)

    Hadsell, M; Xing, L; Bush, K [Department of Radiation Oncology, Stanford University Medical Center (United States)

    2014-08-15

    We propose a new type of treatment that employs a modulated tangential photon field to provide superior coverage of complex superficial targets when compared to other commonly employed methods, and to drastically reduce dose to the underlying sensitive structures often present in these cases. TMAT plans were formulated for a set of four representative cases: 1. scalp sarcoma, 2. posterior chest-wall sarcoma, 3. pleural mesothelioma with intact lung, 4. chest wall with deep inframammary nodes. For these cases, asymmetric jaw placement, angular limitations, and central isocenter placements were used to force optimization solutions with beam lines tangential to the body surface. When compared with unrestricted modulated arcs, the tangential arc scalp treatment reduced the maximum and mean doses delivered to the brain by 33 Gy (from 55 Gy to 22 Gy) and 6 Gy (from 14 Gy to 8 Gy), respectively. In the posterior chest-wall case, the V10 for the ipsilateral lung was kept below 5% while the 45 Gy target prescription coverage was retained at over 97%. For the breast chest-wall case, the TMAT plan reduced the high dose to the ipsilateral lung and heart by a factor of 2-3 when compared to classic, laterally opposed tangents, and reduced the V5 by 40% when compared to standard modulated arcs. TMAT has outperformed the conventional modalities of treatment for superficial lesions used in our clinic. We hope that with the advent of digitally controlled linear accelerators, we can uncover further benefits of this new technique and extend its applicability to a wider section of the patient population.

  6. Providing security assurance in line with national DBT assumptions

    Science.gov (United States)

    Bajramovic, Edita; Gupta, Deeksha

    2017-01-01

    As worldwide energy requirements increase alongside climate change and energy security considerations, States are considering nuclear power to fulfill their electricity requirements and decrease their dependence on carbon fuels. New nuclear power plants (NPPs) must have comprehensive cybersecurity measures integrated into their design, structure, and processes. In the absence of effective cybersecurity measures, the impact of nuclear security incidents can be severe. Some of the current nuclear facilities were not specifically designed and constructed to deal with the new threats, including targeted cyberattacks. Thus, newcomer countries must consider the Design Basis Threat (DBT) as one of the security fundamentals during the design of physical and cyber protection systems of nuclear facilities. IAEA NSS 10 describes the DBT as a "comprehensive description of the motivation, intentions and capabilities of potential adversaries against which protection systems are designed and evaluated". Nowadays, many threat actors, including hacktivists, insider threats, cyber criminals, and state and non-state groups (terrorists), pose security risks to nuclear facilities. Threat assumptions are made on a national level; consequently, the threat assessment closely affects the design structures of nuclear facilities. Recent security incidents, e.g., the Stuxnet worm (an advanced persistent threat) and the theft of sensitive information at a South Korean nuclear power plant (an insider threat), have shown that such attacks should be considered among the top threats to nuclear facilities. Therefore, the cybersecurity context is essential for the secure and safe use of nuclear power. In addition, States should include multiple DBT scenarios in order to protect various target materials, types of facilities, and adversary objectives. Development of a comprehensive DBT is a precondition for the establishment and further improvement of domestic state nuclear-related regulations in the

  7. Investigation of the flow, combustion, heat-transfer and emissions from a 609MW utility tangentially fired pulverized-coal boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Caillat, Sébastien; Harion, Jean-Luc.

    2002-01-01

    A numerical approach is given to investigate the performance of a 609 MW tangentially fired pulverized-coal boiler, with emphasis on the formation mechanism of gas-flow deviation and uneven wall temperatures in the crossover pass, and on NOx emissions. To achieve this purpose and obtain a reliable solution, several strategies differing from those of existing studies are used. Good agreement of the simulation results with global design parameters and site operation records indicates that the simulation is reasonable, and thus that the conclusions regarding gas-flow deviation, emissions, combustion and heat transfer...

  8. 7 CFR 3550.163 - Transfer of security and assumption of indebtedness.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Transfer of security and assumption of indebtedness... § 3550.163 Transfer of security and assumption of indebtedness. (a) General policy. RHS mortgages contain... transferred with an assumption of the indebtedness. If it is in the best interest of the Government, RHS will...

  9. School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey

    Science.gov (United States)

    Sabanci, Ali

    2008-01-01

    This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…

  10. Challenging Assumptions of International Public Relations: When Government Is the Most Important Public.

    Science.gov (United States)

    Taylor, Maureen; Kent, Michael L.

    1999-01-01

    Explores assumptions underlying Malaysia's and the United States' public-relations practice. Finds many assumptions guiding Western theories and practices are not applicable to other countries. Examines the assumption that the practice of public relations targets a variety of key organizational publics. Advances international public-relations…

  11. 75 FR 63380 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2010-10-15

    ...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in November 2010... title IV of the Employee Retirement Income Security Act of 1974. ] PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  12. 76 FR 2578 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-01-14

    ...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in February 2011... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  13. 77 FR 74353 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-12-14

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  14. 78 FR 2881 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-01-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  15. 77 FR 28477 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-05-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June... title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in the... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  16. 78 FR 62426 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-10-22

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  17. 77 FR 8730 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-02-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...

  18. 77 FR 41270 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-07-13

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...

  19. 76 FR 41689 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-07-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in... same. The interest assumptions are intended to reflect current conditions in the financial and annuity...

  20. 77 FR 68685 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-11-16

    ... regulation for valuation dates in December 2012. The interest assumptions are used for paying benefits under... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  1. 77 FR 22215 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-04-13

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in May... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  2. 78 FR 49682 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-08-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  3. 78 FR 68739 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-11-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions in the... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  4. 75 FR 69588 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2010-11-15

    ... interest assumptions under the regulation for valuation dates in December 2010. Interest assumptions are...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...

  5. 77 FR 62433 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-10-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... interest assumptions--for paying plan benefits under terminating single-employer plans covered by title IV... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  6. 76 FR 8649 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-02-15

    ...-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in March 2011... title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in... interest assumptions are intended to reflect current conditions in the financial and annuity markets...

  7. 77 FR 48855 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-08-15

    ... to prescribe interest assumptions under the regulation for valuation dates in September 2012. The... interest assumptions are intended to reflect current conditions in the financial and annuity markets... Assets in Single-Employer Plans (29 CFR part 4044) prescribes interest assumptions for valuing benefits...

  8. Exploring the Influence of Ethnicity, Age, and Trauma on Prisoners' World Assumptions

    Science.gov (United States)

    Gibson, Sandy

    2011-01-01

    In this study, the author explores world assumptions of prisoners, how these assumptions vary by ethnicity and age, and whether trauma history affects world assumptions. A random sample of young and old prisoners, matched for prison location, was drawn from the New Jersey Department of Corrections prison population. Age and ethnicity had…

  9. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    Science.gov (United States)

    Ernst, Anja F; Albers, Casper J

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
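
    The distinction the review stresses, that normality is assumed of the errors rather than of the variables, is straightforward to check in practice. A minimal sketch on synthetic data, using a Shapiro-Wilk test on the residuals:

        # Sketch: checking the normality assumption on the *residuals* of a
        # linear regression (not on the raw variables). Data are synthetic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        x = rng.exponential(size=200)      # skewed predictor: perfectly acceptable
        y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=200)

        slope, intercept, *_ = stats.linregress(x, y)
        residuals = y - (intercept + slope * x)

        w, p = stats.shapiro(residuals)    # Shapiro-Wilk on the residuals
        print(f"Shapiro-Wilk on residuals: W = {w:.3f}, p = {p:.3f}")
        # a small p-value here would question the normality-of-errors assumption;
        # non-normal x or y alone does not violate the model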

  10. Structure optimization of a grain impact piezoelectric sensor and its application for monitoring separation losses on tangential-axial combine harvesters.

    Science.gov (United States)

    Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang

    2015-01-14

    Grain separation loss is a key parameter for weighing the performance of combine harvesters, and also a dominant factor for automatically adjusting their major working parameters. Traditional methods of monitoring separation losses rely mainly on manual effort, which requires a high labor intensity. With recent advancements in sensor technology, electronics and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real time. First, we developed a mathematical monitoring model based on detailed comparative data analysis of different feeding quantities. Then, we developed a grain impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in voltage amplitude and rise time of collision signals. To improve the sensor performance, theoretical analysis was performed from a structural vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal-axial combine harvester, and grain separation loss monitoring experiments were carried out in North China; the results showed that the monitoring method was feasible, with a largest relative measurement error of 4.63% when harvesting rice.

  11. A piezoelectric active sensing method for quantitative monitoring of bolt loosening using energy dissipation caused by tangential damping based on the fractal contact theory

    Science.gov (United States)

    Wang, Furui; Huo, Linsheng; Song, Gangbing

    2018-01-01

    Monitoring of bolt looseness is essential for ensuring the safety and reliability of equipment and structures with bolted connections. It is well known that tangential damping has an important influence on energy dissipation during wave propagation across bolted joints under different levels of preload. In this paper, the energy dissipation generated by tangential damping of bolted joints under different bolt preloads was modeled analytically based on fractal contact theory, taking the imperfect interface into account. The dissipated energy approaches a saturation value as the bolt preload increases, and the center frequency of the emitted signal is shown to affect the received energy significantly. Compared with previous similar studies based on experimental techniques and numerical methods, the investigation presented in this paper explains the phenomenon through its underlying mechanism, and achieves accurate quantitative monitoring of bolt looseness directly, rather than through an indirect failure index. Finally, the validity of the proposed method was demonstrated with an experimental study of a bolted joint under different preload levels.

  12. Being Explicit about Underlying Values, Assumptions and Views when Designing for Children in the IDC Community

    DEFF Research Database (Denmark)

    Skovbjerg, Helle Marie; Bekker, Tilde; Barendregt, Wolmet

    2016-01-01

    In this full-day workshop we want to discuss how the IDC community can make more explicit the underlying assumptions, values and views regarding children and childhood that inform design decisions. What assumptions do IDC designers and researchers make, and how can they be supported in reflecting......, and intends to share different approaches for uncovering and reflecting on values, assumptions and views about children and childhood in design....

  13. Evaluation of coat uniformity and taste-masking efficiency of irregular-shaped drug particles coated in a modified tangential spray fluidized bed processor.

    Science.gov (United States)

    Xu, Min; Heng, Paul Wan Sia; Liew, Celine Valeria

    2015-01-01

    To explore the feasibility of coating irregular-shaped drug particles in a modified tangential spray fluidized bed processor (FS processor) and evaluate the coated particles for their coat uniformity and taste-masking efficiency. Paracetamol particles were coated to 20%, w/w weight gain using a taste-masking polymer insoluble at neutral and basic pH but soluble at acidic pH. In-process samples (5, 10 and 15%, w/w coat) and the resultant coated particles (20%, w/w coat) were collected to monitor the changes in their physicochemical attributes. After coating to 20%, w/w coat weight gain, the usable yield was 81% with minimal agglomeration. The FS processor shows promise for direct coating of irregular-shaped drug particles with a wide size distribution. The coated particles with 15% coat were sufficiently taste-masked and could be useful for further application in orally disintegrating tablet platforms.

  14. Analytical and numerical calculation of magnetic field distribution in the slotted air-gap of tangential surface permanent-magnet motors

    Directory of Open Access Journals (Sweden)

    Boughrara Kamel

    2009-01-01

    This paper deals with the analytical and numerical analysis of the flux density distribution in the slotted air gap of permanent magnet motors with surface-mounted, tangentially magnetized permanent magnets. Two methods for magnetostatic field calculation are developed. The first is an analytical method in which the effect of stator slots is taken into account by modulating the magnetic field distribution by the complex relative air-gap permeance. The second is a numerical method using 2-D finite element analysis, with Dirichlet and anti-periodicity (periodicity) boundary conditions and Lagrange multipliers for the simulation of movement. The results obtained by the analytical method are compared to the results of finite-element analysis.
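
    As a hedged aside, the slot-modulation step described above is commonly written as follows in the complex-permeance literature; the notation here is ours, not necessarily this paper's own symbols:

        B_{\mathrm{slotted}}(r,\theta) \;=\; B_{\mathrm{slotless}}(r,\theta)\,\lambda^{*}(r,\theta),
        \qquad \lambda(r,\theta) = \lambda_a(r,\theta) + j\,\lambda_b(r,\theta),

    where λ is the complex relative air-gap permeance and λ* its complex conjugate.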

  15. Experimental and finite element study of the effect of temperature and moisture on the tangential tensile strength and fracture behavior in timber logs

    DEFF Research Database (Denmark)

    Larsen, Finn; Ormarsson, Sigurdur

    2014-01-01

    Timber is normally dried by kiln drying, in the course of which moisture-induced stresses and fractures can occur. Cracks occur primarily in the radial direction due to tangential tensile strength (TSt) that exceeds the strength of the material. The present article reports on experiments and numerical simulations by finite element modeling (FEM) concerning the TSt and fracture behavior of Norway spruce under various climatic conditions. Thin log disc specimens were studied to simplify the description of the moisture flow in the samples. The specimens designed for TSt were acclimatized to a moisture content (MC) of 18% before TSt tests at 20°C, 60°C, and 90°C were carried out. The maximum stress results of the disc simulations by FEM were compared with the experimental strength results at the same temperature levels. There is a rather good agreement between the results of modeling......

  16. Dialogic or Dialectic? The Significance of Ontological Assumptions in Research on Educational Dialogue

    Science.gov (United States)

    Wegerif, Rupert

    2008-01-01

    This article explores the relationship between ontological assumptions and studies of educational dialogue through a focus on Bakhtin's "dialogic". The term dialogic is frequently appropriated to a modernist framework of assumptions, in particular the neo-Vygotskian or sociocultural tradition. However, Vygotsky's theory of education is dialectic,…

  17. Making Sense out of Sex Stereotypes in Advertising: A Feminist Analysis of Assumptions.

    Science.gov (United States)

    Ferrante, Karlene

    Sexism and racism in advertising have been well documented, but feminist research aimed at social change must go beyond existing content analyses to ask how advertising is created. Analysis of the "mirror assumption" (advertising reflects society) and the "gender assumption" (advertising speaks in a male voice to female…

  18. Assumptions about Ecological Scale and Nature Knowing Best Hiding in Environmental Decisions

    Science.gov (United States)

    R. Bruce Hull; David P. Robertson; David Richert; Erin Seekamp; Gregory J. Buhyoff

    2002-01-01

    Assumptions about nature are embedded in people's preferences for environmental policy and management. The people we interviewed justified preservationist policies using four assumptions about nature knowing best: nature is balanced, evolution is progressive, technology is suspect, and the Creation is perfect. They justified interventionist policies using three...

  19. Recognising the Effects of Costing Assumptions in Educational Business Simulation Games

    Science.gov (United States)

    Eckardt, Gordon; Selen, Willem; Wynder, Monte

    2015-01-01

    Business simulations are a powerful way to provide experiential learning that is focussed, controlled, and concentrated. Inherent in any simulation, however, are numerous assumptions that determine feedback, and hence the lessons learnt. In this conceptual paper we describe some common cost assumptions that are implicit in simulation design and…

  20. Food-based dietary guidelines : some assumptions tested for the Netherlands

    NARCIS (Netherlands)

    Löwik, M.R.H.; Hulshof, K.F.A.M.; Brussaard, J.H.

    1999-01-01

    Recently, the concept of food-based dietary guidelines has been introduced by WHO and FAO. For this concept, several assumptions were necessary. The validity and potential consequences of some of these assumptions are discussed in this paper on the basis of the Dutch National Food Consumption Survey.

  1. 76 FR 63836 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-10-14

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in...-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The interest... regulation are the same. The interest assumptions are intended to reflect current conditions in the financial...

  2. 77 FR 2015 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2012-01-13

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... terminating single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974... the financial and annuity markets. Assumptions under the benefit payments regulation are updated...

  3. 78 FR 22192 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-04-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in May... paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...

  4. 78 FR 28490 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2013-05-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June... paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...

  5. 76 FR 27889 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-05-13

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in June...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee... reflect current conditions in the financial and annuity markets. Assumptions under the benefit payments...

  6. 76 FR 70639 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-11-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in... single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The... financial and annuity markets. Assumptions under the benefit payments regulation are updated monthly. This...

  7. 76 FR 50413 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-08-15

    ... Single-Employer Plans to prescribe interest assumptions under the regulation for valuation dates in...-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The interest... regulation are the same. The interest assumptions are intended to reflect current conditions in the financial...

  8. Sensitivity of the OMI ozone profile retrieval (OMO3PR) to a priori assumptions

    NARCIS (Netherlands)

    Mielonen, T.; De Haan, J.F.; Veefkind, J.P.

    2014-01-01

    We have assessed the sensitivity of the operational OMI ozone profile retrieval (OMO3PR) algorithm to a number of a priori assumptions. We studied the effect of stray light correction, surface albedo assumptions and a priori ozone profiles on the retrieved ozone profile. Then, we studied how to

  9. The Role of Policy Assumptions in Validating High-stakes Testing Programs.

    Science.gov (United States)

    Kane, Michael

    L. Cronbach has made the point that for validity arguments to be convincing to diverse audiences, they need to be based on assumptions that are credible to these audiences. The interpretations and uses of high stakes test scores rely on a number of policy assumptions about what should be taught in schools, and more specifically, about the content…

  10. The Arundel Assumption And Revision Of Some Large-Scale Maps ...

    African Journals Online (AJOL)

    The rather common practice of stating or using the Arundel Assumption without reference to appropriate mapping standards (except mention of its use for graphical plotting) is a major cause of inaccuracies in map revision. This paper describes an investigation to ascertain the applicability of the Assumption to the revision of ...

  11. Implicit Assumptions in Special Education Policy: Promoting Full Inclusion for Students with Learning Disabilities

    Science.gov (United States)

    Kirby, Moira

    2017-01-01

    Introduction: Everyday millions of students in the United States receive special education services. Special education is an institution shaped by societal norms. Inherent in these norms are implicit assumptions regarding disability and the nature of special education services. The two dominant implicit assumptions evident in the American…

  12. A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States

    Science.gov (United States)

    Ryff, Luiz Carlos

    1996-01-01

    A feasible experiment is discussed which allows us to prove a Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden variables states. Only assumptions based on direct experimental observation are needed.

  13. 7 CFR 765.402 - Transfer of security and loan assumption on same rates and terms.

    Science.gov (United States)

    2010-01-01

    ... of Security and Assumption of Debt § 765.402 Transfer of security and loan assumption on same rates... comprised solely of family members of the borrower assumes the debt along with the original borrower; (c) An individual with an ownership interest in the borrower entity buys the entire ownership interest of the other...

  14. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    Science.gov (United States)

    Ernst, Anja F.

    2017-01-01

    Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971
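
    As a hedged illustration of the misconception this review flags (the normality assumption concerns the errors, not the variables), the following sketch is ours, not the paper's; the data and seed are arbitrary. A strongly skewed predictor is unproblematic for regression, while the Shapiro-Wilk test correctly targets the residuals:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.exponential(size=200)               # skewed predictor: far from normal
        y = 1.0 + 2.0 * x + rng.normal(size=200)    # the errors, however, are normal

        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (intercept + slope * x)

        print("Shapiro-Wilk p, raw x:    ", stats.shapiro(x).pvalue)          # rejects normality
        print("Shapiro-Wilk p, residuals:", stats.shapiro(residuals).pvalue)  # no evidence against it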

  15. Statistical power to detect violation of the proportional hazards assumption when using the Cox regression model.

    Science.gov (United States)

    Austin, Peter C

    2018-01-01

    The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
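
    A minimal Monte Carlo sketch in the spirit of this study, written by us rather than taken from the authors: simulate crossing hazards that violate proportionality, then count how often lifelines' score test on scaled Schoenfeld residuals rejects. The data-generating process, sample sizes, and parameters are all illustrative assumptions:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.statistics import proportional_hazard_test

        rng = np.random.default_rng(42)
        n_sims, n, rejections = 100, 300, 0

        for _ in range(n_sims):
            x = rng.integers(0, 2, n)                          # binary covariate
            # Different Weibull shapes per group => hazards cross, violating PH
            t = np.where(x == 1, rng.weibull(0.7, n), rng.weibull(1.5, n))
            c = rng.exponential(2.0, n)                        # independent censoring times
            df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "x": x})
            cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
            res = proportional_hazard_test(cph, df, time_transform="rank")
            rejections += np.atleast_1d(res.p_value)[0] < 0.05

        print(f"Estimated power to detect the violation: {rejections / n_sims:.2f}")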

  16. Assumptions for well-known statistical techniques: Disturbing explanations for why they are seldom checked

    Directory of Open Access Journals (Sweden)

    Rink Hoekstra

    2012-05-01

    A valid interpretation of most statistical techniques requires that the criteria for one or more assumptions are met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another, more disquieting, explanation would be that violations of assumptions are hardly checked for in the first place. In this article a study is presented on whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. They were asked to analyze the data as they would their own data, for which often used and well-known techniques like the t-procedure, ANOVA and regression were required. It was found that they hardly ever checked for violations of assumptions. Interviews afterwards revealed that mainly lack of knowledge and nonchalance, rather than more rational reasons like being aware of the robustness of a technique or unfamiliarity with an alternative, seem to account for this behavior. These data suggest that merely encouraging people to check for violations of assumptions will not lead them to do so, and that the use of statistics is opportunistic.
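
    For concreteness, here is a small sketch of the kind of checks the interviewed researchers rarely performed, using standard SciPy routines; this is our illustration, and the groups and parameters are invented:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        groups = [rng.normal(loc=m, scale=s, size=40)
                  for m, s in [(0.0, 1.0), (0.3, 1.0), (0.1, 2.5)]]

        # Normality within each group (t-procedure / ANOVA assumption)
        for i, g in enumerate(groups):
            print(f"group {i}: Shapiro-Wilk p = {stats.shapiro(g).pvalue:.3f}")

        # Homogeneity of variances (ANOVA assumption): small p flags unequal variances
        print(f"Levene p = {stats.levene(*groups).pvalue:.4f}")

        # Only after such checks does the omnibus test carry its nominal guarantees
        print(f"ANOVA p  = {stats.f_oneway(*groups).pvalue:.4f}")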

  17. The crux of the method: assumptions in ordinary least squares and logistic regression.

    Science.gov (United States)

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
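
    The contrast is easy to demonstrate. In this sketch of ours (synthetic data, arbitrary coefficients), ordinary least squares fitted to a binary outcome produces "probabilities" outside [0, 1], while logistic regression cannot:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        x = rng.normal(size=500)
        p = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))    # true logistic relationship
        y = rng.binomial(1, p)                        # binary dependent variable

        X = sm.add_constant(x)
        ols = sm.OLS(y, X).fit()                      # linear probability model
        logit = sm.Logit(y, X).fit(disp=0)

        bad_ols = np.sum((ols.fittedvalues < 0) | (ols.fittedvalues > 1))
        bad_logit = np.sum((logit.predict(X) < 0) | (logit.predict(X) > 1))
        print(f"OLS predictions outside [0, 1]: {bad_ols}; logistic: {bad_logit}")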

  18. Validation/Uncertainty Quantification for Large Eddy Simulations of the heat flux in the Tangentially Fired Oxy-Coal Alstom Boiler Simulation Facility

    Energy Technology Data Exchange (ETDEWEB)

    Smith, P.J.; Eddings, E.G.; Ring, T.; Thornock, J.; Draper, T.; Isaac, B.; Rezeai, D.; Toth, P.; Wu, Y.; Kelly, K.

    2014-08-01

    The objective of this task is to produce predictive capability with quantified uncertainty bounds for the heat flux in commercial-scale, tangentially fired, oxy-coal boilers. Validation data came from the Alstom Boiler Simulation Facility (BSF) for tangentially fired, oxy-coal operation. This task brings together experimental data collected under Alstom’s DOE project for measuring oxy-firing performance parameters in the BSF with this University of Utah project for large eddy simulation (LES) and validation/uncertainty quantification (V/UQ). The Utah work includes V/UQ with measurements in the single-burner facility, where advanced strategies for O2 injection can be more easily controlled and data more easily obtained. Highlights of the work include: • Simulations of Alstom’s 15 megawatt (MW) BSF, exploring the uncertainty in thermal boundary conditions. A V/UQ analysis showed consistency between experimental results and simulation results, identifying uncertainty bounds on the quantities of interest for this system (Subtask 9.1). • A simulation study of the University of Utah’s oxy-fuel combustor (OFC) focused on heat flux (Subtask 9.2). A V/UQ analysis was used to show consistency between experimental and simulation results. • Measurement of heat flux and temperature with new optical diagnostic techniques and comparison with conventional measurements (Subtask 9.3). Various optical diagnostics systems were created to provide experimental data to the simulation team. The final configuration utilized a mid-wave infrared (MWIR) camera to measure heat flux and temperature, synchronized with a high-speed visible-light camera to apply two-color pyrometry for measuring temperature and soot concentration. • Collection of heat flux and temperature measurements in the University of Utah’s OFC for use in Subtasks 9.2 and 9.3 (Subtask 9.4). Several replicates were carried out to better assess the experimental error. Experiments were specifically designed for the

  19. A new scenario framework for climate change research: the concept of shared climate policy assumptions

    NARCIS (Netherlands)

    Kriegler, E.; Edmonds, J.; Hallegatte, S.; Ebi, K.L.; Kram, T.; Riahi, K.; Winkler, J.; van Vuuren, Detlef

    2014-01-01

    The new scenario framework facilitates the coupling of multiple socioeconomic reference pathways with climate model products using the representative concentration pathways. This will allow for improved assessment of climate impacts, adaptation and mitigation. Assumptions about climate policy play a

  20. Washington International Renewable Energy Conference (WIREC) 2008 Pledges. Methodology and Assumptions Summary

    Energy Technology Data Exchange (ETDEWEB)

    Babiuch, Bill [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bilello, Daniel E. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cowlin, Shannon C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wise, Alison [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2008-08-01

    This report describes the methodology and assumptions used by NREL in quantifying the potential CO2 reductions resulting from more than 140 governments, international organizations, and private-sector representatives pledging to advance the uptake of renewable energy.

  1. A note on the translation of conceptual data models into description logics: disjointness and covering assumptions

    CSIR Research Space (South Africa)

    Casini, G

    2012-10-01

    In this paper we propose two simple procedures to assist modelers with integrating these assumptions into their models, thereby allowing for a more complete translation into DLs....

  2. Tests of data quality, scaling assumptions, and reliability of the Danish SF-36

    DEFF Research Database (Denmark)

    Bjorner, J B; Damsgaard, M T; Watt, T

    1998-01-01

    We used general population data (n = 4084) to examine data completeness, response consistency, tests of scaling assumptions, and reliability of the Danish SF-36 Health Survey. We compared traditional multitrait scaling analyses to analyses using polychoric correlations and Spearman correlations...

  3. Who needs the assumption of opportunistic behavior? Transaction cost economics does not!

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2000-01-01

    The assumption of opportunistic behavior, familiar from transaction cost economics, has been and remains highly controversial. But opportunistic behavior, albeit undoubtedly an extremely important form of motivation, is not a necessary condition for the contractual problems studied by transaction cost economics.

  4. Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations.

    Science.gov (United States)

    Heckman, James

    1997-01-01

    Considers the use of instrumental variables to estimate effects of treatments on treated and randomly selected groups. Concludes that instrumental variable methods are extremely sensitive to assumptions about how people process information. (SK)

  5. Assumptions for Including Organic Food in the Gastronomic Offering of Istrian Agritourism

    Directory of Open Access Journals (Sweden)

    Pavlo Ružić

    2009-01-01

    The authors of this research analyze the assumptions behind including organic food in the gastronomic offering of Istrian agritourism. They assume that a gastronomic offering of Istrian agritourism that includes organic food would be more acceptable and competitive on the tourist market. The authors tested their assumptions using surveys conducted in 2007 and 2008 among tourists in Istria, asking whether they prefer organic food, whether organic food matches modern tourist trends, and whether they are willing to pay more for it.

  6. The Causes and Consequences of Differing Pensions Accounting Assumptions in UK Pension Schemes

    OpenAIRE

    Thomas, Gareth

    2006-01-01

    Anecdotal evidence and a number of empirical studies from the US suggest that the providers of corporate pension schemes may manipulate the actuarial assumptions used to estimate the value of the scheme. By manipulating the pension scheme assumptions corporations can reduce their required contribution to the scheme in order to manage their perceived performance. A sample of 92 FTSE 100 companies during the period 2002-2004 was taken and the link between corporate financial constraint and pe...

  7. Comparing Dorsal Tangential and Lateral Views of the Wrist for Detecting Dorsal Screw Penetration after Volar Plating of Distal Radius Fractures

    Directory of Open Access Journals (Sweden)

    Juan M. Giugale

    2017-01-01

    Background. The dorsal tangential (DT) view has been shown to improve the detection of dorsal screw perforation during volar distal radius fracture fixation. Here we performed a cadaveric imaging survey study to evaluate if the DT view was uniformly beneficial for all screws. Methods. Standardized placement of fixed-angle volar distal radius plates was performed on two cadavers. Fluoroscopic images depicting variable screw perforation of each of the four screw holes on the plate were generated. A 46-image survey was distributed at a large academic medical center. Respondents were asked to answer if the screw was perforating through the dorsal cortex in each image. Statistical analysis was performed using Fisher’s exact test. A p value < .05 was considered significant. Results. The DT view offered a significantly more reliable determination of dorsal screw penetration than traditional lateral imaging for the radial-most screw at all degrees of perforation and the middle two screws at 2 mm of perforation. Residents and attendings had more accurate screw readings overall using the DT view. Conclusions. The DT view is superior to traditional lateral imaging in the detection of small amounts of dorsal perforation of the radial-most three screws of a fixed-angle volar plate.

  8. Comparing Dorsal Tangential and Lateral Views of the Wrist for Detecting Dorsal Screw Penetration after Volar Plating of Distal Radius Fractures.

    Science.gov (United States)

    Giugale, Juan M; Fourman, Mitchell S; Bielicka, Deidre L; Fowler, John R

    2017-01-01

    The dorsal tangential (DT) view has been shown to improve the detection of dorsal screw perforation during volar distal radius fracture fixation. Here we performed a cadaveric imaging survey study to evaluate if the DT view was uniformly beneficial for all screws. Standardized placement of fixed-angle volar distal radius plates was performed on two cadavers. Fluoroscopic images depicting variable screw perforation of each of the four screw holes on the plate were generated. A 46-image survey was distributed at a large academic medical center. Respondents were asked to answer if the screw was perforating through the dorsal cortex in each image. Statistical analysis was performed using Fisher's exact test. A p value < .05 was considered significant. The DT view offered a significantly more reliable determination of dorsal screw penetration than traditional lateral imaging for the radial-most screw at all degrees of perforation and the middle two screws at 2 mm of perforation. Residents and attendings had more accurate screw readings overall using the DT view. The DT view is superior to traditional lateral imaging in the detection of small amounts of dorsal perforation of the radial-most three screws of a fixed-angle volar plate.

  9. Fringe-jump corrected far infrared tangential interferometer/polarimeter for a real-time density feedback control system of NSTX plasmas.

    Science.gov (United States)

    Juhn, J-W; Lee, K C; Hwang, Y S; Domier, C W; Luhmann, N C; Leblanc, B P; Mueller, D; Gates, D A; Kaita, R

    2010-10-01

    The far infrared tangential interferometer/polarimeter (FIReTIP) of the National Spherical Torus Experiment (NSTX) has been set up to provide reliable electron density signals for a real-time density feedback control system. This work consists of two main parts: suppression of the fringe jumps that have prevented the plasma density from being used in direct feedback to actuators, and the conceptual design of a density feedback control system comprising the FIReTIP, control hardware, and software that takes advantage of the NSTX plasma control system (PCS). By investigating numerous shot data after July 2009, when the new electronics were installed, fringe jumps in the FIReTIP are well characterized, and consequently the suppression algorithms work properly, as shown in comparisons with the Thomson scattering diagnostic. This approach is also applicable to signals taken at a 5 kHz sampling rate, which is a fundamental constraint imposed by the digitizers providing inputs to the PCS. The fringe jump correction algorithm, as well as safety and feedback modules, will be included as submodules either in the gas injection system category or in a new density category of the PCS.

  10. Numerical study of flow, combustion and emissions characteristics in a 625 MWe tangentially fired boiler with composition of coal 70% LRC and 30% MRC

    Science.gov (United States)

    Sa'adiyah, Devy; Bangga, Galih; Widodo, Wawan; Ikhwan, Nur

    2017-08-01

    A tangentially fired boiler is one method of achieving more complete combustion, and it is applied at the Suralaya Power Plant, Indonesia. The boiler is designed to burn low-rank coal (LRC), but at times the LRC must be blended with medium-rank coal (MRC) from another unit because of LRC shortages. Accordingly, the proper placement of LRC and MRC across the burner elevations must be investigated. The coal composition studied is 70% LRC / 30% MRC, with MRC placed at either the lower (A & C, Case I) or higher (E & G, Case II) burner elevations. The study is carried out using the Computational Fluid Dynamics (CFD) method. The simulation of the original case (100% LRC) shows good agreement with the measurement data. The results indicate that MRC is better placed at burner elevations A & C than at E & G, because Case I yields a temperature (880 K) closer to the 100% LRC case and a smaller local heating area, with temperatures of 1900-2000 K, between the upper side wall and the front wall. As for emissions, Case I produces less NOx (104 ppm) and more CO2 (15.6%), and it leaves a smaller O2 residue (5.8%) owing to more complete combustion.

  11. Shattering Man’s Fundamental Assumptions in Don DeLillo’s Falling Man

    Directory of Open Access Journals (Sweden)

    Hazim Adnan Hashim

    2016-09-01

    The present study addresses the effects of traumatic events, such as the September 11 attacks, on victims’ fundamental assumptions. These beliefs or assumptions provide individuals with expectations about the world and their sense of self-worth, and thus ground people’s sense of security, stability, and orientation. The September 11 terrorist attacks in the U.S.A. were deeply traumatic for Americans because they fundamentally changed their understanding of many aspects of life, leading many individuals to build new beliefs and assumptions about themselves and the world. Many writers have written about the human ordeals that followed this incident. Don DeLillo’s Falling Man reflects the traumatic repercussions of this disaster on Americans’ fundamental assumptions. The objective of this study is to examine the novel from the perspective of the trauma that afflicted the victims’ fundamental understandings of the world and the self. Individuals’ fundamental understandings can be changed or modified by exposure to certain types of events, such as war, terrorism, political violence, or even a sense of alienation. The Assumptive World theory of Ronnie Janoff-Bulman is used as a framework to study the traumatic experience of the characters in Falling Man. The significance of the study lies in providing a new perspective in the field of trauma that can help trauma victims adopt alternative assumptions, or reshape their previous ones, to heal from traumatic effects.

  12. Post-traumatic stress and world assumptions: the effects of religious coping.

    Science.gov (United States)

    Zukerman, Gil; Korn, Liat

    2014-12-01

    Religiosity has been shown to moderate the negative effects of traumatic event experiences. The current study was designed to examine the relationship between post-traumatic stress (PTS) following traumatic event exposure; world assumptions, defined as basic cognitive schemas regarding the world and the self; and religious coping, conceptualized as drawing on religious beliefs and practices for understanding and dealing with life stressors. This study examined 777 Israeli undergraduate students who completed several questionnaires that sampled individual world assumptions and religious coping, in addition to measuring PTS as manifested by the PTSD checklist. Results indicate that positive religious coping was significantly associated with more positive world assumptions, while negative religious coping was significantly associated with more negative world assumptions. Additionally, negative world assumptions were significantly associated with more avoidance symptoms, while reporting higher rates of traumatic event exposure was significantly associated with more hyper-arousal. These findings suggest that religious-related cognitive schemas directly affect world assumptions by creating protective shields that may prevent the negative effects of confronting an extreme negative experience.

  13. Assessing framing assumptions in quantitative health impact assessments: a housing intervention example.

    Science.gov (United States)

    Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M

    2013-09-01

    Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage. Copyright © 2013 Elsevier Ltd. All rights reserved.
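
    To make the FCM mechanics concrete, here is a minimal sketch of our own: the concepts echo the variables named above, but the weight matrix, activation rule, initial state, and clamping choice are invented for illustration and are not taken from the paper:

        import numpy as np

        def fcm_step(state, W):
            """One update of a fuzzy cognitive map: weighted influences squashed to (0, 1)."""
            return 1.0 / (1.0 + np.exp(-W @ state))

        # Hypothetical concepts: [air-tightness, ventilation, indoor air quality, mould, health]
        W = np.array([
            [0.0,  0.0, 0.0,  0.0, 0.0],
            [-0.8, 0.0, 0.0,  0.0, 0.0],   # tighter envelope -> less ventilation
            [0.0,  0.7, 0.0, -0.6, 0.0],   # ventilation improves IAQ; mould degrades it
            [0.6, -0.5, 0.0,  0.0, 0.0],   # tightness raises mould/humidity; ventilation lowers it
            [0.0,  0.0, 0.8, -0.7, 0.0],   # IAQ improves health; mould harms it
        ])

        state = np.array([1.0, 0.5, 0.5, 0.5, 0.5])   # intervention: tighten insulation
        for _ in range(30):                            # iterate toward a fixed point
            state = fcm_step(state, W)
            state[0] = 1.0                             # clamp the intervention node
        print(np.round(state, 2))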

  14. Impacts of cloud overlap assumptions on radiative budgets and heating fields in convective regions

    Science.gov (United States)

    Wang, XiaoCong; Liu, YiMin; Bao, Qing

    2016-01-01

    Impacts of cloud overlap assumptions on radiative budgets and heating fields are explored with the aid of a cloud-resolving model (CRM), which provides cloud geometry as well as cloud micro- and macro-properties. Large-scale forcing data to drive the CRM are from the TRMM Kwajalein Experiment and the Global Atmospheric Research Program's Atlantic Tropical Experiment field campaigns, during which abundant convective systems were observed. The investigated overlap assumptions include those that were traditional and widely used in the past and the one recently addressed by Hogan and Illingworth (2000), in which the vertically projected cloud fraction is expressed as a linear combination of maximum and random overlap, with the weighting coefficient depending on the so-called decorrelation length Lcf. Results show that both shortwave and longwave cloud radiative forcings (SWCF/LWCF) are significantly underestimated under maximum (MO) and maximum-random (MRO) overlap assumptions, whereas they are remarkably overestimated under the random overlap (RO) assumption in comparison with results using the CRM's inherent cloud geometry. These biases can reach as high as 100 W m^-2 for SWCF and 60 W m^-2 for LWCF. By its very nature, the general overlap (GenO) assumption exhibits encouraging performance on both SWCF and LWCF simulations, with the biases reduced almost 3-fold compared with traditional overlap assumptions. The superiority of the GenO assumption is also manifest in the simulation of shortwave and longwave radiative heating fields, which are either significantly overestimated or underestimated under traditional overlap assumptions. The study also points out the deficiency of assuming a constant Lcf in the GenO assumption. Further examination indicates that the CRM-diagnosed Lcf varies among different cloud types and tends to be stratified in the vertical. A new parameterization that takes into account the variation of Lcf in the vertical well reproduces such a relationship and
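
    The GenO combination rule lends itself to a one-line implementation. The sketch below is ours, following the Hogan and Illingworth (2000) formulation referenced above; the exponential weighting and the sample numbers are assumptions for illustration:

        import numpy as np

        def combined_cloud_fraction(c1, c2, dz, l_cf):
            """Projected cover of two layers with fractions c1, c2 separated by dz."""
            alpha = np.exp(-dz / l_cf)           # overlap parameter in [0, 1]
            c_max = max(c1, c2)                  # maximum overlap
            c_rand = c1 + c2 - c1 * c2           # random overlap
            return alpha * c_max + (1.0 - alpha) * c_rand

        # Small separation relative to Lcf -> close to maximum overlap
        print(combined_cloud_fraction(0.3, 0.4, dz=500.0, l_cf=2000.0))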

  15. Detecting and accounting for violations of the constancy assumption in non-inferiority clinical trials.

    Science.gov (United States)

    Koopmeiners, Joseph S; Hobbs, Brian P

    2018-05-01

    Randomized, placebo-controlled clinical trials are the gold standard for evaluating a novel therapeutic agent. In some instances, it may not be considered ethical or desirable to complete a placebo-controlled clinical trial and, instead, the placebo is replaced by an active comparator with the objective of showing either superiority or non-inferiority to the active comparator. In a non-inferiority trial, the experimental treatment is considered non-inferior if it retains a pre-specified proportion of the effect of the active comparator as represented by the non-inferiority margin. A key assumption required for valid inference in the non-inferiority setting is the constancy assumption, which requires that the effect of the active comparator in the non-inferiority trial is consistent with the effect that was observed in previous trials. It has been shown that violations of the constancy assumption can result in a dramatic increase in the rate of incorrectly concluding non-inferiority in the presence of ineffective or even harmful treatment. In this paper, we illustrate how Bayesian hierarchical modeling can be used to facilitate multi-source smoothing of the data from the current trial with the data from historical studies, enabling direct probabilistic evaluation of the constancy assumption. We then show how this result can be used to adapt the non-inferiority margin when the constancy assumption is violated and present simulation results illustrating that our method controls the type-I error rate when the constancy assumption is violated, while retaining the power of the standard approach when the constancy assumption holds. We illustrate our adaptive procedure using a non-inferiority trial of raltegravir, an antiretroviral drug for the treatment of HIV.
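
    As a rough, non-Bayesian illustration of the constancy check (the paper's actual approach is a Bayesian hierarchical model; this sketch of ours merely precision-weights hypothetical historical estimates and z-tests the current control effect against them):

        import numpy as np

        hist_effects = np.array([0.65, 0.70, 0.60])    # hypothetical historical effects
        hist_se = np.array([0.05, 0.06, 0.05])         # and their standard errors
        cur_effect, cur_se = 0.45, 0.07                # hypothetical current-trial estimate

        # Precision-weighted historical mean (fixed-effect approximation)
        w = 1.0 / hist_se**2
        mu_hist = np.sum(w * hist_effects) / np.sum(w)
        se_hist = np.sqrt(1.0 / np.sum(w))

        # Does the current-trial control effect look consistent with history?
        z = (cur_effect - mu_hist) / np.hypot(cur_se, se_hist)
        print(f"historical mean {mu_hist:.3f}, z for constancy: {z:.2f}")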

  16. World assumptions, posttraumatic stress and quality of life after a natural disaster: A longitudinal study

    Science.gov (United States)

    2012-01-01

    Background: Changes in world assumptions are a fundamental concept within theories that explain posttraumatic stress disorder. The objective of the present study was to gain a greater understanding of how changes in world assumptions are related to quality of life and posttraumatic stress symptoms after a natural disaster. Methods: A longitudinal study of 574 Norwegian adults who survived the Southeast Asian tsunami in 2004 was undertaken. Multilevel analyses were used to identify which factors at six months post-tsunami predicted quality of life and posttraumatic stress symptoms two years post-tsunami. Results: Good quality of life and posttraumatic stress symptoms were negatively related. However, major differences in the predictors of these outcomes were found. Females reported significantly higher quality of life and more posttraumatic stress than men. The association between level of exposure to the tsunami and quality of life seemed to be mediated by posttraumatic stress. Negative perceived changes in the assumption “the world is just” were related to adverse outcomes in both quality of life and posttraumatic stress. Positive perceived changes in the assumptions “life is meaningful” and “feeling that I am a valuable human” were associated with higher levels of quality of life but not with posttraumatic stress. Conclusions: Quality of life and posttraumatic stress symptoms demonstrate differences in their etiology. World assumptions may be less specifically related to posttraumatic stress than has been postulated in some cognitive theories. PMID:22742447

  17. Quantification of Contralateral Breast Dose and Risk Estimate of Radiation-Induced Contralateral Breast Cancer Among Young Women Using Tangential Fields and Different Modes of Breathing

    Energy Technology Data Exchange (ETDEWEB)

    Zurl, Brigitte, E-mail: brigitte.zurl@klinikum-graz.at [Department of Therapeutic Radiology and Oncology, Medical University of Graz (Austria); Stranzl, Heidi; Winkler, Peter; Kapp, Karin Sigrid [Department of Therapeutic Radiology and Oncology, Medical University of Graz (Austria)

    2013-02-01

    Purpose: Whole breast irradiation with deep-inspiration breath-hold (DIBH) technique among left-sided breast cancer patients significantly reduces cardiac irradiation; however, a potential disadvantage is increased incidental irradiation of the contralateral breast. Methods and Materials: Contralateral breast dose (CBD) was calculated by comparing 400 treatment plans of 200 left-sided breast cancer patients whose tangential fields had been planned on gated and nongated CT data sets. Various anatomic and field parameters were analyzed for their impact on CBD. For a subgroup of patients (aged ≤45 years) second cancer risk in the contralateral breast (CB) was modeled by applying the linear quadratic model, compound models, and compound models considering dose-volume information (DVH). Results: The mean CBD was significantly higher in DIBH with 0.69 Gy compared with 0.65 Gy in normal breathing (P=.01). The greatest impact on CBD was due to a shift of the inner field margin toward the CB in DIBH (mean 0.4 cm; range, 0-2), followed by field size in magnitude. Calculation with different risk models for CBC revealed values of excess relative risk/Gy ranging from 0.48-0.65 vs 0.46-0.61 for DIBH vs normal breathing, respectively. Conclusion: Contralateral breast dose, although within a low dose range, was mildly but significantly increased in 200 treatment plans generated under gated conditions, predominately due to a shift in the medial field margin. Risk modeling for CBC among women aged ≤45 years also pointed to a higher risk when comparing DIBH with normal breathing. This risk, however, was substantially lower in the model considering DVH information. We think that clinical decisions should not be affected by this small increase in CBD with DIBH because DIBH is effective in reducing the dose to the heart in all patients.

  18. Transient gibberellin application promotes Arabidopsis thaliana hypocotyl cell elongation without maintaining transverse orientation of microtubules on the outer tangential wall of epidermal cells

    KAUST Repository

    Sauret-Güeto, Susanna

    2011-11-25

    The phytohormone gibberellin (GA) promotes plant growth by stimulating cellular expansion. Whilst it is known that GA acts by opposing the growth-repressing effects of DELLA proteins, it is not known how these events promote cellular expansion. Here we present a time-lapse analysis of the effects of a single pulse of GA on the growth of Arabidopsis hypocotyls. Our analyses permit kinetic resolution of the transient growth effects of GA on expanding cells. We show that pulsed application of GA to the relatively slowly growing cells of the unexpanded light-grown Arabidopsis hypocotyl results in a transient burst of anisotropic cellular growth. This burst, and the subsequent restoration of initial cellular elongation rates, occurred respectively following the degradation and subsequent reappearance of a GFP-tagged DELLA (GFP-RGA). In addition, we used a GFP-tagged α-tubulin 6 (GFP-TUA6) to visualise the behaviour of microtubules (MTs) on the outer tangential wall (OTW) of epidermal cells. In contrast to some current hypotheses concerning the effect of GA on MTs, we show that the GA-induced boost of hypocotyl cell elongation rate is not dependent upon the maintenance of transverse orientation of the OTW MTs. This confirms that transverse alignment of outer face MTs is not necessary to maintain rapid elongation rates of light-grown hypocotyls. Together with future studies on MT dynamics in other faces of epidermal cells and in cells deeper within the hypocotyl, our observations advance understanding of the mechanisms by which GA promotes plant cell and organ growth. © 2011 Blackwell Publishing Ltd.

  19. Some Finite Sample Properties and Assumptions of Methods for Determining Treatment Effects

    DEFF Research Database (Denmark)

    Petrovski, Erik

    2016-01-01

    There is a growing interest in determining the exact effects of policies, programs, and other social interventions within the social sciences. In order to do so, researchers have a variety of econometric techniques at their disposal. However, the choice between them may be obscure. In this paper, I will compare assumptions and properties of select methods for determining treatment effects with Monte Carlo simulation. The comparison will highlight the pros and cons of using one method over another and the assumptions that researchers need to make for the method they choose. To limit the scope of this paper, three popular methods for determining treatment effects were chosen: ordinary least squares regression, propensity score matching, and inverse probability weighting. The assumptions and properties tested across these methods are: unconfoundedness, differences in average treatment effects......

  20. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    Science.gov (United States)

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that rates of decline in economic depreciation may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  1. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    Science.gov (United States)

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research-in particular for the evaluation of health care practice, programs, and policy-because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
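
    As one concrete instance of these designs, a difference-in-differences estimate reduces to a single interaction coefficient. The sketch below is ours, on synthetic data with invented effect sizes, not an example from the paper:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(11)
        n = 2000
        treated = rng.integers(0, 2, n)               # group indicator
        post = rng.integers(0, 2, n)                  # before/after indicator
        # Outcome: group gap + common time trend + a true treatment effect of 1.5
        y = 2.0 * treated + 1.0 * post + 1.5 * treated * post + rng.normal(size=n)

        df = pd.DataFrame({"y": y, "treated": treated, "post": post})
        fit = smf.ols("y ~ treated * post", data=df).fit()
        # Recovers ~1.5, the causal effect, under the parallel-trends assumption
        print(fit.params["treated:post"])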

  2. Technoeconomic assumptions adopted for the development of a long-term electricity supply model for Cyprus.

    Science.gov (United States)

    Taliotis, Constantinos; Taibi, Emanuele; Howells, Mark; Rogner, Holger; Bazilian, Morgan; Welsch, Manuel

    2017-10-01

    The generation mix of Cyprus has been dominated by oil products for decades. In order to conform with European Union and international legislation, a transformation of the supply system is called for. Energy system models can facilitate energy planning into the future, but a large volume of data is required to populate such models. The present data article provides information on key modelling assumptions and input data adopted with the aim of representing the electricity supply system of Cyprus in a separate research article. Data in regards to renewable energy technoeconomic characteristics and investment cost projections, fossil fuel price projections, storage technology characteristics and system operation assumptions are described in this article.

  3. Clarification of assumptions in the relationship between the Bayes Decision Rule and the whitened cosine similarity measure.

    Science.gov (United States)

    Liu, Chengjun

    2008-06-01

    This paper first clarifies Assumption 3 (which omits a constant) and Assumption 4 (where the whitened pattern vectors refer to the whitened means) in the paper "The Bayes Decision Rule Induced Similarity Measures" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1086-1090, 2007), and then provides examples to show that the assumptions, after the clarification, are consistent.

  4. What Mathematics Education Can Learn from Art: The Assumptions, Values, and Vision of Mathematics Education

    Science.gov (United States)

    Dietiker, Leslie

    2015-01-01

    Elliot Eisner proposed that educational challenges can be met by applying an artful lens. This article draws from Eisner's proposal to consider the assumptions, values, and vision of mathematics education by theorizing mathematics curriculum as an art form. By conceptualizing mathematics curriculum (both in written and enacted forms) as stories…

  5. Letters: Milk and Mortality : Study used wrong assumption about galactose content of fermented dairy products

    NARCIS (Netherlands)

    Hettinga, K.A.

    2014-01-01

    Michaëlsson and colleagues’ proposed mechanism for the effect of milk intake on the risk of mortality and fractures is based on the assumption that fermented dairy products (which had the opposite effects to those of non-fermented milk) are free of galactose.1 For most fermented dairy products,

  6. 76 FR 29675 - Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian Country

    Science.gov (United States)

    2011-05-23

    ... Part 50 RIN 1105-AB38 Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian... State criminal jurisdiction under Public Law 280 (18 U.S.C. 1162(a)) to request that the United States accept concurrent criminal jurisdiction within the tribe's Indian country, and for the Attorney General...

  7. Mutual assumptions and facts about nondisclosure among clinical supervisors and students in group supervision

    DEFF Research Database (Denmark)

    Nielsen, Geir Høstmark; Skjerve, Jan; Jacobsen, Claus Haugaard

    2009-01-01

    In the two preceding papers of this issue of Nordic Psychology the authors report findings from a study of nondisclosure among student therapists and clinical supervisors. The findings were reported separately for each group. In this article, the two sets of findings are held together and compared......, so as to draw a picture of mutual assumptions and facts about nondisclosure among students and supervisors....

  8. Sensitivity Analysis and Bounding of Causal Effects with Alternative Identifying Assumptions

    Science.gov (United States)

    Jo, Booil; Vinokur, Amiram D.

    2011-01-01

    When identification of causal effects relies on untestable assumptions regarding nonidentified parameters, sensitivity of causal effect estimates is often questioned. For proper interpretation of causal effect estimates in this situation, deriving bounds on causal parameters or exploring the sensitivity of estimates to scientifically plausible…

  9. Metaphorical Mirror: Reflecting on Our Personal Pursuits to Discover and Challenge Our Teaching Practice Assumptions

    Science.gov (United States)

    Wagenheim, Gary; Clark, Robert; Crispo, Alexander W.

    2009-01-01

    The goal of this paper is to examine how our personal pursuits--hobbies, activities, interests, and sports--can serve as a metaphor to reflect who we are in our teaching practice. This paper explores the notion that our favorite personal pursuits serve as metaphorical mirrors to reveal deeper assumptions we hold about the skills, values, and…

  10. 76 FR 21252 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Paying Benefits

    Science.gov (United States)

    2011-04-15

    ... covered by title IV of the Employee Retirement Income Security Act of 1974. DATES: Effective May 1, 2011...--for paying plan benefits under terminating single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. PBGC uses the interest assumptions in Appendix B to Part 4022 to...

  11. 76 FR 81966 - Agency Information Collection Activities; Proposed Collection; Comments Requested; Assumption of...

    Science.gov (United States)

    2011-12-29

    ... Indian country is subject to State criminal jurisdiction under Public Law 280 (18 U.S.C. 1162(a)) to... Collection; Comments Requested; Assumption of Concurrent Federal Criminal Jurisdiction in Certain Areas of Indian Country ACTION: 60-Day notice of information collection under review. The Department of Justice...

  12. 77 FR 75549 - Allocation of Assets in Single-Employer Plans; Interest Assumptions for Valuing Benefits

    Science.gov (United States)

    2012-12-21

    ... Plans to prescribe interest assumptions for valuation dates in the first quarter of 2013. The interest... plan benefits under terminating single-employer plans covered by title IV of the Employee Retirement... regulation are updated quarterly and are intended to reflect current conditions in the financial and annuity...

  13. Common-Sense Chemistry: The Use of Assumptions and Heuristics in Problem Solving

    Science.gov (United States)

    Maeyer, Jenine Rachel

    2013-01-01

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build…

  14. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are used and documented in hundreds of peer-reviewed publications each year, and likely applied even more often in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions versus using improved data and enhanced assumptions on model outcomes and, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  15. A computational model to investigate assumptions in the headturn preference procedure

    NARCIS (Netherlands)

    Bergmann, C.; Bosch, L.F.M. ten; Fikkert, J.P.M.; Boves, L.W.J.

    2013-01-01

    In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP): (1) behavioral differences originate in different processing; (2)

  16. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval and crown archetypes are typically assumed to contain a turbid

  17. Measuring oblique incidence sound absorption using a local plane wave assumption

    NARCIS (Netherlands)

    Kuipers, E.R.; Wijnant, Ysbrand H.; de Boer, Andries

    2014-01-01

    In this paper a method for the measurement of the oblique incidence sound absorption coefficient is presented. It is based on a local field assumption, in which the acoustic field is locally approximated by one incident- and one specularly reflected plane wave. The amplitudes of these waves can be

  18. Vocational Didactics: Core Assumptions and Approaches from Denmark, Germany, Norway, Spain and Sweden

    Science.gov (United States)

    Gessler, Michael; Moreno Herrera, Lázaro

    2015-01-01

    The design of vocational didactics has to meet special requirements. Six core assumptions are identified: outcome orientation, cultural-historical embedding, horizontal structure, vertical structure, temporal structure, and the changing nature of work. Different approaches and discussions from school-based systems (Spain and Sweden) and dual…

  19. A method for the analysis of assumptions in model-based environmental assessments

    NARCIS (Netherlands)

    Kloprogge, P.; van der Sluijs, J.P.; Petersen, A.C.

    2011-01-01

    ... make many assumptions. This inevitably involves – to some degree – subjective judgements by the analysts. Although the potential value-ladenness of model-based assessments has been extensively problematized in the literature, this has not so far led to a systematic strategy for analyzing this

  20. What's Love Got to Do with It? Rethinking Common Sense Assumptions

    Science.gov (United States)

    Trachman, Matthew; Bluestone, Cheryl

    2005-01-01

    One of the most basic tasks in introductory social science classes is to get students to reexamine their common sense assumptions concerning human behavior. This article introduces a shared assignment developed for a learning community that paired an introductory sociology and psychology class. The assignment challenges students to rethink the…

  1. How much confidence do we need in animal experiments? Statistical assumptions in sample size estimation.

    Science.gov (United States)

    Richter, Veronika; Muche, Rainer; Mayer, Benjamin

    2018-01-26

    Statistical sample size calculation is a crucial part of planning nonhuman animal experiments in basic medical research. The 3R principle intends to reduce the number of animals to a sufficient minimum. When planning experiments, one may consider the impact of less rigorous assumptions during sample size determination as it might result in a considerable reduction in the number of required animals. Sample size calculations conducted for 111 biometrical reports were repeated. The original effect size assumptions remained unchanged, but the basic properties (type 1 error 5%, two-sided hypothesis, 80% power) were varied. The analyses showed that a less rigorous assumption on the type 1 error level (one-sided 5% instead of two-sided 5%) was associated with a savings potential of 14% regarding the original number of required animals. Animal experiments are predominantly exploratory studies. In light of the demonstrated potential reduction in the numbers of required animals, researchers should discuss whether less rigorous assumptions during the process of sample size calculation may be reasonable for the purpose of optimizing the number of animals in experiments according to the 3R principle.
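
    A rough numeric sketch of the trade-off described above, assuming the standard normal-approximation formula for a two-sample comparison of means; the effect size and all numbers are hypothetical and are not taken from the 111 biometrical reports.

```python
# Sketch: effect of a one-sided vs. two-sided 5% type 1 error level on the
# required sample size (normal approximation, two-sample comparison of
# means; the effect size below is a hypothetical choice).
from scipy.stats import norm

def n_per_group(effect_size, alpha, power, two_sided=True):
    """Approximate number of animals per group."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

d = 1.0  # assumed standardized effect size (Cohen's d)
n_two = n_per_group(d, 0.05, 0.80, two_sided=True)   # ~15.7 per group
n_one = n_per_group(d, 0.05, 0.80, two_sided=False)  # ~12.4 per group
print(f"two-sided: {n_two:.1f}, one-sided: {n_one:.1f}, "
      f"reduction: {100 * (1 - n_one / n_two):.0f}%")
```

    This crude z-approximation gives a reduction of roughly 21% regardless of effect size; the 14% average reported above came from exact recalculation of 111 heterogeneous designs, so the two figures need not coincide.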

  2. Is a "Complex" Task Really Complex? Validating the Assumption of Cognitive Task Complexity

    Science.gov (United States)

    Sasayama, Shoko

    2016-01-01

    In research on task-based learning and teaching, it has traditionally been assumed that differing degrees of cognitive task complexity can be inferred through task design and/or observations of differing qualities in linguistic production elicited by second language (L2) communication tasks. Without validating this assumption, however, it is…

  3. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  4. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klauw, B.; Koning, R.H.

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  5. Post Stereotypes: Deconstructing Racial Assumptions and Biases through Visual Culture and Confrontational Pedagogy

    Science.gov (United States)

    Jung, Yuha

    2015-01-01

    The Post Stereotypes project embodies confrontational pedagogy and involves postcard artmaking designed to both solicit expression of and deconstruct students' racial, ethnic, and cultural stereotypes and assumptions. As part of the Cultural Diversity in American Art course, students created postcard art that visually represented their personal…

  6. Tangential Field Radiotherapy for Breast Cancer—The Dose to the Heart and Heart Subvolumes: What Structures Must Be Contoured in Future Clinical Trials?

    Directory of Open Access Journals (Sweden)

    Marciana Nona Duma

    2017-06-01

    Full Text Available Background and purpose: The aim of the present study was to evaluate whether it is feasible for experienced radiation oncologists to visually sort out patients with a large dose to the heart, which would facilitate large retrospective data evaluations, and, in case of an insufficient visual assessment, to define which structures should be contoured and which can be skipped because their dose can be derived from other easily contoured structures in future clinical trials. Material and methods: Planning CTs of left-sided breast cancer patients treated with 3D-conformal radiotherapy by tangential fields were visually divided into two groups: with an estimated high dose (HiD) and with an estimated low dose (LoD) to the heart. For 46 patients (22 HiD and 24 LoD), the heart, the left ventricle, the left anterior descending artery (LAD), the right coronary artery, and the ramus circumflexus were contoured. A helper structure (HS) around the LAD was generated in order to consider whether contouring uncertainties of the LAD could be acceptable. We analyzed the mean dose (Dmean), the maximum dose, the V10, V20, V30, V40, and the length of the LAD that received 20 and 40 Gy. Results: The two groups had a significantly different Dmean of the heart (p < 0.001). The average Dmean to the heart was 4.0 ± 1.3 Gy (HiD) and 2.3 ± 0.8 Gy (LoD). The average Dmean to the LAD was 26.2 ± 7.4 Gy (HiD) and 13.0 ± 7.5 Gy (LoD), with a very strong positive correlation between Dmean LAD and Dmean HS in both groups. The Dmean heart is not a good surrogate parameter for the dose to the LAD since it might underestimate clinically significant doses in 1/3 of the patients in the LoD group. Conclusion: A visual assessment of the dose to the heart could be reliable if performed by experienced radiation oncologists. However, the Dmean heart is not always a good surrogate parameter for the dose to the LAD or for the Dmean to the left ventricle. Thus, if specific late toxicities are

  7. Common-sense chemistry: The use of assumptions and heuristics in problem solving

    Science.gov (United States)

    Maeyer, Jenine Rachel

    Students experience difficulty learning and understanding chemistry at higher levels, often because of cognitive biases stemming from common sense reasoning constraints. These constraints can be divided into two categories: assumptions (beliefs held about the world around us) and heuristics (the reasoning strategies or rules used to build predictions and make decisions). A better understanding and characterization of these constraints are of central importance in the development of curriculum and teaching strategies that better support student learning in science. It was the overall goal of this thesis to investigate student reasoning in chemistry, specifically to better understand and characterize the assumptions and heuristics used by undergraduate chemistry students. To achieve this, two mixed-methods studies were conducted, each with quantitative data collected using a questionnaire and qualitative data gathered through semi-structured interviews. The first project investigated the reasoning heuristics used when ranking chemical substances based on the relative value of a physical or chemical property, while the second study characterized the assumptions and heuristics used when making predictions about the relative likelihood of different types of chemical processes. Our results revealed that heuristics for cue selection and decision-making played a significant role in the construction of answers during the interviews. Many study participants relied frequently on one or more of the following heuristics to make their decisions: recognition, representativeness, one-reason decision-making, and arbitrary trend. These heuristics allowed students to generate answers in the absence of requisite knowledge, but often led students astray. When characterizing assumptions, our results indicate that students relied on intuitive, spurious, and valid assumptions about the nature of chemical substances and processes in building their responses. In particular, many

  8. Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies

    Energy Technology Data Exchange (ETDEWEB)

    Stoll, Brady [National Renewable Energy Lab. (NREL), Golden, CO (United States); Brinkman, Gregory [National Renewable Energy Lab. (NREL), Golden, CO (United States); Townsend, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bloom, Aaron [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-01

    Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics if a four-hour-ahead commitment step is included before the dispatch step, and the impact of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in the number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch produced no difference in coal or gas CC units for either start metric, while gas CT units had a 5% increase in the number of starts and a 2% increase in the average on-time per start. The number of ramps decreased up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0
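
    As a toy illustration of the LP-versus-MIP distinction above, the sketch below relaxes a single binary commitment variable; all generator names, costs, and limits are invented, and this is not the production cost model used in the study.

```python
# Toy commitment problem (hypothetical data): cheap generator G1 has a
# 50 MW minimum stable level; flexible G2 is expensive. Demand is 30 MW.
# The LP relaxation commits G1 "fractionally"; the MIP cannot.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([10.0, 1.0, 5.0])        # [G1 commitment cost, G1 $/MW, G2 $/MW]
constraints = [
    LinearConstraint([[0, 1, 1]], 30, 30),        # p1 + p2 = demand
    LinearConstraint([[-50, 1, 0]], 0, np.inf),   # p1 >= 50*u1 when committed
    LinearConstraint([[-100, 1, 0]], -np.inf, 0), # p1 <= 100*u1
]
bounds = Bounds([0, 0, 0], [1, 100, 100])         # u1 in [0, 1]

for name, integrality in [("LP relaxation", [0, 0, 0]), ("MIP", [1, 0, 0])]:
    res = milp(c=c, constraints=constraints, integrality=integrality,
               bounds=bounds)
    u1, p1, p2 = res.x
    print(f"{name}: u1={u1:.2f}, p1={p1:.1f} MW, p2={p2:.1f} MW, "
          f"cost={res.fun:.1f}")
# The LP commits G1 at u1=0.6 and serves demand cheaply; the MIP must
# leave G1 off and serve all 30 MW from the expensive unit.
```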

  9. CHILDREN'S EDUCATION IN THE REGULAR NATIONAL BASIS: ASSUMPTIONS AND INTERFACES WITH PHYSICAL EDUCATION

    Directory of Open Access Journals (Sweden)

    André da Silva Mello

    2016-09-01

    Full Text Available This paper aims at discussing the organization of Children's Education within the Regular Curricular National Basis (BNCC), focusing on the permanencies and advances made in relation to the preceding documents, and analyzing the presence of Physical Education in Children's Education from the assumptions that guide the Base, in interface with research about pedagogical experiences with this field of knowledge. To do so, it carries out a documental-bibliographic analysis, using as sources the BNCC, the National Curricular Referential for Children's Education, the National Curricular Guidelines for Children's Education, and academic-scientific productions from the Physical Education area that approach Children's Education. In the analysis process, the work establishes categories which allow the interlocution among the different sources used in this study. The data analyzed offer indications that the assumptions present in the BNCC dialogue, though not explicitly, with the movements of the curricular component and with the Physical Education academic-scientific production regarding Children's Education.

  10. Random Regression Models Based On The Skew Elliptically Contoured Distribution Assumptions With Applications To Longitudinal Data *

    Science.gov (United States)

    Zheng, Shimin; Rao, Uma; Bartolucci, Alfred A.; Singh, Karan P.

    2011-01-01

    Bartolucci et al. (2003) extended the distribution assumption from the normal (Lyles et al., 2000) to the elliptically contoured distribution (ECD) for random regression models used in the analysis of longitudinal data accounting for both undetectable values and informative drop-outs. In this paper, the random regression models are constructed on the multivariate skew ECD. A real data set is used to illustrate that the skew ECDs can fit some unimodal continuous data better than the Gaussian distributions or more general continuous symmetric distributions when the symmetric distribution assumption is violated. Also, a simulation study is done to illustrate the model fitness for a variety of skew ECDs. The software used is SAS/STAT, V. 9.13. PMID:21637734

  11. Investigation of assumptions underlying current safety guidelines on EM-induced nerve stimulation

    Science.gov (United States)

    Neufeld, Esra; Vogiatzis Oikonomidis, Ioannis; Iacono, Maria Ida; Angelone, Leonardo M.; Kainz, Wolfgang; Kuster, Niels

    2016-06-01

    An intricate network of a variety of nerves is embedded within the complex anatomy of the human body. Although nerves are shielded from unwanted excitation, they can still be stimulated by external electromagnetic sources that induce strongly non-uniform field distributions. Current exposure safety standards designed to limit unwanted nerve stimulation are based on a series of explicit and implicit assumptions and simplifications. This paper demonstrates the applicability of functionalized anatomical phantoms with integrated coupled electromagnetic and neuronal dynamics solvers for investigating the impact of magnetic resonance exposure on nerve excitation within the full complexity of the human anatomy. The impact of neuronal dynamics models, temperature and local hot-spots, nerve trajectory and potential smoothing, anatomical inhomogeneity, and pulse duration on nerve stimulation was evaluated. As a result, multiple assumptions underlying current safety standards are questioned. It is demonstrated that coupled EM-neuronal dynamics modeling involving realistic anatomies is valuable to establish conservative safety criteria.

  12. Agenda dissonance: immigrant Hispanic women's and providers' assumptions and expectations for menopause healthcare.

    Science.gov (United States)

    Esposito, Noreen

    2005-02-01

    This focus group study examined immigrant Hispanic women's and providers' assumptions about and expectations of healthcare encounters in the context of menopause. Four groups of immigrant women from Central America and one group of healthcare providers were interviewed in Spanish and English, respectively. The women wanted provider-initiated, individualized anticipatory guidance about menopause, acknowledgement of their symptoms, and mainstream medical treatment for disruptive symptoms. Providers believed that menopause was an unimportant health issue for immigrant women and was overshadowed by concerns about high-risk medical problems, such as diabetes, heart disease and HIV prevention. The women expected a healthcare encounter to be patient centered, social, and complete in itself. Providers expected an encounter to be businesslike and one part of multiple visit care. Language and lack of time were barriers cited by all. Dissonance between patient-provider assumptions and expectations around issues of healthcare leads to missed opportunities for care.

  13. Fair-sampling assumption is not necessary for testing local realism

    International Nuclear Information System (INIS)

    Berry, Dominic W.; Jeong, Hyunseok; Stobinska, Magdalena; Ralph, Timothy C.

    2010-01-01

    Almost all Bell inequality experiments to date have used postselection and therefore relied on the fair sampling assumption for their interpretation. The standard form of the fair sampling assumption is that the loss is independent of the measurement settings, so the ensemble of detected systems provides a fair statistical sample of the total ensemble. This is often assumed to be needed to interpret Bell inequality experiments as ruling out hidden-variable theories. Here we show that it is not necessary; the loss can depend on measurement settings, provided the detection efficiency factorizes as a function of the measurement settings and any hidden variable. This condition implies that Tsirelson's bound must be satisfied for entangled states. On the other hand, we show that it is possible for Tsirelson's bound to be violated while the Clauser-Horne-Shimony-Holt (CHSH)-Bell inequality still holds for unentangled states, and present an experimentally feasible example.

  14. Load assumption for fatigue design of structures and components counting methods, safety aspects, practical application

    CERN Document Server

    Köhler, Michael; Pötter, Kurt; Zenner, Harald

    2017-01-01

    Understanding the fatigue behaviour of structural components under variable load amplitude is an essential prerequisite for safe and reliable light-weight design. For designing and dimensioning, the expected stress (load) is compared with the capacity to withstand loads (fatigue strength). In this process, the safety necessary for each particular application must be ensured. A prerequisite for ensuring the required fatigue strength is a reliable load assumption. The authors describe the transformation of the stress- and load-time functions which have been measured under operational conditions to spectra or matrices with the application of counting methods. The aspects which must be considered for ensuring a reliable load assumption for designing and dimensioning are discussed in detail. Furthermore, the theoretical background for estimating the fatigue life of structural components is explained, and the procedures are discussed for numerous applications in practice. One of the prime intentions of the authors ...

  15. The Sexual Victimization of Men in America: New Data Challenge Old Assumptions

    Science.gov (United States)

    Meyer, Ilan H.

    2014-01-01

    We assessed 12-month prevalence and incidence data on sexual victimization in 5 federal surveys that the Bureau of Justice Statistics, the Centers for Disease Control and Prevention, and the Federal Bureau of Investigation conducted independently in 2010 through 2012. We used these data to examine the prevailing assumption that men rarely experience sexual victimization. We concluded that federal surveys detect a high prevalence of sexual victimization among men—in many circumstances similar to the prevalence found among women. We identified factors that perpetuate misperceptions about men’s sexual victimization: reliance on traditional gender stereotypes, outdated and inconsistent definitions, and methodological sampling biases that exclude inmates. We recommend changes that move beyond regressive gender assumptions, which can harm both women and men. PMID:24825225

  16. Scenario Analysis In The Calculation Of Investment Efficiency–The Problem Of Formulating Assumptions

    Directory of Open Access Journals (Sweden)

    Dittmann Iwona

    2015-09-01

    Full Text Available This article concerns the problem of formulating assumptions in scenario analysis for investments which consist of renting out an apartment. The article attempts to indicate the foundations for the formulation of assumptions on the basis of observed retrospective regularities. It includes theoretical considerations regarding scenario design, as well as the results of studies on how the quantities that determined, or were likely to approximate, the values of the individual explanatory variables for a chosen measure of investment profitability (MIRRFCFE) had formed in the past. The dynamics of, and the correlation between, the variables were studied. The research was based on quarterly data from local residential real estate markets in Poland (in the six largest cities) in the years 2006–2014, as well as on data from the financial market.

  17. Belonging and women entrepreneurs: women's navigation of gendered assumptions in entrepreneurial practice

    OpenAIRE

    Stead, Valerie

    2017-01-01

    This article is novel in proposing belonging as a mediatory and explanatory concept to better understand the relationship between women entrepreneurs and socially embedded gendered assumptions in entrepreneurial practice. Drawing on social theories of belonging and extant entrepreneurial literature, the article explores what belonging involves for women in the entrepreneurial context to offer a conceptualisation of entrepreneurial belonging as relational, dynamic, gendered and in continual ac...

  18. Heterosexual assumptions in verbal and non-verbal communication in nursing.

    Science.gov (United States)

    Röndahl, Gerd; Innala, Sune; Carlsson, Marianne

    2006-11-01

    This paper reports a study of what lesbian women and gay men had to say, as patients and as partners, about their experiences of nursing in hospital care, and what they regarded as important to communicate about homosexuality and nursing. The social life of heterosexual cultures is based on the assumption that all people are heterosexual, thereby making homosexuality socially invisible. Nurses may assume that all patients and significant others are heterosexual, and these heteronormative assumptions may lead to poor communication that affects nursing quality by leading nurses to ask the wrong questions and make incorrect judgements. A qualitative interview study was carried out in the spring of 2004. Seventeen women and 10 men ranging in age from 23 to 65 years from different parts of Sweden participated. They described 46 experiences as patients and 31 as partners. Heteronormativity was communicated in waiting rooms, in patient documents and when registering for admission, and nursing staff sometimes showed perplexity when an informant deviated from this heteronormative assumption. Informants had often met nursing staff who showed fear of behaving incorrectly, which could lead to a sense of insecurity, thereby impeding further communication. As partners of gay patients, informants felt that they had to deal with heterosexual assumptions more than they did when they were patients, and the consequences were feelings of not being accepted as a 'true' relative, of exclusion and neglect. Almost all participants offered recommendations about how nursing staff could facilitate communication. Heterosexual norms communicated unconsciously by nursing staff contribute to ambivalent attitudes and feelings of insecurity that prevent communication and easily lead to misconceptions. Educational and management interventions, as well as increased communication, could make gay people more visible and thereby encourage openness and awareness by hospital staff of the norms that they

  19. Rethinking the going concern assumption as a pre–condition for accounting measurement

    OpenAIRE

    Saratiel Wedzerai Musvoto; Gouws, Daan G

    2011-01-01

    This study compares the principles of the going concern concept against the principles of representational measurement to determine if it is possible to establish foundations of accounting measurement with the going concern concept as a precondition. Representational measurement theory is a theory that establishes measurement in social scientific disciplines such as accounting. The going concern assumption is prescribed as one of the preconditions for measuring the attributes of the elements ...

  20. Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions

    Science.gov (United States)

    Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.

    2015-01-01

    Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.
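
    For intuition, a back-of-the-envelope sketch of the core mark-resight logic with invented counts; the sophisticated models cited above additionally handle heterogeneous sighting probabilities, which this naive ratio estimator ignores.

```python
# Minimal mark-resight sketch (hypothetical counts): with a known number
# of marked animals and camera-trap sightings classified as marked or
# unmarked, the simplest estimator scales the number marked by the ratio
# of total to marked sightings.
M = 20                      # animals carrying marks (known)
sightings_marked = 55       # camera-trap detections of marked animals
sightings_total = 230       # all detections (marked + unmarked)

N_hat = M * sightings_total / sightings_marked
print(f"estimated population size: {N_hat:.0f}")   # ~84

# The models evaluated above (e.g., McClintock et al. 2009b) generalize
# this idea by modeling individual heterogeneity in sighting probability.
```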

  1. Clinical review: Moral assumptions and the process of organ donation in the intensive care unit

    OpenAIRE

    Streat, Stephen

    2004-01-01

    The objective of the present article is to review moral assumptions underlying organ donation in the intensive care unit. Data sources used include personal experience, and a Medline search and a non-Medline search of relevant English-language literature. The study selection included articles concerning organ donation. All data were extracted and analysed by the author. In terms of data synthesis, a rational, utilitarian moral perspective dominates, and has captured and circumscribed, the lan...

  2. Testing legal assumptions regarding the effects of dancer nudity and proximity to patron on erotic expression.

    Science.gov (United States)

    Linz, D; Blumenthal, E; Donnerstein, E; Kunkel, D; Shafer, B J; Lichtenstein, A

    2000-10-01

    A field experiment was conducted in order to test the assumptions by the Supreme Court in Barnes v. Glen Theatre, Inc. (1991) and the Ninth Circuit Court of Appeals in Colacurcio v. City of Kent (1999) that government restrictions on dancer nudity and dancer-patron proximity do not affect the content of messages conveyed by erotic dancers. A field experiment was conducted in which dancer nudity (nude vs. partial clothing) and dancer-patron proximity (4 feet; 6 in.; 6 in. plus touch) were manipulated under controlled conditions in an adult night club. After male patrons viewed the dances, they completed questionnaires assessing affective states and reception of erotic, relational intimacy, and social messages. Contrary to the assumptions of the courts, the results showed that the content of messages conveyed by the dancers was significantly altered by restrictions placed on dancer nudity and dancer-patron proximity. These findings are interpreted in terms of social psychological responses to nudity and communication theories of nonverbal behavior. The legal implications of rejecting the assumptions made by the courts in light of the findings of this study are discussed. Finally, suggestions are made for future research.

  3. Oil production, oil prices, and macroeconomic adjustment under different wage assumptions

    International Nuclear Information System (INIS)

    Harvie, C.; Maleka, P.T.

    1992-01-01

    In a previous paper one of the authors developed a simple model to try to identify the possible macroeconomic adjustment processes arising in an economy experiencing a temporary period of oil production, under alternative wage adjustment assumptions, namely nominal and real wage rigidity. Certain assumptions were made regarding the characteristics of actual production, the permanent revenues generated from that oil production, and the net exports/imports of oil. The role of the price of oil, and possible changes in that price was essentially ignored. Here we attempt to incorporate the price of oil, as well as changes in that price, in conjunction with the production of oil, the objective being to identify the contribution which the price of oil, and changes in it, make to the adjustment process itself. The emphasis in this paper is not given to a mathematical derivation and analysis of the model's dynamics of adjustment or its comparative statics, but rather to the derivation of simulation results from the model, for a specific assumed case, using a numerical algorithm program, conducive to the type of theoretical framework utilized here. The results presented suggest that although the adjustment profiles of the macroeconomic variables of interest, for either wage adjustment assumption, remain fundamentally the same, the magnitude of these adjustments is increased. Hence to derive a more accurate picture of the dimensions of adjustment of these macroeconomic variables, it is essential to include the price of oil as well as changes in that price. (Author)

  4. Validity of the isotropic thermal conductivity assumption in supercell lattice dynamics

    Science.gov (United States)

    Ma, Ruiyuan; Lukes, Jennifer R.

    2018-02-01

    Superlattices and nano phononic crystals have attracted significant attention due to their low thermal conductivities and their potential application as thermoelectric materials. A widely used expression to calculate thermal conductivity, presented by Klemens and expressed in terms of the relaxation time by Callaway and Holland, originates from the Boltzmann transport equation. In its most general form, this expression involves a direct summation of the heat current contributions from individual phonons of all wavevectors and polarizations in the first Brillouin zone. In common practice, the expression is simplified by making an isotropic assumption that converts the summation over wavevector to an integral over wavevector magnitude. The isotropic expression has been applied to superlattices and phononic crystals, but its validity for different supercell sizes has not been studied. In this work, the isotropic and direct summation methods are used to calculate the thermal conductivities of bulk Si, and Si/Ge quantum dot superlattices. The results show that the differences between the two methods increase substantially with the supercell size. These differences arise because the vibrational modes neglected in the isotropic assumption provide an increasingly important contribution to the thermal conductivity for larger supercells. To avoid the significant errors that can result from the isotropic assumption, direct summation is recommended for thermal conductivity calculations in superstructures.
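
    A toy numeric sketch of the point at issue, under simplifying assumptions (a mock kinetic-theory integrand, unit group velocity and relaxation time, a cubic first Brillouin zone); it is not the paper's lattice-dynamics calculation, but it shows how the isotropic radial integral reweights zone-corner modes relative to direct summation.

```python
# Direct summation over a cubic k-grid vs. an isotropic radial integral
# for a mock conductivity integrand ~ v^2 * tau * k^2 (arbitrary units).
import numpy as np

N = 40                                   # k-points per axis
k1d = (np.arange(N) + 0.5) / N - 0.5     # fractional coordinates in [-0.5, 0.5)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

v, tau = 1.0, 1.0
k_direct = (v**2 * tau * k2).mean()      # direct sum over the full cubic zone

# Isotropic approximation: radial integral over a sphere of equal volume.
r_max = (3.0 / (4.0 * np.pi)) ** (1.0 / 3.0)
r = np.linspace(0.0, r_max, 2001)
k_iso = np.trapz(v**2 * tau * r**2 * 4.0 * np.pi * r**2, r)

print(f"direct: {k_direct:.4f}, isotropic: {k_iso:.4f}")  # ~0.250 vs ~0.231
```

    The gap in this toy comes entirely from modes near the zone corners that the spherical average cannot represent, loosely mirroring the trend the abstract reports for larger supercells.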

  5. Questioning the "big assumptions". Part I: addressing personal contradictions that impede professional development.

    Science.gov (United States)

    Bowe, Constance M; Lahey, Lisa; Armstrong, Elizabeth; Kegan, Robert

    2003-08-01

    The ultimate success of recent medical curriculum reforms is, in large part, dependent upon the faculty's ability to adopt and sustain new attitudes and behaviors. However, like many New Year's resolutions, sincere intent to change may be short lived and followed by a discouraging return to old behaviors. Failure to sustain the initial resolve to change can be misinterpreted as a lack of commitment to one's original goals and eventually lead to greater effort expended in rationalizing the status quo rather than changing it. The present article outlines how a transformative process that has proven to be effective in managing personal change, Questioning the Big Assumptions, was successfully used in an international faculty development program for medical educators to enhance individual personal satisfaction and professional effectiveness. This process systematically encouraged participants to explore and proactively address currently operative mechanisms that could stall their attempts to change at the professional level. The applications of the Big Assumptions process in faculty development helped individuals to recognize and subsequently utilize unchallenged and deep rooted personal beliefs to overcome unconscious resistance to change. This approach systematically led participants away from circular griping about what was not right in their current situation to identifying the actions that they needed to take to realize their individual goals. By thoughtful testing of personal Big Assumptions, participants designed behavioral changes that could be broadly supported and, most importantly, sustained.

  6. An optical flow algorithm based on gradient constancy assumption for PIV image processing

    Science.gov (United States)

    Zhong, Qianglong; Yang, Hua; Yin, Zhouping

    2017-05-01

    Particle image velocimetry (PIV) has matured as a flow measurement technique. It enables the description of the instantaneous velocity field of the flow by analyzing the particle motion obtained from digitally recorded images. Correlation-based PIV evaluation is widely used because of its good accuracy and robustness. Although very successful, the correlation PIV technique has some weaknesses that can be avoided by optical flow based PIV algorithms. At present, most of the optical flow methods applied to PIV are based on the brightness constancy assumption. However, some factors of flow imaging technology and the nature of fluids make the brightness constancy assumption less appropriate in real PIV cases. In this paper, an implementation of a 2D optical flow algorithm (GCOF) based on the gradient constancy assumption is introduced. The proposed GCOF assumes the edges of the illuminated PIV particles are constant during motion. It comprises two terms: a combined local-global gradient data term and a first-order divergence and vorticity smoothness term. The approach can provide accurate dense motion fields. The approach is tested on synthetic images and on two experimental flows. The comparison of GCOF with other optical flow algorithms indicates that the proposed method is more accurate, especially under illumination variation. The comparison of GCOF with the correlation PIV technique shows that the proposed GCOF has advantages in preserving small divergence and vorticity structures of the motion field and produces fewer outliers. As a consequence, the GCOF acquires a more accurate and better topological description of the turbulent flow.
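
    A minimal sketch of a gradient-constancy data term under stated assumptions (integer displacements, a crude circular-shift warp, synthetic frames); the function and all details are illustrative, not the GCOF implementation.

```python
# Penalize changes in the image gradient rather than the brightness itself,
# so the cost is insensitive to a global illumination change between frames.
import numpy as np

def gradient_constancy_cost(img1, img2, u, v):
    """Sum of squared differences of warped gradients for integer flow (u, v)."""
    g1y, g1x = np.gradient(img1.astype(float))
    g2y, g2x = np.gradient(img2.astype(float))
    # crude integer warp of frame-2 gradients by the candidate displacement
    g2x_w = np.roll(np.roll(g2x, -v, axis=0), -u, axis=1)
    g2y_w = np.roll(np.roll(g2y, -v, axis=0), -u, axis=1)
    return np.sum((g2x_w - g1x) ** 2 + (g2y_w - g1y) ** 2)

# A brightness offset between frames leaves the gradient term unchanged.
rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, 3, axis=1) + 0.2   # shift (u=3, v=0) plus brightening
costs = {u: gradient_constancy_cost(frame1, frame2, u, 0) for u in range(6)}
print(min(costs, key=costs.get))  # -> 3, despite the illumination change
```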

  7. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Directory of Open Access Journals (Sweden)

    Giordano James

    2010-01-01

    Full Text Available Abstract A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice.

  8. On the ontological assumptions of the medical model of psychiatry: philosophical considerations and pragmatic tasks

    Science.gov (United States)

    2010-01-01

    A common theme in the contemporary medical model of psychiatry is that pathophysiological processes are centrally involved in the explanation, evaluation, and treatment of mental illnesses. Implied in this perspective is that clinical descriptors of these pathophysiological processes are sufficient to distinguish underlying etiologies. Psychiatric classification requires differentiation between what counts as normality (i.e.- order), and what counts as abnormality (i.e.- disorder). The distinction(s) between normality and pathology entail assumptions that are often deeply presupposed, manifesting themselves in statements about what mental disorders are. In this paper, we explicate that realism, naturalism, reductionism, and essentialism are core ontological assumptions of the medical model of psychiatry. We argue that while naturalism, realism, and reductionism can be reconciled with advances in contemporary neuroscience, essentialism - as defined to date - may be conceptually problematic, and we pose an eidetic construct of bio-psychosocial order and disorder based upon complex systems' dynamics. However we also caution against the overuse of any theory, and claim that practical distinctions are important to the establishment of clinical thresholds. We opine that as we move ahead toward both a new edition of the Diagnostic and Statistical Manual, and a proposed Decade of the Mind, the task at hand is to re-visit nosologic and ontologic assumptions pursuant to a re-formulation of diagnostic criteria and practice. PMID:20109176

  9. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning

    Science.gov (United States)

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning. PMID:27310576

  10. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Science.gov (United States)

    Hsu, Anne; Griffiths, Thomas L

    2016-01-01

    A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.
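
    A toy posterior calculation, with hypothetical grammars and counts, showing why the two sampling assumptions treat absent constructions differently.

```python
# Toy grammars: A allows only construction c1; B allows c1 and c2 equally.
# The learner sees n sentences, all of type c1, and c2 never appears.
n = 20

# Strong sampling: data are drawn from the grammar's own distribution,
# so B pays for every observation in which c2 failed to appear.
lik_A_strong, lik_B_strong = 1.0 ** n, 0.5 ** n

# Weak sampling: data are sampled independently of the grammar and only
# checked for grammaticality; both grammars accept every c1 sentence.
lik_A_weak, lik_B_weak = 1.0, 1.0

for name, (lA, lB) in {"strong": (lik_A_strong, lik_B_strong),
                       "weak": (lik_A_weak, lik_B_weak)}.items():
    # uniform prior over the two grammars
    print(f"{name} sampling: P(A | data) = {lA / (lA + lB):.6f}")
# strong sampling -> ~1.000000 (indirect negative evidence favors A)
# weak sampling   -> 0.500000 (absence of c2 carries no evidence)
```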

  11. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    Science.gov (United States)

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.

  12. Sampling Assumptions Affect Use of Indirect Negative Evidence in Language Learning.

    Directory of Open Access Journals (Sweden)

    Anne Hsu

    Full Text Available A classic debate in cognitive science revolves around understanding how children learn complex linguistic patterns, such as restrictions on verb alternations and contractions, without negative evidence. Recently, probabilistic models of language learning have been applied to this problem, framing it as a statistical inference from a random sample of sentences. These probabilistic models predict that learners should be sensitive to the way in which sentences are sampled. There are two main types of sampling assumptions that can operate in language learning: strong and weak sampling. Strong sampling, as assumed by probabilistic models, assumes the learning input is drawn from a distribution of grammatical samples from the underlying language and aims to learn this distribution. Thus, under strong sampling, the absence of a sentence construction from the input provides evidence that it has low or zero probability of grammaticality. Weak sampling does not make assumptions about the distribution from which the input is drawn, and thus the absence of a construction from the input is not used as evidence of its ungrammaticality. We demonstrate in a series of artificial language learning experiments that adults can produce behavior consistent with both sets of sampling assumptions, depending on how the learning problem is presented. These results suggest that people use information about the way in which linguistic input is sampled to guide their learning.

  13. An optical flow algorithm based on gradient constancy assumption for PIV image processing

    International Nuclear Information System (INIS)

    Zhong, Qianglong; Yang, Hua; Yin, Zhouping

    2017-01-01

    Particle image velocimetry (PIV) has matured as a flow measurement technique. It enables the description of the instantaneous velocity field of the flow by analyzing the particle motion obtained from digitally recorded images. Correlation-based PIV evaluation is widely used because of its good accuracy and robustness. Although very successful, the correlation PIV technique has some weaknesses that can be avoided by optical flow based PIV algorithms. At present, most of the optical flow methods applied to PIV are based on the brightness constancy assumption. However, some factors of flow imaging technology and the nature of fluids make the brightness constancy assumption less appropriate in real PIV cases. In this paper, an implementation of a 2D optical flow algorithm (GCOF) based on the gradient constancy assumption is introduced. The proposed GCOF assumes the edges of the illuminated PIV particles are constant during motion. It comprises two terms: a combined local-global gradient data term and a first-order divergence and vorticity smoothness term. The approach can provide accurate dense motion fields. The approach is tested on synthetic images and on two experimental flows. The comparison of GCOF with other optical flow algorithms indicates that the proposed method is more accurate, especially under illumination variation. The comparison of GCOF with the correlation PIV technique shows that the proposed GCOF has advantages in preserving small divergence and vorticity structures of the motion field and produces fewer outliers. As a consequence, the GCOF acquires a more accurate and better topological description of the turbulent flow. (paper)

  14. Validation of quasi-induced exposure representativeness assumption among young drivers.

    Science.gov (United States)

    Curry, Allison E; Pfeiffer, Melissa R; Elliott, Michael R

    2016-05-18

    Young driver studies have applied quasi-induced exposure (QIE) methods to assess relationships between demographic and behavioral factors and at-fault crash involvement, but QIE's primary assumption of representativeness has not yet been validated among young drivers. Determining whether nonresponsible young drivers in clean (i.e., only one driver is responsible) 2-vehicle crashes are reasonably representative of the general young driving population is an important step toward ensuring valid QIE use in young driver studies. We applied previously established validation methods to conduct the first study, to our knowledge, focused on validating the QIE representativeness assumption in a young driver population. We utilized New Jersey's state crash and licensing databases (2008-2012) to examine the representativeness assumption among 17- to 20-year-old nonresponsible drivers involved in clean multivehicle crashes. It has been hypothesized that if not-at-fault drivers in clean 2-vehicle crashes are a true representation of the driving population, it would be expected that nonresponsible drivers in clean 3-or-more-vehicle crashes also represent this same driving population (Jiang and Lyles 2010). Thus, we compared distributions of age, gender, and vehicle type among (1) nonresponsible young drivers in clean 2-vehicle crashes and (2) the first nonresponsible young driver in clean crashes involving 3 or more vehicles to (3) all other nonresponsible young drivers in clean crashes involving 3 or more vehicles. Distributions were compared using chi-square tests and conditional logistic regression; analyses were conducted for all young drivers and stratified by license status (intermediate vs. fully licensed drivers), crash location, and time of day of the crash. There were 41,323 nonresponsible drivers in clean 2-vehicle crashes and 6,464 nonresponsible drivers in clean 3-or-more-vehicle crashes. Overall, we found that the distributions of age, gender, and vehicle type were
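
    A sketch of the kind of comparison described, using scipy's chi-square test of homogeneity on invented counts (the real study used New Jersey crash and licensing data).

```python
# Hypothetical counts: age distribution of nonresponsible young drivers in
# clean 2-vehicle crashes vs. the remaining nonresponsible drivers in clean
# 3-or-more-vehicle crashes.
from scipy.stats import chi2_contingency

table = [[9500, 10800, 10600, 10400],   # ages 17-20, clean 2-vehicle
         [1500,  1700,  1650,  1614]]   # ages 17-20, clean 3+-vehicle
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# a non-significant result is consistent with the representativeness assumption
```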

  15. Climate Change: Implications for the Assumptions, Goals and Methods of Urban Environmental Planning

    Directory of Open Access Journals (Sweden)

    Kristina Hill

    2016-12-01

    Full Text Available As a result of increasing awareness of the implications of global climate change, shifts are becoming necessary and apparent in the assumptions, concepts, goals and methods of urban environmental planning. This review will present the argument that these changes represent a genuine paradigm shift in urban environmental planning. Reflection and action to develop this paradigm shift is critical now and in the next decades, because environmental planning for cities will only become more urgent as we enter a new climate period. The concepts, methods and assumptions that urban environmental planners have relied on in previous decades to protect people, ecosystems and physical structures are inadequate if they do not explicitly account for a rapidly changing regional climate context, specifically from a hydrological and ecological perspective. The over-arching concept of spatial suitability that guided planning in most of the 20th century has already given way to concepts that address sustainability, recognizing the importance of temporality. Quite rapidly, the concept of sustainability has been replaced in many planning contexts by the priority of establishing resilience in the face of extreme disturbance events. Now even this concept of resilience is being incorporated into a novel concept of urban planning as a process of adaptation to permanent, incremental environmental changes. This adaptation concept recognizes the necessity for continued resilience to extreme events, while acknowledging that permanent changes are also occurring as a result of trends that have a clear direction over time, such as rising sea levels. Similarly, the methods of urban environmental planning have relied on statistical data about hydrological and ecological systems that will not adequately describe these systems under a new climate regime. These methods are beginning to be replaced by methods that make use of early warning systems for regime shifts, and process

  16. Evaluation of assumptions for estimating chemical light extinction at U.S. national parks.

    Science.gov (United States)

    Lowenthal, Douglas; Zielinska, Barbara; Samburova, Vera; Collins, Don; Taylor, Nathan; Kumar, Naresh

    2015-03-01

    Studies were conducted at Great Smoky Mountains National Park (NP) (GRSM), Tennessee, Mount Rainier NP (MORA), Washington, and Acadia NP (ACAD), Maine, to evaluate assumptions used to estimate aerosol light extinction from chemical composition. The revised IMPROVE equation calculates light scattering from concentrations of PM2.5 sulfates, nitrates, organic carbon mass (OM), and soil. Organics are assumed to be nonhygroscopic. Organic carbon (OC) is converted to OM with a multiplier of 1.8. Experiments were conducted to evaluate assumptions on aerosol hydration state, the OM/OC ratio, OM hygroscopicity, and mass scattering efficiencies. Sulfates were neutralized by ammonium during winter at GRSM (W, winter) and at MORA during summer but were acidic at ACAD and GRSM (S, summer) during summer. Hygroscopic growth was mostly smooth and continuous, rarely exhibiting hysteresis. Deliquescence was not observed except infrequently during winter at GRSM (W). Water-soluble organic carbon (WSOC) was separated from bulk OC with solid-phase absorbents. The average OM/OC ratios were 2.0, 2.7, 2.1, and 2.2 at GRSM (S), GRSM (W), MORA, and ACAD, respectively. Hygroscopic growth factors (GF) at relative humidity (RH) 90% for aerosols generated from WSOC extracts averaged 1.19, 1.06, 1.13, and 1.16 at GRSM (S), GRSM (W), MORA, and ACAD, respectively. Thus, the assumption that OM is not hygroscopic may lead to underestimation of its contribution to light scattering. Studies at IMPROVE sites conducted in U.S. national parks showed that aerosol organics comprise more PM2.5 mass and absorb more water as a function of relative humidity than is currently assumed by the IMPROVE equation for calculating chemical light extinction. Future strategies for reducing regional haze may therefore need to focus more heavily on understanding the origins and control of anthropogenic sources of organic aerosols.
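
    A small arithmetic sketch of the OM sensitivity noted above: OC is converted to OM with a multiplier, and the measured OM/OC ratios (taken from the abstract) exceed the 1.8 default; the OC concentration itself is a made-up number.

```python
# Organic mass from organic carbon under different OM/OC multipliers.
oc = 2.0                          # hypothetical OC concentration, ug/m3
om_default = 1.8 * oc             # revised IMPROVE assumption
for site, ratio in {"GRSM (S)": 2.0, "GRSM (W)": 2.7,
                    "MORA": 2.1, "ACAD": 2.2}.items():
    om = ratio * oc
    print(f"{site}: OM = {om:.1f} ug/m3 "
          f"({100 * (om / om_default - 1):+.0f}% vs. 1.8*OC)")
```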

  17. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
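
    A minimal frequentist sketch of the mixing-model idea, with invented tracer signatures for the three sources; a Bayesian version would place priors on the proportions, which is exactly where the sensitivity reported above enters.

```python
# Find nonnegative source proportions, summing to one, that best reproduce
# a measured SPM tracer signature (all values hypothetical).
import numpy as np
from scipy.optimize import nnls

# columns: arable topsoil, road verge, subsurface; rows: three tracers
sources = np.array([[ 12.0,  45.0,  5.0],
                    [  3.2,   1.1,  0.4],
                    [210.0, 180.0, 90.0]])
mixture = np.array([7.6, 1.2, 125.0])

w = 1e3                                   # weight enforcing sum-to-one
A = np.vstack([sources, w * np.ones((1, 3))])
b = np.append(mixture, w)
props, _ = nnls(A, b)
print(props.round(3), props.sum().round(3))   # subsurface dominates here
```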

  18. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-05-23

    Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there exists a need for accurate and reliable computational hot spot prediction methods. Compared to the supervised hot spot prediction algorithms, the semi-supervised prediction methods can take into consideration both the labeled and unlabeled residues in the dataset during the prediction procedure. The transductive support vector machine has been utilized for this task and demonstrated a better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction, by considering all three semi-supervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones, along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, which is implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
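
    IterPropMCS itself is not reproduced here; the sketch below shows the generic transductive label-propagation step this family of methods builds on, using scikit-learn on invented two-dimensional stand-ins for residue features.

      import numpy as np
      from sklearn.semi_supervised import LabelPropagation

      rng = np.random.default_rng(1)

      # Toy stand-in for structure-based residue features: two noisy clusters,
      # hot spots (1) vs non-hot spots (0). Real inputs would be interface
      # residue descriptors, which this sketch does not attempt to reproduce.
      X_hot = rng.normal(loc=[1.5, 1.5], scale=0.4, size=(40, 2))
      X_cold = rng.normal(loc=[-1.0, -1.0], scale=0.4, size=(60, 2))
      X = np.vstack([X_hot, X_cold])
      y = np.array([1] * 40 + [0] * 60)

      # Hide most labels: -1 marks unlabeled residues (the transductive setting)
      y_semi = np.full(100, -1)
      labeled_idx = rng.choice(100, size=10, replace=False)
      y_semi[labeled_idx] = y[labeled_idx]

      model = LabelPropagation(kernel="rbf", gamma=2.0)
      model.fit(X, y_semi)

      unlabeled = y_semi == -1
      acc = (model.transduction_[unlabeled] == y[unlabeled]).mean()
      print(f"transductive accuracy on unlabeled residues: {acc:.2f}")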

  19. The Avalanche Hypothesis and Compression of Morbidity: Testing Assumptions through Cohort-Sequential Analysis.

    Directory of Open Access Journals (Sweden)

    Jordan Silberman

    Full Text Available The compression of morbidity model posits a breakpoint in the adult lifespan that separates an initial period of relative health from a subsequent period of ever increasing morbidity. Researchers often assume that such a breakpoint exists; however, this assumption is hitherto untested. Objective: To test the assumption that a breakpoint exists, which we term a morbidity tipping point, separating a period of relative health from a subsequent deterioration in health status. An analogous tipping point for healthcare costs was also investigated. Methods: Four years of adults' (N = 55,550) morbidity and costs data were retrospectively analyzed. Data were collected in Pittsburgh, PA between 2006 and 2009; analyses were performed in Rochester, NY and Ann Arbor, MI in 2012 and 2013. Cohort-sequential and hockey stick regression models were used to characterize long-term trajectories and tipping points, respectively, for both morbidity and costs. Results: Morbidity increased exponentially with age (P<.001). A morbidity tipping point was observed at age 45.5 (95% CI, 41.3-49.7). An exponential trajectory was also observed for costs (P<.001), with a costs tipping point occurring at age 39.5 (95% CI, 32.4-46.6). Following their respective tipping points, both morbidity and costs increased substantially (Ps<.001). Conclusions: Findings support the existence of a morbidity tipping point, confirming an important but untested assumption. This tipping point, however, may occur earlier in the lifespan than is widely assumed. An "avalanche of morbidity" occurred after the morbidity tipping point: an ever increasing rate of morbidity progression. For costs, an analogous tipping point and "avalanche" were observed. The time point at which costs began to increase substantially occurred approximately 6 years before health status began to deteriorate.
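
    Hockey stick regression, the breakpoint method named above, can be sketched in a few lines of Python. The data are synthetic, with a built-in tipping point at age 45, and the confidence interval is a simple Wald interval from the fitted covariance matrix.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(2)

      def hockey_stick(age, base, slope, breakpoint):
          """Flat at `base` before the breakpoint, linear increase after it."""
          return base + slope * np.maximum(age - breakpoint, 0.0)

      # Synthetic morbidity scores with a true tipping point at age 45
      age = rng.uniform(20, 80, size=2000)
      morbidity = hockey_stick(age, base=1.0, slope=0.15, breakpoint=45.0)
      morbidity += rng.normal(scale=0.5, size=age.size)

      params, cov = curve_fit(hockey_stick, age, morbidity, p0=[1.0, 0.1, 50.0])
      se = np.sqrt(np.diag(cov))
      print(f"estimated tipping point: {params[2]:.1f} "
            f"(95% CI {params[2] - 1.96 * se[2]:.1f}-{params[2] + 1.96 * se[2]:.1f})")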

  20. On the underlying assumptions of threshold Boolean networks as a model for genetic regulatory network behavior.

    Science.gov (United States)

    Tran, Van; McCall, Matthew N; McMurray, Helene R; Almudevar, Anthony

    2013-01-01

    Boolean networks (BoN) are relatively simple and interpretable models of gene regulatory networks. Specifying these models with fewer parameters while retaining their ability to describe complex regulatory relationships is an ongoing methodological challenge. Additionally, extending these models to incorporate variable gene decay rates, asynchronous gene response, and synergistic regulation while maintaining their Markovian nature increases the applicability of these models to genetic regulatory networks (GRN). We explore a previously-proposed class of BoNs characterized by linear threshold functions, which we refer to as threshold Boolean networks (TBN). Compared to traditional BoNs with unconstrained transition functions, these models require far fewer parameters and offer a more direct interpretation. However, the functional form of a TBN does result in a reduction in the regulatory relationships which can be modeled. We show that TBNs can be readily extended to permit self-degradation, with explicitly modeled degradation rates. We note that the introduction of variable degradation compromises the Markovian property fundamental to BoN models but show that a simple state augmentation procedure restores their Markovian nature. Next, we study the effect of assumptions regarding self-degradation on the set of possible steady states. Our findings are captured in two theorems relating self-degradation and regulatory feedback to the steady state behavior of a TBN. Finally, we explore assumptions of synchronous gene response and asynergistic regulation and show that TBNs can be easily extended to relax these assumptions. Applying our methods to the budding yeast cell-cycle network revealed that although the network is complex, its steady state is simplified by the presence of self-degradation and lack of purely positive regulatory cycles.
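
    The core of a TBN is a one-line synchronous update rule. A hypothetical three-gene example (weights and thresholds invented, without the paper's self-degradation extension):

      import numpy as np

      def tbn_step(state, W, theta):
          """Synchronous threshold Boolean network update:
          gene i switches on when its weighted input exceeds its threshold."""
          return (W @ state > theta).astype(int)

      # Toy 3-gene network: gene 0 activates gene 1, gene 1 activates gene 2,
      # gene 2 represses gene 0 (weights and thresholds are illustrative).
      W = np.array([[ 0, 0, -1],
                    [ 1, 0,  0],
                    [ 0, 1,  0]])
      theta = np.zeros(3)

      # Iterate a few steps; this toy network settles into a fixed point.
      state = np.array([1, 0, 0])
      for t in range(8):
          print(t, state)
          state = tbn_step(state, W, theta)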

  1. A computational model to investigate assumptions in the headturn preference procedure

    Directory of Open Access Journals (Sweden)

    Christina eBergmann

    2013-10-01

    Full Text Available In this paper we use a computational model to investigate four assumptions that are tacitly present in interpreting the results of studies on infants' speech processing abilities using the Headturn Preference Procedure (HPP: (1 behavioural differences originate in different processing; (2 processing involves some form of recognition; (3 words are segmented from connected speech; and (4 differences between infants should not affect overall results. In addition, we investigate the impact of two potentially important aspects in the design and execution of the experiments: (a the specific voices used in the two parts on HPP experiments (familiarisation and test and (b the experimenter's criterion for what is a sufficient headturn angle. The model is designed to be maximise cognitive plausibility. It takes real speech as input, and it contains a module that converts the output of internal speech processing and recognition into headturns that can yield real-time listening preference measurements. Internal processing is based on distributed episodic representations in combination with a matching procedure based on the assumptions that complex episodes can be decomposed as positive weighted sums of simpler constituents. Model simulations show that the first assumptions hold under two different definitions of recognition. However, explicit segmentation is not necessary to simulate the behaviours observed in infant studies. Differences in attention span between infants can affect the outcomes of an experiment. The same holds for the experimenter's decision criterion. The speakers used in experiments affect outcomes in complex ways that require further investigation. The paper ends with recommendations for future studies using the HPP.

  2. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions.

    Science.gov (United States)

    Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant was compared for a series of model assumptions. Three different model approaches describing BNR are considered. In the reference case, the original model implementations are used to simulate WWTP1 (ASM1 & 3) and WWTP2 (ASM2d). The second set of models includes a reactive settler, which extends the description of the non-reactive TSS sedimentation and transport in the reference case with the full set of ASM processes. Finally, the third set of models is based on including electron acceptor dependency of biomass decay rates for ASM1 (WWTP1) and ASM2d (WWTP2). The results show that incorporation of a reactive settler: (1) increases the hydrolysis of particulates; (2) increases the overall plant's denitrification efficiency by reducing the S(NOx) concentration at the bottom of the clarifier; (3) increases the oxidation of COD compounds; (4) increases X(OHO) and X(ANO) decay; and, finally, (5) increases the growth of X(PAO) and formation of X(PHA,Stor) for ASM2d, which has a major impact on the whole P removal system. Introduction of electron acceptor dependent decay leads to a substantial increase of the concentration of X(ANO), X(OHO) and X(PAO) in the bottom of the clarifier. The paper ends with a critical discussion of the influence of the different model assumptions, and emphasizes the need for a model user to understand the significant differences in simulation results that are obtained when applying different combinations of 'standard' models.

  3. Influence of simulation assumptions and input parameters on energy balance calculations of residential buildings

    International Nuclear Information System (INIS)

    Dodoo, Ambrose; Tettey, Uniben Yao Ayikoe; Gustavsson, Leif

    2017-01-01

    In this study, we modelled the influence of different simulation assumptions on energy balances of two variants of a residential building, comprising the building in its existing state and with energy-efficient improvements. We explored how selected parameter combinations and variations affect the energy balances of the building configurations. The selected parameters encompass outdoor microclimate, building thermal envelope and household electrical equipment including technical installations. Our modelling takes into account hourly as well as seasonal profiles of different internal heat gains. The results suggest that the impact of parameter interactions on calculated space heating of buildings is somewhat small and relatively more noticeable for an energy-efficient building in contrast to a conventional building. We find that the influence of parameters combinations is more apparent as more individual parameters are varied. The simulations show that a building's calculated space heating demand is significantly influenced by how heat gains from electrical equipment are modelled. For the analyzed building versions, calculated final energy for space heating differs by 9–14 kWh/m 2 depending on the assumed energy efficiency level for electrical equipment. The influence of electrical equipment on calculated final space heating is proportionally more significant for an energy-efficient building compared to a conventional building. This study shows the influence of different simulation assumptions and parameter combinations when varied simultaneously. - Highlights: • Energy balances are modelled for conventional and efficient variants of a building. • Influence of assumptions and parameter combinations and variations are explored. • Parameter interactions influence is apparent as more single parameters are varied. • Calculated space heating demand is notably affected by how heat gains are modelled.

  4. Using Covert Response Activation to Test Latent Assumptions of Formal Decision-Making Models in Humans.

    Science.gov (United States)

    Servant, Mathieu; White, Corey; Montagnini, Anna; Burle, Borís

    2015-07-15

    Most decisions that we make build upon multiple streams of sensory evidence and control mechanisms are needed to filter out irrelevant information. Sequential sampling models of perceptual decision making have recently been enriched by attentional mechanisms that weight sensory evidence in a dynamic and goal-directed way. However, the framework retains the longstanding hypothesis that motor activity is engaged only once a decision threshold is reached. To probe latent assumptions of these models, neurophysiological indices are needed. Therefore, we collected behavioral and EMG data in the flanker task, a standard paradigm to investigate decisions about relevance. Although the models captured response time distributions and accuracy data, EMG analyses of response agonist muscles challenged the assumption of independence between decision and motor processes. Those analyses revealed covert incorrect EMG activity ("partial error") in a fraction of trials in which the correct response was finally given, providing intermediate states of evidence accumulation and response activation at the single-trial level. We extended the models by allowing motor activity to occur before a commitment to a choice and demonstrated that the proposed framework captured the rate, latency, and EMG surface of partial errors, along with the speed of the correction process. In return, EMG data provided strong constraints to discriminate between competing models that made similar behavioral predictions. Our study opens new theoretical and methodological avenues for understanding the links among decision making, cognitive control, and motor execution in humans. Sequential sampling models of perceptual decision making assume that sensory information is accumulated until a criterion quantity of evidence is obtained, from where the decision terminates in a choice and motor activity is engaged. The very existence of covert incorrect EMG activity ("partial error") during the evidence accumulation process challenges this assumption.
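
    The threshold assumption being tested sits at the heart of sequential sampling models such as the drift diffusion model, sketched below with invented parameter values: evidence accumulates noisily, and the (challenged) assumption is that motor output begins only at the boundary crossing.

      import numpy as np

      rng = np.random.default_rng(3)

      def diffusion_trial(drift=0.3, threshold=1.0, dt=0.001, noise=1.0):
          """Single-trial evidence accumulation; returns (choice, decision time).
          Standard sequential sampling assumption: the motor response is
          triggered only once the accumulator crosses +/- threshold."""
          x, t = 0.0, 0.0
          while abs(x) < threshold:
              x += drift * dt + noise * np.sqrt(dt) * rng.normal()
              t += dt
          return (1 if x > 0 else 0), t

      trials = [diffusion_trial() for _ in range(2000)]
      choices, rts = map(np.array, zip(*trials))
      print(f"accuracy: {choices.mean():.2f}, mean decision time: {rts.mean():.3f} s")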

  5. Old and New Ideas for Data Screening and Assumption Testing for Exploratory and Confirmatory Factor Analysis

    Science.gov (United States)

    Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip

    2011-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561
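
    The point about product-moment correlations and categorical items can be seen in a short simulation: coarsening two correlated continuous variables into Likert-type categories attenuates their Pearson correlation, the distortion that polychoric correlations are designed to correct. The latent correlation and thresholds are arbitrary illustrative choices.

      import numpy as np

      rng = np.random.default_rng(4)

      # Two latent continuous traits correlated at rho = 0.6
      rho, n = 0.6, 100_000
      z1 = rng.normal(size=n)
      z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)

      # Coarsen each into a 4-category "Likert item" by cutting at fixed thresholds
      cuts = [-1.0, 0.0, 1.0]
      item1 = np.digitize(z1, cuts)
      item2 = np.digitize(z2, cuts)

      r_latent = np.corrcoef(z1, z2)[0, 1]
      r_items = np.corrcoef(item1, item2)[0, 1]   # attenuated vs the latent value
      print(f"latent correlation: {r_latent:.3f}")
      print(f"Pearson correlation of categorized items: {r_items:.3f}")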

  6. Testing the assumption of linear dependence between the rolling friction torque and normal force

    Directory of Open Access Journals (Sweden)

    Alaci Stelian

    2017-01-01

    Full Text Available Rolling friction is present in all nonconforming bodies in contact. A permanent topic is the characterization of the moment of rolling friction. A number of authors accept the hypothesis of linear dependency between the rolling torque and the normal force while other researchers disagree with this assumption. The present paper proposes a method for testing the hypothesis of linear relationship between rolling moment and normal pressing force. A doubly supported cycloidal pendulum is used in two situations: symmetrically and asymmetrically supported, respectively. Under the hypothesis of a linear relationship, the motions of the pendulum should be identical.

  7. Local conservation scores without a priori assumptions on neutral substitution rates.

    Science.gov (United States)

    Dingel, Janis; Hanus, Pavol; Leonardi, Niccolò; Hagenauer, Joachim; Zech, Jürgen; Mueller, Jakob C

    2008-04-11

    Comparative genomics aims to detect signals of evolutionary conservation as an indicator of functional constraint. Surprisingly, results of the ENCODE project revealed that about half of the experimentally verified functional elements found in non-coding DNA were classified as unconstrained by computational predictions. Following this observation, it has been hypothesized that this may be partly explained by biased estimates on neutral evolutionary rates used by existing sequence conservation metrics. All methods we are aware of rely on a comparison with the neutral rate and conservation is estimated by measuring the deviation of a particular genomic region from this rate. Consequently, it is a reasonable assumption that inaccurate neutral rate estimates may lead to biased conservation and constraint estimates. We propose a conservation signal that is produced by local Maximum Likelihood estimation of evolutionary parameters using an optimized sliding window and present a Kullback-Leibler projection that allows multiple different estimated parameters to be transformed into a conservation measure. This conservation measure does not rely on assumptions about neutral evolutionary substitution rates and little a priori assumptions on the properties of the conserved regions are imposed. We show the accuracy of our approach (KuLCons) on synthetic data and compare it to the scores generated by state-of-the-art methods (phastCons, GERP, SCONE) in an ENCODE region. We find that KuLCons is most often in agreement with the conservation/constraint signatures detected by GERP and SCONE while qualitatively very different patterns from phastCons are observed. Opposed to standard methods KuLCons can be extended to more complex evolutionary models, e.g. taking insertion and deletion events into account and corresponding results show that scores obtained under this model can diverge significantly from scores using the simpler model. Our results suggest that discriminating among the
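
    KuLCons itself rests on local Maximum Likelihood estimation of evolutionary model parameters; the sketch below illustrates only the Kullback-Leibler ingredient in deliberately simplified form, scoring how far a window's base composition departs from a background distribution. All counts are invented.

      import numpy as np

      def kl_conservation(column_counts, background):
          """Kullback-Leibler divergence of observed base frequencies from a
          background (neutral-proxy) distribution; higher = more conserved.
          A toy analogue of projecting estimated parameters onto one score."""
          p = column_counts / column_counts.sum()
          q = np.asarray(background)
          mask = p > 0
          return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

      background = np.array([0.25, 0.25, 0.25, 0.25])   # A, C, G, T
      window_conserved = np.array([48, 1, 0, 1])         # nearly invariant column
      window_neutral = np.array([13, 12, 13, 12])        # matches background
      print(f"conserved window: {kl_conservation(window_conserved, background):.2f} bits")
      print(f"neutral window:   {kl_conservation(window_neutral, background):.2f} bits")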

  8. Local conservation scores without a priori assumptions on neutral substitution rates

    Directory of Open Access Journals (Sweden)

    Hagenauer Joachim

    2008-04-01

    Full Text Available Background: Comparative genomics aims to detect signals of evolutionary conservation as an indicator of functional constraint. Surprisingly, results of the ENCODE project revealed that about half of the experimentally verified functional elements found in non-coding DNA were classified as unconstrained by computational predictions. Following this observation, it has been hypothesized that this may be partly explained by biased estimates on neutral evolutionary rates used by existing sequence conservation metrics. All methods we are aware of rely on a comparison with the neutral rate and conservation is estimated by measuring the deviation of a particular genomic region from this rate. Consequently, it is a reasonable assumption that inaccurate neutral rate estimates may lead to biased conservation and constraint estimates. Results: We propose a conservation signal that is produced by local Maximum Likelihood estimation of evolutionary parameters using an optimized sliding window and present a Kullback-Leibler projection that allows multiple different estimated parameters to be transformed into a conservation measure. This conservation measure does not rely on assumptions about neutral evolutionary substitution rates and little a priori assumptions on the properties of the conserved regions are imposed. We show the accuracy of our approach (KuLCons) on synthetic data and compare it to the scores generated by state-of-the-art methods (phastCons, GERP, SCONE) in an ENCODE region. We find that KuLCons is most often in agreement with the conservation/constraint signatures detected by GERP and SCONE while qualitatively very different patterns from phastCons are observed. Opposed to standard methods, KuLCons can be extended to more complex evolutionary models, e.g. taking insertion and deletion events into account, and corresponding results show that scores obtained under this model can diverge significantly from scores using the simpler model.

  9. About tests of the “simplifying” assumption for conditional copulas

    Directory of Open Access Journals (Sweden)

    Derumigny Alexis

    2017-08-01

    Full Text Available We discuss the so-called “simplifying assumption” of conditional copulas in a general framework. We introduce several tests of the latter assumption for non- and semiparametric copula models. Some related test procedures based on conditioning subsets instead of point-wise events are proposed. The limiting distributions of such test statistics under the null are approximated by several bootstrap schemes, most of them being new. We prove the validity of a particular semiparametric bootstrap scheme. Some simulations illustrate the relevance of our results.

  10. A review of some critical assumptions in the relationship between economic activity and freight transport

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Kveiborg, Ole

    2004-01-01

    national accounts. With these data we are able to check some of the assumptions that have commonly been made. Our findings thus have implications for future freight modelling exercises, in particular for what data it is necessary to collect and what relationships it is necessary to seek to model explicitly....... We find that it is necessary to account for changing composition of production across industries, but that the commodity mix within each industry safely can be regarded as constant. Changing value densities account for almost a third of transport growth; however, this is attributable to the first...

  11. Assumption and program of the earlier stage construction of L/ILW disposal site

    International Nuclear Information System (INIS)

    Li Xuequn; Chen Shi; Li Xinbang

    1993-01-01

    The authors analysed the production and treatment of low- and intermediate-level radwastes (L/ILW) in China. Some problems and the current situation in this field are introduced. Over the past ten years, preliminary efforts have been made by CNNC (China National Nuclear Corporation) in policy, laws and rules, development programs, management systems, siting, engineering techniques, and safety assessment for radwaste disposal. The investment for the earlier stage work of L/ILW disposal site construction is estimated, and the program and assumptions for construction of the L/ILW disposal site are reviewed.

  12. Untested assumptions: psychological research and credibility assessment in legal decision-making

    Directory of Open Access Journals (Sweden)

    Jane Herlihy

    2015-05-01

    Full Text Available Background: Trauma survivors often have to negotiate legal systems such as refugee status determination or the criminal justice system. Methods & results: We outline and discuss the contribution which research on trauma and related psychological processes can make to two particular areas of law where complex and difficult legal decisions must be made: in claims for refugee and humanitarian protection, and in reporting and prosecuting sexual assault in the criminal justice system. Conclusion: There is a breadth of psychological knowledge that, if correctly applied, would limit the inappropriate reliance on assumptions and myth in legal decision-making in these settings. Specific recommendations are made for further study.

  13. Bases, Assumptions, and Results of the Flowsheet Calculations for the Decision Phase Salt Disposition Alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.; Campbell, S.G.

    2001-03-26

    The High Level Waste (HLW) Salt Disposition Systems Engineering Team was formed on March 13, 1998, and chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted wasteform. This requirement arises because the existing In-Tank Precipitation process at the Savannah River Site, as currently configured, cannot simultaneously meet the HLW production and Authorization Basis safety requirements. This engineering study was performed in four phases. This document provides the technical bases, assumptions, and results of this engineering study.

  14. Experimental evaluation of the pure configurational stress assumption in the flow dynamics of entangled polymer melts

    DEFF Research Database (Denmark)

    Rasmussen, Henrik K.; Bejenariu, Anca Gabriela; Hassager, Ole

    2010-01-01

    to the flow in the non-linear flow regime. This has allowed highly elastic measurements within the limit of pure orientational stress, as the time of the flow was considerably smaller than the Rouse time. A Doi-Edwards [J. Chem. Soc., Faraday Trans. 2 74, 1818-1832 (1978)] type of constitutive model...... with the assumption of pure configurational stress was accurately able to predict the startup as well as the reversed flow behavior. This confirms that this commonly used theoretical picture for the flow of polymeric liquids is a correct physical principle to apply. © 2010 The Society of Rheology. [DOI: 10.1122/1.3496378]...

  15. Expressing Environment Assumptions and Real-time Requirements for a Distributed Embedded System with Shared Variables

    DEFF Research Database (Denmark)

    Tjell, Simon; Fernandes, João Miguel

    2008-01-01

    In a distributed embedded system, it is often necessary to share variables among its computing nodes to allow the distribution of control algorithms. It is therefore necessary to include a component in each node that provides the service of variable sharing. For that type of component, this paper...... for the component. The CPN model can be used to validate the environment assumptions and the requirements. The validation is performed by execution of the model during which traces of events and states are automatically generated and evaluated against the requirements....

  16. Tests of data quality, scaling assumptions, and reliability of the Danish SF-36

    DEFF Research Database (Denmark)

    Bjorner, J B; Damsgaard, M T; Watt, T

    1998-01-01

    We used general population data (n = 4084) to examine data completeness, response consistency, tests of scaling assumptions, and reliability of the Danish SF-36 Health Survey. We compared traditional multitrait scaling analyses to analyses using polychoric correlations and Spearman correlations...... discriminant validity, equal item-own scale correlations, and equal variances) were satisfactory in the total sample and in all subgroups. The SF-36 could discriminate between levels of health in all subgroups, but there were skewness, kurtosis, and ceiling effects in many subgroups (elderly people and people...

  17. What Were We Thinking? Five Erroneous Assumptions That Have Fueled Specialized Interventions for Adolescents Who Have Sexually Offended

    Science.gov (United States)

    Worling, James R.

    2013-01-01

    Since the early 1980s, five assumptions have influenced the assessment, treatment, and community supervision of adolescents who have offended sexually. In particular, interventions with this population have been informed by the assumptions that these youth are (i) deviant, (ii) delinquent, (iii) disordered, (iv) deficit-ridden, and (v) deceitful.…

  18. Testing the Assumptions of Sequential Bifurcation for Factor Screening (revision of CentER DP 2015-034)

    NARCIS (Netherlands)

    Shi, Wen; Kleijnen, J.P.C.

    2017-01-01

    Sequential bifurcation (or SB) is an efficient and effective factor-screening method; i.e., SB quickly identifies the important factors (inputs) in experiments with simulation models that have very many factors—provided the SB assumptions are valid. The specific SB assumptions are: (i) a second-order

  19. 75 FR 6857 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Valuing and Paying...

    Science.gov (United States)

    2010-02-12

    ... with valuation dates in March 2010. Interest assumptions are also published on PBGC's Web site ( http... paying plan benefits of terminating single-employer plans covered by title IV of the Employee Retirement... financial and annuity markets. These interest assumptions are found in two PBGC regulations: the regulation...

  20. 75 FR 2437 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Valuing and Paying...

    Science.gov (United States)

    2010-01-15

    ... with valuation dates in February 2010. Interest assumptions are also published on PBGC's Web site... and paying plan benefits of terminating single-employer plans covered by title IV of the Employee... in the financial and annuity markets. These interest assumptions are found in two PBGC regulations...

  1. 75 FR 19542 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Valuing and Paying...

    Science.gov (United States)

    2010-04-15

    ... amends the benefit payments regulation to adopt interest assumptions for plans with valuation dates in... single-employer plans covered by title IV of the Employee Retirement Income Security Act of 1974. The interest assumptions are intended to reflect current conditions in the financial and annuity markets. These...

  2. 75 FR 41091 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Valuing and Paying...

    Science.gov (United States)

    2010-07-15

    ... with valuation dates in August 2010. Interest assumptions are also published on PBGC's Web site ( http... paying plan benefits of terminating single-employer plans covered by title IV of the Employee Retirement... financial and annuity markets. These interest assumptions are found in two PBGC regulations: the regulation...

  3. 75 FR 49407 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Valuing and Paying...

    Science.gov (United States)

    2010-08-13

    ... with valuation dates in September 2010. Interest assumptions are also published on PBGC's Web site... and paying plan benefits of terminating single-employer plans covered by title IV of the Employee... in the financial and annuity markets. These interest assumptions are found in two PBGC regulations...

  4. 75 FR 27189 - Benefits Payable in Terminated Single-Employer Plans; Interest Assumptions for Valuing and Paying...

    Science.gov (United States)

    2010-05-14

    ... with valuation dates in June 2010. Interest assumptions are also published on PBGC's Web site ( http... paying plan benefits of terminating single-employer plans covered by title IV of the Employee Retirement... financial and annuity markets. These interest assumptions are found in two PBGC regulations: The regulation...

  5. When to refrain from using likelihood surface methods for geographic offender profiling: An ex ante test of assumptions

    NARCIS (Netherlands)

    van Koppen, M.V.; Elffers, H.; Ruiter, S.

    2011-01-01

    Likelihood surface methods for geographic offender profiling rely on several assumptions regarding the underlying location choice mechanism of an offender. We propose an ex ante test for checking whether a given set of crime locations is compatible with two necessary assumptions: circular symmetry

  6. Assessing the skill of hydrology models at simulating the water cycle in the HJ Andrews LTER: Assumptions, strengths and weaknesses

    Science.gov (United States)

    Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simu...

  7. 26 CFR 1.752-6 - Partnership assumption of partner's section 358(h)(3) liability after October 18, 1999, and...

    Science.gov (United States)

    2010-04-01

    ... general. If, in a transaction described in section 721(a), a partnership assumes a liability (defined in...) does not apply to an assumption of a liability (defined in section 358(h)(3)) by a partnership as part... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Partnership assumption of partner's section 358...

  8. Rethinking our assumptions about the evolution of bird song and other sexually dimorphic signals

    Directory of Open Access Journals (Sweden)

    J. Jordan Price

    2015-04-01

    Full Text Available Bird song is often cited as a classic example of a sexually selected ornament, in part because historically it has been considered a primarily male trait. Recent evidence that females also sing in many songbird species and that sexual dimorphism in song is often the result of losses in females rather than gains in males therefore appears to challenge our understanding of the evolution of bird song through sexual selection. Here I propose that these new findings do not necessarily contradict previous research, but rather they disagree with some of our assumptions about the evolution of sexual dimorphisms in general and female song in particular. These include misconceptions that current patterns of elaboration and diversity in each sex reflect past rates of change and that levels of sexual dimorphism necessarily reflect levels of sexual selection. Using New World blackbirds (Icteridae) as an example, I critically evaluate these past assumptions in light of new phylogenetic evidence. Understanding the mechanisms underlying such sexually dimorphic traits requires a clear understanding of their evolutionary histories. Only then can we begin to ask the right questions.

  9. The effects of behavioral and structural assumptions in artificial stock market

    Science.gov (United States)

    Liu, Xinghua; Gregor, Shirley; Yang, Jianmei

    2008-04-01

    Recent literature has developed the conjecture that important statistical features of stock price series, such as the fat tails phenomenon, may depend mainly on the market microstructure. This conjecture motivated us to investigate the roles of both the market microstructure and agent behavior with respect to high-frequency returns and daily returns. We developed two simple models to investigate this issue. The first one is a stochastic model with a clearing house microstructure and a population of zero-intelligence agents. The second one has more behavioral assumptions based on Minority Game and also has a clearing house microstructure. With the first model we found that a characteristic of the clearing house microstructure, namely the clearing frequency, can explain fat tail, excess volatility and autocorrelation phenomena of high-frequency returns. However, this feature does not cause the same phenomena in daily returns. So the Stylized Facts of daily returns depend mainly on the agents’ behavior. With the second model we investigated the effects of behavioral assumptions on daily returns. Our study implicates that the aspects which are responsible for generating the stylized facts of high-frequency returns and daily returns are different.
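
    The paper's two models are not reproduced here, but the contrast they reveal, fat tails in high-frequency returns that fade in daily returns, can be illustrated with a toy in which order-flow intensity at the clearing house varies slowly over time. The persistence and scale parameters are invented, and the mechanism is a stand-in rather than the authors' microstructure.

      import numpy as np
      from scipy.stats import kurtosis

      rng = np.random.default_rng(5)

      # Stand-in mechanism: per-clearing order flow whose intensity varies
      # slowly, mimicking bursts of trading activity. Mixing normals with a
      # time-varying variance yields fat-tailed high-frequency returns.
      n = 100_000
      log_vol = np.zeros(n)
      for t in range(1, n):                      # slowly varying activity level
          log_vol[t] = 0.98 * log_vol[t - 1] + 0.2 * rng.normal()
      hf_returns = np.exp(log_vol) * rng.normal(size=n)   # "high-frequency" returns

      daily_returns = hf_returns.reshape(-1, 100).sum(axis=1)  # aggregate 100 ticks

      print(f"high-frequency excess kurtosis: {kurtosis(hf_returns):6.2f}")
      print(f"daily excess kurtosis:          {kurtosis(daily_returns):6.2f}")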

  10. Sensitivity of Population Size Estimation for Violating Parametric Assumptions in Log-linear Models

    Directory of Open Access Journals (Sweden)

    Gerritse Susanna C.

    2015-09-01

    Full Text Available An important quality aspect of censuses is the degree of coverage of the population. When administrative registers are available undercoverage can be estimated via capture-recapture methodology. The standard approach uses the log-linear model that relies on the assumption that being in the first register is independent of being in the second register. In models using covariates, this assumption of independence is relaxed into independence conditional on covariates. In this article we describe, in a general setting, how sensitivity analyses can be carried out to assess the robustness of the population size estimate. We make use of log-linear Poisson regression using an offset, to simulate departure from the model. This approach can be extended to the case where we have covariates observed in both registers, and to a model with covariates observed in only one register. The robustness of the population size estimate is a function of implied coverage: as implied coverage is low the robustness is low. We conclude that it is important for researchers to investigate and report the estimated robustness of their population size estimate for quality reasons. Extensions are made to log-linear modeling in case of more than two registers and to the multiplier method.
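
    The mechanics can be sketched with a two-register example: under independence the unobserved cell follows from the margins, and an offset-style sensitivity parameter, here an assumed odds ratio phi between registers, shows how quickly the population size estimate moves once independence fails. All counts are invented.

      # Two-register capture-recapture: n10 in register A only, n01 in B only,
      # n11 in both; the missing cell n00 is inferred under independence.
      n11, n10, n01 = 6_000, 3_000, 2_000

      # Chapman-corrected Lincoln-Petersen estimator under independence
      nA, nB = n11 + n10, n11 + n01
      N_hat = (nA + 1) * (nB + 1) / (n11 + 1) - 1
      print(f"independence estimate: {N_hat:,.0f}")

      # Sensitivity analysis: an assumed odds ratio phi between the registers
      # simulates departure from independence, in the spirit of the log-linear
      # Poisson model with an offset.
      for phi in (0.8, 1.0, 1.25):
          n00 = phi * n10 * n01 / n11          # implied missing cell under phi
          print(f"phi = {phi:4.2f} -> N = {n11 + n10 + n01 + n00:,.0f}")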

  11. How do rigid-lid assumption affect LES simulation results at high Reynolds flows?

    Science.gov (United States)

    Khosronejad, Ali; Farhadzadeh, Ali; SBU Collaboration

    2017-11-01

    This research is motivated by the work of Kara et al., JHE, 2015. They employed LES to model flow around a model of abutment at a Re number of 27,000. They showed that first-order turbulence characteristics obtained with the rigid-lid (RL) assumption compare fairly well with those of the level-set (LS) method. Concerning the second-order statistics, however, their simulation results showed a significant dependence on the method used to describe the free surface. This finding can have important implications for open channel flow modeling. The Reynolds number for typical open channel flows, however, could be much larger than that of Kara et al.'s test case. Herein, we replicate the reported study by augmenting the geometric and hydraulic scales to reach a Re number one order of magnitude larger (~200,000). The Virtual Flow Simulator (VFS-Geophysics) model in its LES mode is used to simulate the test case using both RL and LS methods. The computational results are validated using measured flow and free-surface data from our laboratory experiments. Our goal is to investigate the effects of the RL assumption on both first-order and second-order statistics at the high Reynolds numbers that occur in natural waterways. Acknowledgment: Computational resources are provided by the Center of Excellence in Wireless & Information Technology (CEWIT) of Stony Brook University.

  12. Testing Mean Differences among Groups: Multivariate and Repeated Measures Analysis with Minimal Assumptions.

    Science.gov (United States)

    Bathke, Arne C; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne

    2018-03-22

    To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer's disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regards to some of the factors involved.

  13. Bias in regression coefficient estimates when assumptions for handling missing data are violated: a simulation study

    Directory of Open Access Journals (Sweden)

    Sander MJ van Kuijk

    2016-03-01

    Full Text Available Background: The purpose of this simulation study is to assess the performance of multiple imputation compared to complete case analysis when assumptions of missing data mechanisms are violated. Methods: The authors performed a stochastic simulation study to assess the performance of Complete Case (CC) analysis and Multiple Imputation (MI) under different missing data mechanisms: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). The study focused on the point estimation of regression coefficients and standard errors. Results: When data were MAR conditional on Y, CC analysis resulted in biased regression coefficients; they were all underestimated in our scenarios. In these scenarios, analysis after MI gave correct estimates. Yet, in case of MNAR, MI yielded biased regression coefficients, while CC analysis performed well. Conclusion: The authors demonstrated that MI was only superior to CC analysis in case of MCAR or MAR. In some scenarios CC may be superior to MI. Often it is not feasible to identify the reason why data in a given dataset are missing. Therefore, emphasis should be put on reporting the extent of missing values, the method used to address them, and the assumptions that were made about the mechanism that caused missing data.
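
    A compact replication of the headline result, complete-case bias under MAR conditional on Y, in NumPy. A single stochastic regression imputation stands in for full multiple imputation, which would repeat the draw and pool estimates across imputed datasets.

      import numpy as np

      rng = np.random.default_rng(6)
      n, beta = 50_000, 1.0

      x = rng.normal(size=n)
      y = beta * x + rng.normal(size=n)

      # MAR conditional on Y: X is more likely to be missing when Y is large
      miss = rng.uniform(size=n) < 1 / (1 + np.exp(-(y - 1)))
      x_obs = np.where(miss, np.nan, x)

      def slope(xv, yv):
          c = np.cov(xv, yv)
          return c[0, 1] / c[0, 0]

      cc = ~np.isnan(x_obs)
      print(f"true slope: {beta:.2f}")
      print(f"complete-case slope: {slope(x[cc], y[cc]):.2f}  (biased under MAR-on-Y)")

      # One stochastic regression imputation draw: impute X from Y using the
      # complete cases (X|Y is estimable there because missingness depends on Y only)
      b_xy = np.cov(y[cc], x[cc])[0, 1] / np.var(y[cc])
      a_xy = x[cc].mean() - b_xy * y[cc].mean()
      resid_sd = np.std(x[cc] - (a_xy + b_xy * y[cc]))
      x_imp = x_obs.copy()
      x_imp[~cc] = a_xy + b_xy * y[~cc] + rng.normal(scale=resid_sd, size=(~cc).sum())
      print(f"after imputation: {slope(x_imp, y):.2f}")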

  14. “Marginal land” for energy crops: Exploring definitions and embedded assumptions

    International Nuclear Information System (INIS)

    Shortall, O.K.

    2013-01-01

    The idea of using less productive or “marginal land” for energy crops is promoted as a way to overcome the previous land use controversies faced by biofuels. It is argued that marginal land use would not compete with food production, is widely available and would incur fewer environmental impacts. This term is notoriously vague, however, as are the details of how marginal land use for energy crops would work in practice. This paper explores definitions of the term “marginal land” in academic, consultancy, NGO, government and industry documents in the UK. It identifies three separate definitions of the term: land unsuitable for food production; ambiguous lower quality land; and economically marginal land. It probes these definitions further by exploring the technical, normative and political assumptions embedded within them. It finds that the first two definitions are normatively motivated: this land should be used to overcome controversies; the latter definition is predictive: this land is likely to be used. It is important that the different advantages, disadvantages and implications of the definitions are spelled out so definitions are not conflated to create unrealistic expectations about the role of marginal land in overcoming biofuels land use controversies. -- Highlights: •Qualitative methods were used to explore definitions of the term “marginal land”. •Three definitions were identified. •Two definitions focus on overcoming biomass land use controversies. •One definition predicts what land will be used for growing biomass. •Definitions contain problematic assumptions.

  15. Questioning the foundations of physics which of our fundamental assumptions are wrong?

    CERN Document Server

    Foster, Brendan; Merali, Zeeya

    2015-01-01

    The essays in this book look at way in which the fundaments of physics might need to be changed in order to make progress towards a unified theory. They are based on the prize-winning essays submitted to the FQXi essay competition “Which of Our Basic Physical Assumptions Are Wrong?”, which drew over 270 entries. As Nobel Laureate physicist Philip W. Anderson realized, the key to understanding nature’s reality is not anything “magical”, but the right attitude, “the focus on asking the right questions, the willingness to try (and to discard) unconventional answers, the sensitive ear for phoniness, self-deception, bombast, and conventional but unproven assumptions.” The authors of the eighteen prize-winning essays have, where necessary, adapted their essays for the present volume so as to (a) incorporate the community feedback generated in the online discussion of the essays, (b) add new material that has come to light since their completion and (c) to ensure accessibility to a broad audience of re...

  16. Uncovering Implicit Assumptions: a Large-Scale Study on Students' Mental Models of Diffusion

    Science.gov (United States)

    Stains, Marilyne; Sevian, Hannah

    2015-12-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in the gaseous medium. A large sample of data (N = 423) from students across grade 8 (age 13) through upper-level undergraduate was subjected to a cluster analysis to determine the main mental models present. The cluster analysis resulted in a reduced data set (N = 308), and then mental models were ascertained from robust clusters. The mental models that emerged from analysis were triangulated through interview data and characterised according to underlying implicit assumptions that guide and constrain thinking about diffusion of a solute in a gaseous medium. Impacts of students' level of preparation in science and relationships of mental models to science disciplines studied by students were examined. Implications are discussed for the value of this approach to identify typical mental models and the sets of implicit assumptions that constrain them.

  17. Retrieval of Polar Stratospheric Cloud Microphysical Properties from Lidar Measurements: Dependence on Particle Shape Assumptions

    Science.gov (United States)

    Reichardt, J.; Reichardt, S.; Yang, P.; McGee, T. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    A retrieval algorithm has been developed for the microphysical analysis of polar stratospheric cloud (PSC) optical data obtained using lidar instrumentation. The parameterization scheme of the PSC microphysical properties allows for coexistence of up to three different particle types with size-dependent shapes. The finite difference time domain (FDTD) method has been used to calculate optical properties of particles with maximum dimensions equal to or less than 2 μm and with shapes that can be considered more representative of PSCs on the scale of individual crystals than the commonly assumed spheroids. Specifically, these are irregular and hexagonal crystals. Selection of the optical parameters that are input to the inversion algorithm is based on a potential data set such as that gathered by two of the lidars on board the NASA DC-8 during the Stratospheric Aerosol and Gas Experiment (SAGE) III Ozone Loss and Validation Experiment (SOLVE) campaign in winter 1999/2000: the Airborne Raman Ozone and Temperature Lidar (AROTEL) and the NASA Langley Differential Absorption Lidar (DIAL). The microphysical retrieval algorithm has been applied to study how particle shape assumptions affect the inversion of lidar data measured in lee-wave PSCs. The model simulations show that under the assumption of spheroidal particle shapes, PSC surface and volume density are systematically smaller than the FDTD-based values by, respectively, approximately 10-30% and approximately 5-23%.

  18. Temporal Distinctiveness in Task Switching: Assessing the Mixture-Distribution Assumption

    Directory of Open Access Journals (Sweden)

    James A Grange

    2016-02-01

    Full Text Available In task switching, increasing the response-cue interval (RCI) has been shown to reduce the switch cost. This has been attributed to a time-based decay process influencing the activation of memory representations of tasks (task-sets). Recently, an alternative account based on interference rather than decay has been successfully applied to these data (Horoufchin et al., 2011). In this account, variation of the RCI is thought to influence the temporal distinctiveness (TD) of episodic traces in memory, thus affecting their retrieval probability. This can affect performance as retrieval probability influences response time: If retrieval succeeds, responding is fast due to positive priming; if retrieval fails, responding is slow, due to having to perform the task via a slow algorithmic process. This account, and a recent formal model (Grange & Cross, 2015), makes the strong prediction that all RTs are a mixture of one of two processes: a fast process when retrieval succeeds, and a slow process when retrieval fails. The present paper assesses the evidence for this mixture-distribution assumption in TD data. In a first section, statistical evidence for mixture-distributions is found using the fixed-point property test. In a second section, a mathematical process model with mixture-distributions at its core is fitted to the response time distribution data. Both approaches provide good evidence in support of the mixture-distribution assumption, and thus support temporal distinctiveness accounts of the data.
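
    The mixture-distribution assumption can be made concrete with a two-component fit: if every RT comes either from a fast retrieval-success process or a slow algorithmic process, an EM fit on log-RTs should recover both components and the mixing weight. The generating parameters are invented, and a Gaussian mixture on log-RT is a simplification of the formal model cited above.

      import numpy as np

      rng = np.random.default_rng(7)

      def norm_pdf(x, m, s):
          return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

      # Synthetic RTs under the mixture assumption: fast responses when episodic
      # retrieval succeeds, slow algorithmic responses when it fails.
      n, p_fast = 5_000, 0.7
      rt = np.where(rng.uniform(size=n) < p_fast,
                    rng.lognormal(np.log(0.45), 0.15, size=n),
                    rng.lognormal(np.log(0.90), 0.15, size=n))

      # EM for a two-component Gaussian mixture on log-RT
      x = np.log(rt)
      w, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
      for _ in range(200):
          r_num = w * norm_pdf(x, mu[0], sd[0])                  # E-step
          r = r_num / (r_num + (1 - w) * norm_pdf(x, mu[1], sd[1]))
          w = r.mean()                                           # M-step
          mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
          sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=r)),
                         np.sqrt(np.average((x - mu[1]) ** 2, weights=1 - r))])

      print(f"estimated retrieval-success weight: {w:.2f} (true {p_fast})")
      print(f"component RT means: {np.exp(mu).round(2)} s (true 0.45, 0.90)")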

  19. Pore Formation During Solidification of Aluminum: Reconciliation of Experimental Observations, Modeling Assumptions, and Classical Nucleation Theory

    Science.gov (United States)

    Yousefian, Pedram; Tiryakioğlu, Murat

    2018-02-01

    An in-depth discussion of pore formation is presented in this paper by first reinterpreting in situ observations reported in the literature as well as assumptions commonly made to model pore formation in aluminum castings. The physics of pore formation is reviewed through theoretical fracture pressure calculations based on classical nucleation theory for homogeneous and heterogeneous nucleation, with and without dissolved gas, i.e., hydrogen. Based on the fracture pressure for aluminum, critical pore size and the corresponding probability of vacancies clustering to form that size have been calculated using thermodynamic data reported in the literature. Calculations show that it is impossible for a pore to nucleate either homogeneously or heterogeneously in aluminum, even with dissolved hydrogen. The formation of pores in aluminum castings can only be explained by inflation of entrained surface oxide films (bifilms) under reduced pressure and/or with dissolved gas, which involves only growth, avoiding any nucleation problem. This mechanism is consistent with the reinterpretations of in situ observations as well as the assumptions made in the literature to model pore formation.

  20. Assessing moderated mediation in linear models requires fewer confounding assumptions than assessing mediation.

    Science.gov (United States)

    Loeys, Tom; Talloen, Wouter; Goubert, Liesbet; Moerkerke, Beatrijs; Vansteelandt, Stijn

    2016-11-01

    It is well known from the mediation analysis literature that the identification of direct and indirect effects relies on strong assumptions of no unmeasured confounding. Even in randomized studies the mediator may still be correlated with unobserved prognostic variables that affect the outcome, in which case the mediator's role in the causal process may not be inferred without bias. In the behavioural and social science literature very little attention has been given so far to the causal assumptions required for moderated mediation analysis. In this paper we focus on the index for moderated mediation, which measures by how much the mediated effect is larger or smaller for varying levels of the moderator. We show that in linear models this index can be estimated without bias in the presence of unmeasured common causes of the moderator, mediator and outcome under certain conditions. Importantly, one can thus use the test for moderated mediation to support evidence for mediation under less stringent confounding conditions. We illustrate our findings with data from a randomized experiment assessing the impact of being primed with social deception upon observer responses to others' pain, and from an observational study of individuals who ended a romantic relationship assessing the effect of attachment anxiety during the relationship on mental distress 2 years after the break-up. © 2016 The British Psychological Society.
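
    In linear models the index of moderated mediation reduces to a product of regression coefficients: a3, the X-by-moderator interaction in the mediator model, times b, the mediator's effect on the outcome. The simulation below uses invented coefficients and statsmodels OLS.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(8)
      n = 20_000

      # Simulated moderated mediation: X -> M -> Y, with the X -> M path
      # depending linearly on moderator W (true index = 0.5 * 0.8 = 0.4)
      x = rng.normal(size=n)
      w = rng.normal(size=n)
      m = (0.4 + 0.5 * w) * x + rng.normal(size=n)
      y = 0.8 * m + 0.3 * x + rng.normal(size=n)

      # Mediator model: M ~ X + W + X:W ; outcome model: Y ~ M + X + W
      med = sm.OLS(m, sm.add_constant(np.column_stack([x, w, x * w]))).fit()
      out = sm.OLS(y, sm.add_constant(np.column_stack([m, x, w]))).fit()

      a3 = med.params[3]   # coefficient on the X:W interaction
      b = out.params[1]    # coefficient on M
      print(f"index of moderated mediation (a3*b): {a3 * b:.3f}  (true 0.400)")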

  1. Omnibus tests of the martingale assumption in the analysis of recurrent failure time data.

    Science.gov (United States)

    Jones, C L; Harrington, D P

    2001-06-01

    The Andersen-Gill multiplicative intensity (MI) model is well-suited to the analysis of recurrent failure time data. The fundamental assumption of the MI model is that the process Mi(t) for subjects i = 1, ..., n, defined to be the difference between a subject's counting process and compensator, i.e., Ni(t) - Ai(t); t > 0, is a martingale with respect to some filtration. We propose omnibus procedures for testing this assumption. The methods are based on transformations of the estimated martingale residual process Mi(t), a function of consistent estimates of the log-intensity ratios and the baseline cumulative hazard. Under a correctly specified model, the expected value of Mi(t) is approximately equal to zero with approximately uncorrelated increments. These properties are exploited in the proposed testing procedures. We examine the effects of censoring and covariate effects on the operating characteristics of the proposed methods via simulation. The procedures are most sensitive to the omission of a time-varying continuous covariate. We illustrate use of the methods in an analysis of data from a clinical trial involving patients with chronic granulatomous disease.
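
    The zero-mean property that the proposed omnibus tests exploit can be checked directly in a covariate-free toy example with a hand-rolled Nelson-Aalen estimator; the paper's setting, with covariates and the full MI model, is more general.

      import numpy as np

      rng = np.random.default_rng(9)

      # Simulated censored survival data (no covariates, for simplicity)
      n = 1_000
      t_event = rng.exponential(scale=1.0, size=n)
      t_cens = rng.exponential(scale=2.0, size=n)
      time = np.minimum(t_event, t_cens)
      delta = (t_event <= t_cens).astype(int)    # 1 = observed failure

      # Nelson-Aalen estimate of the cumulative hazard
      order = np.argsort(time)
      time_s, delta_s = time[order], delta[order]
      at_risk = n - np.arange(n)                 # risk-set size at each ordered time
      cum_haz = np.cumsum(delta_s / at_risk)

      # Martingale residuals: M_i = delta_i - H(T_i); they should average
      # near zero when the intensity model is correctly specified
      residuals = delta_s - cum_haz
      print(f"mean martingale residual: {residuals.mean():+.4f}")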

  2. Impact of theoretical assumptions in the determination of the neutrino effective number from future CMB measurements

    Science.gov (United States)

    Capparelli, Ludovico; Di Valentino, Eleonora; Melchiorri, Alessandro; Chluba, Jens

    2018-03-01

    One of the major goals of future cosmic microwave background (CMB) measurements is the accurate determination of the effective number of neutrinos Neff. Reaching an experimental sensitivity of ΔNeff = 0.013 could indeed falsify the presence of any nonstandard relativistic particles at 95% C.L. In this paper, we test how this future constraint can be affected by the removal of two common assumptions: a negligible running of the inflationary spectral index nrun and a precise determination of the neutron lifetime τn. We first show that the constraints on Neff could be significantly biased by the unaccounted presence of a running of the spectral index. Considering the Stage-IV experiment, a negative running of dn/d ln k = -0.002 could mimic a positive variation of ΔNeff = 0.03. Moreover, given the current discrepancies between experimental measurements of the neutron lifetime τn, we show that the assumption of a conservative error of Δτn ≈ 10 s could cause a systematic error of ΔNeff = 0.02. Complementary cosmological constraints on the running of the spectral index and a solution to the neutron lifetime discrepancy are therefore needed for an accurate and reliable future CMB bound on Neff at the percent level.

  3. Tolerance values of benthic macroinvertebrates for stream biomonitoring: assessment of assumptions underlying scoring systems worldwide.

    Science.gov (United States)

    Chang, Feng-Hsun; Lawrence, Justin E; Rios-Touma, Blanca; Resh, Vincent H

    2014-04-01

    Tolerance values (TVs) based on benthic macroinvertebrates are one of the most widely used tools for monitoring the biological impacts of water pollution, particularly in streams and rivers. We compiled TVs of benthic macroinvertebrates from 29 regions around the world to test 11 basic assumptions about pollution tolerance, that: (1) Arthropoda are … (5) other macroinvertebrate taxa < Isopoda + Gastropoda + Hirudinea; (6) Ephemeroptera + Plecoptera + Trichoptera (EPT) < Odonata + Coleoptera + Heteroptera (OCH); (7) EPT < non-EPT insects; (8) Diptera < Insecta; (9) Bivalvia < Gastropoda; (10) Baetidae < other Ephemeroptera; and (11) Hydropsychidae < other Trichoptera. We found that the first eight of these 11 assumptions were supported despite regional variability. In addition, we examined the effect of Best Professional Judgment (BPJ) and non-independence of TVs among countries by performing all analyses using subsets of the original dataset. These subsets included a group based on those systems using TVs that were derived from techniques other than BPJ, and groups based on the methods used for TV assignment. The results obtained from these subsets and the entire dataset are similar. We also made seven a priori hypotheses about the regional similarity of TVs based on geography. Only one of these was supported. Development of TVs and the reporting of how they are assigned need to be more rigorous and better described.

  4. Implications of genome wide association studies for addiction: are our a priori assumptions all wrong?

    Science.gov (United States)

    Hall, F Scott; Drgonova, Jana; Jain, Siddharth; Uhl, George R

    2013-12-01

    Substantial genetic contributions to addiction vulnerability are supported by data from twin studies, linkage studies, candidate gene association studies and, more recently, Genome Wide Association Studies (GWAS). Parallel to this work, animal studies have attempted to identify the genes that may contribute to responses to addictive drugs and addiction liability, initially focusing upon genes for the targets of the major drugs of abuse. These studies identified genes/proteins that affect responses to drugs of abuse; however, this does not necessarily mean that variation in these genes contributes to the genetic component of addiction liability. One of the major problems with initial linkage and candidate gene studies was an a priori focus on the genes thought to be involved in addiction based upon the known contributions of those proteins to drug actions, making the identification of novel genes unlikely. The GWAS approach is systematic and agnostic to such a priori assumptions. From the numerous GWAS now completed, several conclusions may be drawn: (1) addiction is highly polygenic, with each allelic variant contributing in a small, additive fashion to addiction vulnerability; (2) classes of genes that were unexpected, given our a priori assumptions, are the most important in explaining addiction vulnerability; (3) although substantial genetic heterogeneity exists, there is substantial convergence of GWAS signals on particular genes. This review traces the history of this research, from initial transgenic mouse models based upon candidate gene and linkage studies, through the progression of GWAS for addiction and nicotine cessation, to the current human and transgenic mouse studies post-GWAS. © 2013.

  5. Assessing women's sexuality after cancer therapy: checking assumptions with the focus group technique.

    Science.gov (United States)

    Bruner, D W; Boyd, C P

    1999-12-01

    Cancer and cancer therapies impair sexual health in a multitude of ways. The promotion of sexual health is therefore vital for preserving quality of life and is an integral part of total or holistic cancer management. To provide holistic care, nursing requires research that is meaningful to patients as well as to the profession, so that educational and interventional studies to promote sexual health and coping can be developed. To obtain meaningful research data, instruments that are reliable, valid, and pertinent to patients' needs are required. Several sexual functioning instruments were reviewed for this study and found to be lacking in either a conceptual foundation or psychometric validation. Without a defined conceptual framework, authors of the instruments must have made certain assumptions regarding what women undergoing cancer therapy experience and what they perceive as important. To check these assumptions before assessing women's sexuality after cancer therapies in a larger study, a pilot study was designed to compare, using the focus group technique, what women experience and perceive as important regarding their sexuality with what is assessed in several currently available research instruments. Based on the focus group findings, current sexual functioning questionnaires may be lacking in pertinent areas of concern for women treated for breast or gynecologic malignancies. Better conceptual foundations may help future questionnaire design. Self-regulation theory may provide an acceptable conceptual framework from which to develop a sexual functioning questionnaire.

  6. Vessel contents of leaves after excision: a test of the Scholander assumption.

    Science.gov (United States)

    Tyree, Melvin T; Cochard, Herve

    2003-09-01

    When petioles of transpiring leaves are cut in the air, according to the 'Scholander assumption', the vessels cut open should fill with air as the water is drained away by tissue rehydration and/or continued transpiration. The distribution of air-filled vessels versus distance from the cut surface should match the distribution of lengths of 'open vessels', i.e. vessels cut open when the leaf is excised. A paint perfusion method was used to estimate the length distribution of open vessels, and this was compared with the observed distribution of embolisms by the cryo-SEM method. In the cryo-SEM method, petioles are frozen in liquid nitrogen soon after the petiole is cut. The petioles are then cut at different distances from the original cut surface while frozen and examined in a cryo-SEM facility, where it is easy to distinguish vessels filled with air from those filled with ice. The Scholander assumption was also confirmed by a hydraulic method, which avoided possible freezing artefacts. In petioles of sunflower (Helianthus annuus L.) the distribution of embolized vessels agrees with expectations. This is in contrast to a previous study on sunflower where cryo-SEM results did not agree with expectations. The reasons for this disagreement are suggested, but further study is required for a full elucidation.

  7. Mass casualty incidents: a review of triage severity planning assumptions.

    Science.gov (United States)

    Hunt, Paul

    2017-12-01

    Recent events involving a significant number of casualties have emphasised the importance of appropriate preparation for receiving hospitals, especially Emergency Departments, during the initial response phase of a major incident. Development of a mass casualty resilience and response framework in the Northern Trauma Network included a review of existing planning assumptions in order to ensure effective resource allocation, both in local receiving hospitals and system-wide. Existing planning assumptions regarding categorisation by triage level are generally stated as a ratio for P1:P2:P3 of 25%:25%:50% of the total number of injured survivors. This may significantly over- or underestimate the number in each level of severity in the case of a large-scale incident. A pilot literature review was conducted of the available evidence from historical incidents in order to gather data regarding the confirmed number of overall casualties, 'critical' cases, admitted cases, and non-urgent or discharged cases. These data were collated and grouped by mechanism in order to calculate an appropriate severity ratio for each incident type. 12 articles regarding mass casualty incidents from the last two decades were identified, covering three main incident types: (1) mass transportation crash, (2) building fire, and (3) bomb and related terrorist attacks, together involving a total of 3615 injured casualties. The overall mortality rate was calculated as 12.3%. Table 1 summarises the available patient casualty data from each of the specific incidents reported and the calculated proportions of critical ('P1'), admitted ('P2'), and non-urgent or ambulatory cases ('P3'). Despite the heterogeneity of data and range of incident types there is sufficient evidence to suggest that current planning assumptions are incorrect and a more refined model is required. An important finding is the variation in the proportion of critical cases depending upon the mechanism. For example, a greater than expected proportion
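
    To make the planning figures concrete, here is a small worked example; the 25:25:50 split and the pooled totals are taken from the abstract above, while everything else is illustrative arithmetic.

```python
# Conventional planning assumption: P1:P2:P3 = 25%:25%:50% of injured survivors.
injured = 3615                      # pooled casualties from the 12 reviewed articles
ratio = {"P1 (critical)": 0.25, "P2 (admitted)": 0.25, "P3 (ambulatory)": 0.50}

for level, share in ratio.items():
    print(f"{level}: {share * injured:.0f} expected under the 25/25/50 assumption")

# The review's pooled mortality rate applied to the same total, for scale:
print(f"overall mortality: {0.123 * injured:.0f} deaths (12.3% of {injured})")
```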

  8. Comparison of risk-dominant scenario assumptions for several TRU waste facilities in the DOE complex

    International Nuclear Information System (INIS)

    Foppe, T.L.; Marx, D.R.

    1999-01-01

    In order to gain a risk management perspective, the DOE Rocky Flats Field Office (RFFO) initiated a survey of other DOE sites regarding risks from potential accidents associated with transuranic (TRU) storage and/or processing facilities. Recently-approved authorization basis documents at the Rocky Flats Environmental Technology Site (RFETS) have been based on the DOE Standard 3011 risk assessment methodology, with three qualitative estimates of frequency of occurrence and quantitative estimates of radiological consequences to the collocated worker and the public binned into three severity levels. Risk Class 1 and 2 events, after application of controls to prevent or mitigate the accident, are designated as risk-dominant scenarios. Accident Evaluation Guidelines for selection of Technical Safety Requirements (TSRs) are based on the frequency and consequence bin assignments to identify controls that can be credited to reduce risk to Risk Class 3 or 4, or that are credited for Risk Class 1 and 2 scenarios that cannot be further reduced. This methodology resulted in several risk-dominant scenarios for either the collocated worker or the public that warranted consideration of whether additional controls should be implemented. RFFO requested the survey because of these high estimates of risks, which are primarily due to design characteristics of RFETS TRU waste facilities (i.e., Butler-type buildings without a ventilation and filtration system, and a relatively short distance to the Site boundary). Accident analysis methodologies and key assumptions are being compared for the DOE sites responding to the survey. This includes the type of accidents that are risk dominant (e.g., drum explosion, material handling breach, fires, natural phenomena, external events, etc.), source term evaluation (e.g., radionuclide material-at-risk, chemical and physical form, damage ratio, airborne release fraction, respirable fraction, leakpath factors), dispersion analysis (e.g., meteorological

  9. Assumptions about footprint layer heights influence the quantification of emission sources: a case study for Cyprus

    Science.gov (United States)

    Hüser, Imke; Harder, Hartwig; Heil, Angelika; Kaiser, Johannes W.

    2017-09-01

    Lagrangian particle dispersion models (LPDMs) in backward mode are widely used to quantify the impact of transboundary pollution on downwind sites. Most LPDM applications count particles with a technique that introduces a so-called footprint layer (FL) with constant height, in which passing air tracer particles are assumed to be affected by surface emissions. The mixing layer dynamics are represented by the underlying meteorological model. This particle counting technique implicitly assumes that the atmosphere is well mixed in the FL. We have performed backward trajectory simulations with the FLEXPART model starting at Cyprus to calculate the sensitivity to emissions of upwind pollution sources. The emission sensitivity is used to quantify source contributions at the receptor and support the interpretation of ground measurements carried out during the CYPHEX campaign in July 2014. Here we analyse the effects of different constant and dynamic FL height assumptions. The results show that calculations with FL heights of 100 and 300 m yield similar, though still discernibly different, results. Comparison of calculations with FL heights constant at 300 m and dynamically following the planetary boundary layer (PBL) height exhibits systematic differences, with daytime and night-time sensitivity differences compensating for each other. The differences at daytime, when a well-mixed PBL can be assumed, indicate that residual inaccuracies in the representation of the mixing layer dynamics in the trajectories may introduce errors in the impact assessment on downwind sites. Emissions from vegetation fires are mixed up by pyrogenic convection, which is not represented in FLEXPART. Neglecting this convection may lead to severe over- or underestimations of the downwind smoke concentrations. Introducing an extreme fire source from a different year in our study period and using fire-observation-based plume heights as reference, we find an overestimation of more than 60 % by the constant FL height
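
    The particle-counting technique described above reduces, in essence, to accumulating the residence time that back-trajectory particles spend below the FL top. The following is a minimal sketch of that idea on synthetic data; the particle heights, the diurnal PBL cycle, and all parameter values are hypothetical, and this is not FLEXPART code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_steps, dt = 1000, 48, 3600.0   # hourly positions over 48 h

# Hypothetical back-trajectory particle heights (m) under a diurnal mixing cycle.
hours = np.arange(n_steps)
pbl = 400.0 + 900.0 * np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)
heights = rng.uniform(0, 1, (n_particles, n_steps)) * (pbl + 300.0)

def emission_sensitivity(fl_height):
    """Mean residence time (seconds) spent below the footprint-layer top."""
    below = heights < fl_height          # broadcasts over particles x time steps
    return below.sum() * dt / n_particles

print("constant FL 100 m :", emission_sensitivity(100.0))
print("constant FL 300 m :", emission_sensitivity(300.0))
print("dynamic FL (=PBL) :", emission_sensitivity(pbl))
```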

  10. Bioaccumulation factors and the steady state assumption for cesium isotopes in aquatic foodwebs near nuclear facilities

    International Nuclear Information System (INIS)

    Rowan, D.J.

    2013-01-01

    Steady state approaches, such as transfer coefficients or bioaccumulation factors, are commonly used to model the bioaccumulation of 137Cs in aquatic foodwebs from routine operations and releases from nuclear generating stations and other nuclear facilities. Routine releases from nuclear generating stations and facilities, however, often consist of pulses as liquid waste is stored, analyzed to ensure regulatory compliance and then released. The effect of repeated pulse releases on the steady state assumption inherent in the bioaccumulation factor approach has not been evaluated. In this study, I examine the steady state assumption for aquatic biota by analyzing data for two cesium isotopes in the same biota, one isotope in steady state (stable 133Cs) from geologic sources and the other released in pulses (137Cs) from reactor operations. I also compare 137Cs bioaccumulation factors for similar upstream populations from the same system exposed solely to weapon test 137Cs, and assumed to be in steady state. The steady state assumption appears to be valid for small organisms at lower trophic levels (zooplankton, rainbow smelt and 0+ yellow perch) but not for older and larger fish at higher trophic levels (walleye). Attempts to account for previous exposure and retention through a biokinetics approach had a similar effect on steady state, upstream and non-steady state, downstream populations of walleye, but were ineffective in explaining the more or less constant deviation between fish with steady state exposures and non-steady state exposures of about 2-fold for all age classes of walleye. These results suggest that for large, piscivorous fish, repeated exposure to short duration, pulse releases leads to much higher 137Cs BAFs than expected from 133Cs BAFs for the same fish or 137Cs BAFs for similar populations in the same system not impacted by reactor releases. These results suggest that the steady state approach should be used with caution in any situation

  11. Bioaccumulation factors and the steady state assumption for cesium isotopes in aquatic foodwebs near nuclear facilities.

    Science.gov (United States)

    Rowan, D J

    2013-07-01

    Steady state approaches, such as transfer coefficients or bioaccumulation factors, are commonly used to model the bioaccumulation of (137)Cs in aquatic foodwebs from routine operations and releases from nuclear generating stations and other nuclear facilities. Routine releases from nuclear generating stations and facilities, however, often consist of pulses as liquid waste is stored, analyzed to ensure regulatory compliance and then released. The effect of repeated pulse releases on the steady state assumption inherent in the bioaccumulation factor approach has not been evaluated. In this study, I examine the steady state assumption for aquatic biota by analyzing data for two cesium isotopes in the same biota, one isotope in steady state (stable (133)Cs) from geologic sources and the other released in pulses ((137)Cs) from reactor operations. I also compare (137)Cs bioaccumulation factors for similar upstream populations from the same system exposed solely to weapon test (137)Cs, and assumed to be in steady state. The steady state assumption appears to be valid for small organisms at lower trophic levels (zooplankton, rainbow smelt and 0+ yellow perch) but not for older and larger fish at higher trophic levels (walleye). Attempts to account for previous exposure and retention through a biokinetics approach had a similar effect on steady state, upstream and non-steady state, downstream populations of walleye, but were ineffective in explaining the more or less constant deviation between fish with steady state exposures and non-steady state exposures of about 2-fold for all age classes of walleye. These results suggest that for large, piscivorous fish, repeated exposure to short duration, pulse releases leads to much higher (137)Cs BAFs than expected from (133)Cs BAFs for the same fish or (137)Cs BAFs for similar populations in the same system not impacted by reactor releases. These results suggest that the steady state approach should be used with caution in any
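
    Both versions of this record turn on the same mechanism: under linear one-compartment biokinetics, a constant exposure and a pulsed exposure with the same time-averaged water concentration produce similar average body burdens, but the instantaneous burden, and hence a BAF computed from a spot water sample, can differ substantially. A minimal sketch under hypothetical rate constants, not the study's fitted values:

```python
import numpy as np

# One-compartment biokinetics: dC/dt = k_u * C_w(t) - k_e * C
k_u, k_e = 0.05, 0.01          # hypothetical uptake and elimination rates (1/day)
dt, days = 0.1, 365.0
t = np.arange(0.0, days, dt)

# Scenario A: constant water concentration (steady-state exposure).
cw_const = np.ones_like(t)
# Scenario B: the same time-averaged concentration delivered as monthly 1-day pulses.
cw_pulse = np.where((t % 30.0) < 1.0, 30.0, 0.0)

def body_burden(cw):
    c = np.zeros_like(t)
    for i in range(1, len(t)):        # forward-Euler integration of the ODE
        c[i] = c[i-1] + dt * (k_u * cw[i-1] - k_e * c[i-1])
    return c

ss = k_u / k_e                        # analytic steady state for scenario A
print("steady-state burden:", round(body_burden(cw_const)[-1], 3), "(analytic:", ss, ")")
print("pulse-exposure burden at day 365:", round(body_burden(cw_pulse)[-1], 3))
```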

  12. Tale of Two Courthouses: A Critique of the Underlying Assumptions in Chronic Disease Self-Management for Aboriginal People

    Directory of Open Access Journals (Sweden)

    Isabelle Ellis

    2009-12-01

    Full Text Available This article reviews the assumptions that underpin the commonly implemented Chronic Disease Self-Management models, namely: that there is a clear set of instructions for patients to comply with, which all health care providers agree with; and that the health care provider and the patient agree with the chronic disease self-management plan that was developed as part of a consultation. These assumptions are evaluated for their validity in the remote health care context, particularly for Aboriginal people. They are found to lack validity in this context; an alternative model to enhance chronic disease care is therefore proposed.

  13. Accelerated Gillespie Algorithm for Gas–Grain Reaction Network Simulations Using Quasi-steady-state Assumption

    Science.gov (United States)

    Chang, Qiang; Lu, Yang; Quan, Donghui

    2017-12-01

    Although the Gillespie algorithm is accurate in simulating gas–grain reaction networks, its computational cost has so far been so high that it cannot be used to simulate chemical reaction networks that include molecular hydrogen accretion or the chemical evolution of protoplanetary disks. We present an accelerated Gillespie algorithm that is based on a quasi-steady-state assumption, with the further approximation that the population distribution of transient species depends only on the accretion and desorption processes. The new algorithm is tested against a few reaction networks that are simulated by the regular Gillespie algorithm. We found that the less likely it is that transient species are formed and destroyed on grain surfaces, the more accurate the new method is. We also apply the new method to simulate reaction networks that include molecular hydrogen accretion. The results show that surface chemical reactions involving molecular hydrogen are not important for the production of surface species under the standard physical conditions of dense molecular clouds.
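
    For reference, the regular (exact) Gillespie algorithm that the paper accelerates draws an exponential waiting time from the total propensity and then selects a reaction channel in proportion to its propensity. Below is a minimal sketch on a toy surface network; the three channels and all rate constants are hypothetical, and the quasi-steady-state acceleration itself is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy network: accretion 0 -> H (k_acc), desorption H -> 0 (k_des),
# surface reaction H + H -> H2 (k_rxn).
k_acc, k_des, k_rxn = 1.0, 0.3, 0.05
n_H, n_H2, t, t_end = 0, 0, 0.0, 1000.0

while t < t_end:
    # Propensity of each channel given the current population.
    a = np.array([k_acc, k_des * n_H, k_rxn * n_H * max(n_H - 1, 0) / 2.0])
    a_tot = a.sum()
    # Exact stochastic simulation: exponential waiting time, then pick a channel.
    t += rng.exponential(1.0 / a_tot)
    ch = rng.choice(3, p=a / a_tot)
    if ch == 0:
        n_H += 1          # accretion
    elif ch == 1:
        n_H -= 1          # desorption
    else:
        n_H -= 2          # two H atoms react
        n_H2 += 1

print("final populations: H =", n_H, " H2 =", n_H2)
```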

  14. Bootstrapping realized volatility and realized beta under a local Gaussianity assumption

    DEFF Research Database (Denmark)

    Hounyo, Ulrich

    The main contribution of this paper is to propose a new bootstrap method for statistics based on high frequency returns. The new method exploits the local Gaussianity and the local constancy of volatility of high frequency returns, two assumptions that can simplify inference in the high frequency...... context, as recently explained by Mykland and Zhang (2009). Our main contributions are as follows. First, we show that the local Gaussian bootstrap is first-order consistent when used to estimate the distributions of realized volatility and realized betas. Second, we show that the local Gaussian bootstrap...... matches accurately the first four cumulants of realized volatility, implying that this method provides third-order refinements. This is in contrast with the wild bootstrap of Gonçalves and Meddahi (2009), which is only second-order correct. Third, we show that the local Gaussian bootstrap is able...
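
    A caricature of the local Gaussianity idea, not Hounyo's exact procedure: within local blocks, returns are redrawn as Gaussian variates with the block's empirical variance, and the statistic is recomputed on each redraw. All parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n, block = 390, 30                     # e.g. 1-min returns over one trading day

# Hypothetical returns with slowly time-varying volatility (locally Gaussian).
sigma = 0.01 * (1.0 + 0.5 * np.sin(np.linspace(0, np.pi, n)))
r = sigma * rng.standard_normal(n)

rv_hat = np.sum(r ** 2)                # realized volatility (realized variance)

def local_gaussian_bootstrap(returns, block, draws=2000):
    rv_star = np.empty(draws)
    blocks = returns.reshape(-1, block)
    s2 = blocks.var(axis=1, keepdims=True)   # local variance within each block
    for b in range(draws):
        r_star = np.sqrt(s2) * rng.standard_normal(blocks.shape)
        rv_star[b] = np.sum(r_star ** 2)
    return rv_star

rv_star = local_gaussian_bootstrap(r, block)
lo, hi = np.percentile(rv_star, [2.5, 97.5])
print(f"RV = {rv_hat:.6f}, 95% bootstrap interval [{lo:.6f}, {hi:.6f}]")
```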

  15. Washington International Renewable Energy Conference 2008 Pledges: Methodology and Assumptions Summary

    Energy Technology Data Exchange (ETDEWEB)

    Babiuch, B.; Bilello, D. E.; Cowlin, S. C.; Mann, M.; Wise, A.

    2008-08-01

    The 2008 Washington International Renewable Energy Conference (WIREC) was held in Washington, D.C., from March 4-6, 2008, and involved nearly 9,000 people from 125 countries. The event brought together worldwide leaders in renewable energy (RE) from governments, international organizations, nongovernmental organizations, and the private sector to discuss the role that renewables can play in alleviating poverty, growing economies, and passing on a healthy planet to future generations. The conference concluded with more than 140 governments, international organizations, and private-sector representatives pledging to advance the uptake of renewable energy. The U.S. government authorized the National Renewable Energy Laboratory (NREL) to estimate the carbon dioxide (CO2) savings that would result from the pledges made at the 2008 conference. This report describes the methodology and assumptions used by NREL in quantifying the potential CO2 reductions derived from those pledges.

  16. HARDINESS, WORLD ASSUMPTIONS, MOTIVATION OF ATHLETES OF CONTACT AND NOT CONTACT KINDS OF SPORT

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Molchanova

    2017-04-01

    Full Text Available An investigation of the personal psychological characteristics of athletes in contact (freestyle wrestling) and non-contact (archery) kinds of sport was carried out. Pronounced differences in hardiness, world assumptions, and motivation for doing sport were obtained. In particular, archery athletes show higher hardiness and view the world more positively than wrestlers, while endorsing the motives 'success for life quality and skills' and 'physical perfection' less strongly. Better coping under permanently stressful conditions is thus predicted for athletes in non-contact kinds of sport. The results are of practical importance for the counselling work of sport psychologists and could, moreover, serve as a basis for training programmes and stress-overcoming programmes.

  17. Cultural values embodying universal norms: a critique of a popular assumption about cultures and human rights.

    Science.gov (United States)

    Jing-Bao, Nie

    2005-09-01

    In Western and non-Western societies, it is a widely held belief that the concept of human rights is, by and large, a Western cultural norm, often at odds with non-Western cultures and, therefore, not applicable in non-Western societies. The Universal Draft Declaration on Bioethics and Human Rights reflects this deep-rooted and popular assumption. By using Chinese culture(s) as an illustration, this article points out the problems of this widespread misconception and stereotypical view of cultures and human rights. It highlights the often ignored positive elements in Chinese cultures that promote and embody universal human values such as human dignity and human rights. It concludes, accordingly, with concrete suggestions on how to modify the Declaration.

  18. Ecological risk of anthropogenic pollutants to reptiles: Evaluating assumptions of sensitivity and exposure.

    Science.gov (United States)

    Weir, Scott M; Suski, Jamie G; Salice, Christopher J

    2010-12-01

    A large data gap for reptile ecotoxicology still persists; therefore, ecological risk assessments of reptiles usually incorporate the use of surrogate species. This necessitates that (1) the surrogate is at least as sensitive as the target taxon and/or (2) exposures to the surrogate are greater than those of the target taxon. We evaluated these assumptions for the use of birds as surrogates for reptiles. Based on a survey of the literature, birds were more sensitive than reptiles in less than 1/4 of the chemicals investigated. Dietary and dermal exposure modeling indicated that exposure to reptiles was relatively high, particularly when the dermal route was considered. We conclude that caution is warranted in the use of avian receptors as surrogates for reptiles in ecological risk assessment and emphasize the need to better understand the magnitude and mechanism of contaminant exposure in reptiles to improve exposure and risk estimation. Copyright © 2010 Elsevier Ltd. All rights reserved.
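
    The dietary and dermal exposure models referred to above are, in their generic screening-level form, simple ratios of intake to body weight. A sketch with placeholder parameter values, chosen only to show the arithmetic and not taken from the paper:

```python
# Generic screening-level exposure model (hypothetical parameter values):
#   dietary dose = (food ingestion rate * diet concentration) / body weight
#   dermal dose  = (surface area * soil adherence * soil concentration
#                   * absorption fraction) / body weight

bw = 0.05            # body weight, kg (e.g. a small lizard)
fir = 0.005          # food ingestion rate, kg/day
c_diet = 10.0        # contaminant concentration in diet, mg/kg
sa = 0.002           # exposed skin surface area, m^2
c_soil = 50.0        # contaminant concentration in soil, mg/kg
adherence = 0.01     # soil adherence, kg/m^2/day
absorption = 0.1     # dermal absorption fraction

dietary = fir * c_diet / bw
dermal = sa * adherence * c_soil * absorption / bw
print(f"dietary dose: {dietary:.3f} mg/kg-bw/day")
print(f"dermal dose:  {dermal:.4f} mg/kg-bw/day")
print(f"dermal share of total: {dermal / (dietary + dermal):.1%}")
```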

  19. Self-transcendent positive emotions increase spirituality through basic world assumptions.

    Science.gov (United States)

    Van Cappellen, Patty; Saroglou, Vassilis; Iweins, Caroline; Piovesana, Maria; Fredrickson, Barbara L

    2013-01-01

    Spirituality has mostly been studied in psychology as implied in the process of overcoming adversity, being triggered by negative experiences, and providing positive outcomes. By reversing this pathway, we investigated whether spirituality may also be triggered by self-transcendent positive emotions, which are elicited by stimuli appraised as demonstrating higher good and beauty. In two studies, elevation and/or admiration were induced using different methods. These emotions were compared to two control groups, a neutral state and a positive emotion (mirth). Self-transcendent positive emotions increased participants' spirituality (Studies 1 and 2), especially for the non-religious participants (Study 1). Two basic world assumptions, i.e., belief in life as meaningful (Study 1) and in the benevolence of others and the world (Study 2) mediated the effect of these emotions on spirituality. Spirituality should be understood not only as a coping strategy, but also as an upward spiralling pathway to and from self-transcendent positive emotions.

  20. Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption

    Directory of Open Access Journals (Sweden)

    Zheping Yan

    2014-01-01

    Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Due to its advantages in handling nonlinearities and couplings, the AUV model investigated here is for the first time constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take environment and sensor noises into consideration, the identification problem is treated as an errors-in-variables (EIV) one, which means that the identification procedure is carried out under a general noise assumption. To make the algorithm recursive, a propagator method (PM) based subspace approach is extended into the EIV framework to form a recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.
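
    The PM-EIV algorithm itself is too involved for a short sketch, but the 'recursive' part of recursive identification, updating the parameter estimate with each new sample instead of re-solving in batch, can be illustrated with a generic recursive least-squares update on a hypothetical one-degree-of-freedom surge model (not the paper's Hammerstein model):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 1-DOF surge dynamics: v[k+1] = a*v[k] + b*u[k] + noise.
a_true, b_true = 0.95, 0.12
theta = np.zeros(2)                 # running estimate of [a, b]
P = np.eye(2) * 1e3                 # inverse-information (covariance) matrix
lam = 0.999                         # forgetting factor

v = 0.0
for k in range(2000):
    u = rng.uniform(-1, 1)                       # thruster command
    v_next = a_true * v + b_true * u + 0.01 * rng.standard_normal()
    phi = np.array([v, u])                       # regressor vector
    # Standard recursive least-squares update with forgetting.
    g = P @ phi / (lam + phi @ P @ phi)          # gain vector
    theta = theta + g * (v_next - phi @ theta)   # innovation correction
    P = (P - np.outer(g, phi @ P)) / lam
    v = v_next

print("estimated [a, b]:", np.round(theta, 3))
```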