WorldWideScience

Sample records for models typically assume

  1. Modelling object typicality in description logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-12-01

    Full Text Available in the context under consideration, than those lower down. For any given class C, we assume that all objects in the application domain that are in (the interpretation of) C are more typical of C than those not in C. This is a technical construction which... to be modular partial orders, i.e. reflexive, transitive, antisymmetric relations such that, for all a, b, c in ∆I, if a and b are incomparable and a is strictly below c, then b is also strictly below c. Modular partial orders have the effect...

  2. A Model for Teacher Effects from Longitudinal Data without Assuming Vertical Scaling

    Science.gov (United States)

    Mariano, Louis T.; McCaffrey, Daniel F.; Lockwood, J. R.

    2010-01-01

    There is an increasing interest in using longitudinal measures of student achievement to estimate individual teacher effects. Current multivariate models assume each teacher has a single effect on student outcomes that persists undiminished to all future test administrations (complete persistence [CP]) or can diminish with time but remains…

  3. Modeling turbulent/chemistry interactions using assumed pdf methods

    Science.gov (United States)

    Gaffney, R. L., Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.

    1992-01-01

    Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
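    The averaging step this abstract describes can be sketched in a few lines: the mean rate coefficient is the Arrhenius rate k(T) integrated against an assumed temperature PDF. The Arrhenius parameters and the Gaussian-PDF choice below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical Arrhenius parameters: pre-exponential factor, temperature
    # exponent, and activation temperature [K].
    A, B, T_ACT = 1.0e8, 0.0, 15000.0

    def k(T):
        """Arrhenius rate coefficient k(T) = A * T**B * exp(-T_ACT / T)."""
        return A * T**B * np.exp(-T_ACT / T)

    def mean_k_gaussian(T_mean, T_rms, n=20001):
        """Average k(T) over an assumed Gaussian temperature PDF, clipped to T > 0."""
        T = np.linspace(T_mean - 5.0 * T_rms, T_mean + 5.0 * T_rms, n)
        T = T[T > 0]
        w = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2)
        w /= w.sum()                      # discrete normalization of the assumed PDF
        return float((w * k(T)).sum())

    k_laminar = float(k(1500.0))              # rate at the mean temperature
    k_turb = mean_k_gaussian(1500.0, 150.0)   # fluctuation-averaged rate
    # Because k(T) is strongly convex in this regime (T_ACT / T > 2), the
    # fluctuation-averaged coefficient exceeds k(T_mean), consistent with the
    # shortened ignition delay noted in the abstract.
    ```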

  4. Monitoring Assumptions in Assume-Guarantee Contracts

    Directory of Open Access Journals (Sweden)

    Oleg Sokolsky

    2016-05-01

    Full Text Available Pre-deployment verification of software components with respect to behavioral specifications in the assume-guarantee form does not, in general, guarantee absence of errors at run time. This is because assumptions about the environment cannot be discharged until the environment is fixed. An intuitive approach is to complement pre-deployment verification of guarantees, up to the assumptions, with post-deployment monitoring of environment behavior to check that the assumptions are satisfied at run time. Such a monitor is typically implemented by instrumenting the application code of the component. An additional challenge for the monitoring step is that environment behaviors are typically obtained through an I/O library, which may alter the component's view of the input format. This transformation requires us to introduce a second pre-deployment verification step to ensure that alarms raised by the monitor would indeed correspond to violations of the environment assumptions. In this paper, we describe an approach for constructing monitors and verifying them against the component assumption. We also discuss limitations of instrumentation-based monitoring and potential ways to overcome them.
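    As a toy illustration of the monitoring idea (the class name, the non-negativity assumption, and the example inputs are all invented, not taken from the paper), an assumption monitor can wrap the component's input stream and record violations at run time:

    ```python
    # A component verified under the assumption "all environment readings are
    # non-negative" keeps its guarantee only while that assumption holds.
    class EnvAssumptionMonitor:
        def __init__(self, assumption, on_violation=None):
            self.assumption = assumption      # predicate checked on each input
            self.violations = 0
            self.on_violation = on_violation  # optional alarm callback

        def observe(self, value):
            """Check one environment input, count violations, pass the value through."""
            if not self.assumption(value):
                self.violations += 1
                if self.on_violation:
                    self.on_violation(value)
            return value

    monitor = EnvAssumptionMonitor(lambda x: x >= 0)
    inputs = [3, 7, -1, 4]                    # environment behavior at run time
    component_view = [monitor.observe(v) for v in inputs]
    # One violation was recorded (-1), so the pre-deployment guarantee no
    # longer applies to this run; the monitor makes that observable.
    ```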

  5. Some consequences of assuming simple patterns for the treatment effect over time in a linear mixed model.

    Science.gov (United States)

    Bamia, Christina; White, Ian R; Kenward, Michael G

    2013-07-10

    Linear mixed models are often used for the analysis of data from clinical trials with repeated quantitative outcomes. This paper considers linear mixed models where a particular form is assumed for the treatment effect, in particular constant over time or proportional to time. For simplicity, we assume no baseline covariates and complete post-baseline measures, and we model arbitrary mean responses for the control group at each time. For the variance-covariance matrix, we consider an unstructured model, a random intercepts model and a random intercepts and slopes model. We show that the treatment effect estimator can be expressed as a weighted average of the observed time-specific treatment effects, with weights depending on the covariance structure and the magnitude of the estimated variance components. For an assumed constant treatment effect, under the random intercepts model, all weights are equal, but in the random intercepts and slopes and the unstructured models, we show that some weights can be negative: thus, the estimated treatment effect can be negative, even if all time-specific treatment effects are positive. Our results suggest that particular models for the treatment effect combined with particular covariance structures may result in estimated treatment effects of unexpected magnitude and/or direction. Methods are illustrated using a Parkinson's disease trial. Copyright © 2012 John Wiley & Sons, Ltd.
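    The weighted-average form of the estimator can be sketched numerically; the covariance values below are invented for illustration and are not the trial's estimates. Under a working covariance V for the repeated measures, the GLS estimator of a constant treatment effect weights the time-specific effects by w = V⁻¹1 / (1ᵀV⁻¹1).

    ```python
    import numpy as np

    def gls_weights(V):
        """Weights applied to time-specific treatment effects under covariance V."""
        ones = np.ones(V.shape[0])
        v = np.linalg.solve(V, ones)
        return v / (ones @ v)

    T = 4
    t = np.arange(T, dtype=float)

    # Random intercepts (compound symmetry): every time point weighted equally.
    V_ri = np.eye(T) + 0.5 * np.ones((T, T))
    w_ri = gls_weights(V_ri)                  # all weights equal 1/4

    # Random intercepts and slopes: V = Z G Z' + sigma^2 I with Z = [1, t].
    Z = np.column_stack([np.ones(T), t])
    G = np.diag([0.5, 2.0])                   # large (made-up) slope variance
    V_ris = Z @ G @ Z.T + 0.2 * np.eye(T)
    w_ris = gls_weights(V_ris)
    # The weights still sum to 1, but the last one is negative: a treatment
    # effect that is positive at every visit can produce a negative estimate.
    ```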

  6. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    Science.gov (United States)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume- guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.

  7. Errors resulting from assuming opaque Lambertian clouds in TOMS ozone retrieval

    International Nuclear Information System (INIS)

    Liu, X.; Newchurch, M.J.; Loughman, R.; Bhartia, P.K.

    2004-01-01

    Accurate remote sensing retrieval of atmospheric constituents over cloudy areas is very challenging because of insufficient knowledge of cloud parameters. Cloud treatments are highly idealized in most retrieval algorithms. Using a radiative transfer model treating clouds as scattering media, we investigate the effects of assuming opaque Lambertian clouds and employing a Partial Cloud Model (PCM) on Total Ozone Mapping Spectrometer (TOMS) ozone retrievals, especially for tropical high-reflectivity clouds. Assuming angularly independent cloud reflection is acceptable because the Ozone Retrieval Errors (OREs) are within 1.5% of the total ozone (i.e., within TOMS retrieval precision) when Cloud Optical Depth (COD) ≥ 20. Because of Intra-Cloud Ozone Absorption ENhancement (ICOAEN), assuming opaque clouds can introduce large OREs even for optically thick clouds. For a water cloud of COD 40 spanning 2-12 km with 20.8 Dobson Units (DU) of ozone homogeneously distributed in the cloud, the ORE is 17.8 DU in the nadir view. The ICOAEN effect depends greatly on solar zenith angle, view zenith angle, and intra-cloud ozone amount and distribution. The TOMS PCM performs well because negative errors from the underestimated cloud fraction partly cancel other positive errors. At COD ≤ 5, the TOMS algorithm retrieves approximately the correct total ozone because of compensating errors. With increasing COD up to 20-40, the overall positive ORE increases and is finally dominated by the ICOAEN effect. The ICOAEN effect is typically 5-13 DU on average over the Atlantic and Africa and 1-7 DU over the Pacific for tropical high-altitude (cloud top pressure ≤ 300 hPa) and high-reflectivity (reflectivity ≥ 80%) clouds. Knowledge of TOMS ozone retrieval errors has important implications for remote sensing of ozone and trace gases from other satellite instruments.

  8. Defining modeling parameters for juniper trees assuming pleistocene-like conditions at the NTS

    International Nuclear Information System (INIS)

    Tarbox, S.R.; Cochran, J.R.

    1994-01-01

    This paper addresses part of Sandia National Laboratories' (SNL) efforts to assess the long-term performance of the Greater Confinement Disposal (GCD) facility located on the Nevada Test Site (NTS). At issue is whether the GCD site complies with 40 CFR 191 standards set for transuranic (TRU) waste burial. SNL has developed a radionuclide transport model which can be used to assess TRU radionuclide movement away from the GCD facility. An earlier iteration of the model found that radionuclide uptake and release by plants is an important aspect of the system to consider. Currently, the shallow-rooted plants at the NTS do not pose a threat to the integrity of the GCD facility. However, the threat increases substantially if deeper-rooted woodland species migrate to the GCD facility, given a shift to a wetter climate. The model parameters discussed here will be included in the next model iteration, which assumes a climate shift will provide for the growth of juniper trees at the GCD facility. Model parameters were developed using published data; wherever possible, data were taken from juniper and pinyon-juniper studies that mirrored as many aspects of the GCD facility as possible.

  9. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    This paper investigates the characteristics of typical optimisation models within Distribution Network Design. In the paper, fourteen models known from the literature are thoroughly analysed. Through this analysis a schematic approach to categorisation of distribution network design models...... for educational purposes. Furthermore, the paper can be seen as a practical introduction to network design modelling as well as being a manual or recipe for constructing such a model....

  10. Some considerations on displacement assumed finite elements with the reduced numerical integration technique

    International Nuclear Information System (INIS)

    Takeda, H.; Isha, H.

    1981-01-01

    The paper is concerned with displacement-assumed finite elements employing the reduced numerical integration technique in structural problems. The first part is a general consideration of the technique. Its purpose is to examine a variational interpretation of the finite element displacement formulation with the reduced integration technique in structural problems. The formulation is critically studied from the standpoint of the natural stiffness approach. It is shown that these types of elements are equivalent to a certain type of displacement- and stress-assumed mixed elements. The rank deficiency of the stiffness matrix of these elements is interpreted as a problem in the transformation from the natural system to a Cartesian system. It is shown that a variational basis of the equivalent mixed formulation is closely related to the Hellinger-Reissner functional. For simple elements, e.g. bilinear quadrilateral plane stress and plate bending, there are corresponding mixed elements derived from the functional. For relatively complex types of these elements, it is shown that they are equivalent to localized mixed elements derived from the Hellinger-Reissner functional. In the second part, typical finite elements with the reduced integration technique are studied to demonstrate this equivalence. A bilinear displacement- and rotation-assumed shear beam element, a bilinear displacement-assumed quadrilateral plane stress element and a bilinear deflection- and rotation-assumed quadrilateral plate bending element are examined to present equivalent mixed elements. Not only is the theoretical consideration presented, but numerical studies are also shown to demonstrate the effectiveness of these elements in practical analysis. (orig.)

  11. THOR: A New Higher-Order Closure Assumed PDF Subgrid-Scale Parameterization; Evaluation and Application to Low Cloud Feedbacks

    Science.gov (United States)

    Firl, G. J.; Randall, D. A.

    2013-12-01

    The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been

  12. Automatic conversational scene analysis in children with Asperger syndrome/high-functioning autism and typically developing peers.

    Science.gov (United States)

    Tavano, Alessandro; Pesarin, Anna; Murino, Vittorio; Cristani, Marco

    2014-01-01

    Individuals with Asperger syndrome/High Functioning Autism fail to spontaneously attribute mental states to the self and others, a life-long phenotypic characteristic known as mindblindness. We hypothesized that mindblindness would affect the dynamics of conversational interaction. Using generative models, in particular Gaussian mixture models and observed influence models, conversations were coded as interacting Markov processes, operating on novel speech/silence patterns, termed Steady Conversational Periods (SCPs). SCPs assume that whenever an agent's process changes state (e.g., from silence to speech), it causes a general transition of the entire conversational process, forcing inter-actant synchronization. SCPs fed into observed influence models, which captured the conversational dynamics of children and adolescents with Asperger syndrome/High Functioning Autism, and age-matched typically developing participants. Analyzing the parameters of the models by means of discriminative classifiers, the dialogs of patients were successfully distinguished from those of control participants. We conclude that meaning-free speech/silence sequences, reflecting inter-actant synchronization, at least partially encode typical and atypical conversational dynamics. This suggests a direct influence of theory of mind abilities onto basic speech initiative behavior.
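    The SCP coding described above can be sketched as a simple segmentation; the function name and the toy speech/silence tracks below are illustrative, not the authors' code. An SCP is a maximal run over which the joint speech/silence state of all inter-actants is constant, so any agent changing state starts a new SCP.

    ```python
    def scp_segments(*tracks):
        """Split parallel speech(1)/silence(0) tracks into (start, end, state) SCPs."""
        joint = list(zip(*tracks))            # joint conversational state per frame
        segments, start = [], 0
        for i in range(1, len(joint)):
            if joint[i] != joint[i - 1]:      # any agent's state change ends the SCP
                segments.append((start, i, joint[start]))
                start = i
        segments.append((start, len(joint), joint[start]))
        return segments

    child = [0, 0, 1, 1, 1, 0]                # toy two-speaker dialog
    adult = [1, 1, 1, 0, 0, 0]
    segments = scp_segments(child, adult)
    # Transition statistics between successive SCP states are what feed the
    # observed influence models described in the abstract.
    ```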

  13. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    Science.gov (United States)

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  14. Enhanced air dispersion modelling at a typical Chinese nuclear power plant site: Coupling RIMPUFF with two advanced diagnostic wind models.

    Science.gov (United States)

    Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng

    2017-09-01

    An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this modelling, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of the predictions within a factor of 2 and 5 of observations exceeds 55% and 82% respectively in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Automatic conversational scene analysis in children with Asperger syndrome/high-functioning autism and typically developing peers.

    Directory of Open Access Journals (Sweden)

    Alessandro Tavano

    Full Text Available Individuals with Asperger syndrome/High Functioning Autism fail to spontaneously attribute mental states to the self and others, a life-long phenotypic characteristic known as mindblindness. We hypothesized that mindblindness would affect the dynamics of conversational interaction. Using generative models, in particular Gaussian mixture models and observed influence models, conversations were coded as interacting Markov processes, operating on novel speech/silence patterns, termed Steady Conversational Periods (SCPs). SCPs assume that whenever an agent's process changes state (e.g., from silence to speech), it causes a general transition of the entire conversational process, forcing inter-actant synchronization. SCPs fed into observed influence models, which captured the conversational dynamics of children and adolescents with Asperger syndrome/High Functioning Autism, and age-matched typically developing participants. Analyzing the parameters of the models by means of discriminative classifiers, the dialogs of patients were successfully distinguished from those of control participants. We conclude that meaning-free speech/silence sequences, reflecting inter-actant synchronization, at least partially encode typical and atypical conversational dynamics. This suggests a direct influence of theory of mind abilities onto basic speech initiative behavior.

  16. Ex-plant consequence assessment for NUREG-1150: models, typical results, uncertainties

    International Nuclear Information System (INIS)

    Sprung, J.L.

    1988-01-01

    The assessment of ex-plant consequences for NUREG-1150 source terms was performed using the MELCOR Accident Consequence Code System (MACCS). This paper briefly discusses the following elements of MACCS consequence calculations: input data, phenomena modeled, computational framework, typical results, controlling phenomena, and uncertainties. Wherever possible, NUREG-1150 results will be used to illustrate the discussion. 28 references

  17. Importance of the habitat choice behavior assumed when modeling the effects of food and temperature on fish populations

    Science.gov (United States)

    Wildhaber, Mark L.; Lamberson, Peter J.

    2004-01-01

    Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.
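    The paper's central point, that the assumed patch-choice rule alone changes predicted growth, can be illustrated with a deliberately tiny toy (all patch values, the cost coefficient, and the thermal optimum below are invented for demonstration):

    ```python
    # Two patches differing in food and temperature; growth is food intake
    # minus a metabolic cost rising with distance from a thermal optimum.
    PATCHES = [
        {"food": 5.0, "temp": 28.0},   # food-rich but warm
        {"food": 3.0, "temp": 22.0},   # moderate food at the thermal optimum
    ]
    T_OPT = 22.0

    def growth(patch):
        return patch["food"] - 0.4 * abs(patch["temp"] - T_OPT)

    def choose_food_only(patches):
        """Optimal-foraging rule: food alone drives patch choice."""
        return max(patches, key=lambda p: p["food"])

    def choose_energetics(patches):
        """Behavioral-energetics rule: food and temperature combined."""
        return max(patches, key=growth)

    g_food = growth(choose_food_only(PATCHES))     # 5.0 - 0.4 * 6 = 2.6
    g_energy = growth(choose_energetics(PATCHES))  # 3.0 - 0.0 = 3.0
    # Same fish, same patches: the assumed choice mechanism alone changes
    # the predicted growth rate, echoing the simulation results above.
    ```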

  18. Parameterized Finite Element Modeling and Buckling Analysis of Six Typical Composite Grid Cylindrical Shells

    Science.gov (United States)

    Lai, Changliang; Wang, Junbiao; Liu, Chuang

    2014-10-01

    Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. Then buckling behavior and structural efficiency of these shells are analyzed under axial compression, pure bending, torsion and transverse bending by finite element (FE) models. The FE models are created by a parametrical FE modeling approach that defines FE models with original natural twisted geometry and orients cross-sections of beam elements exactly. And the approach is parameterized and coded by Patran Command Language (PCL). The demonstrations of FE modeling indicate the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of six typical grid cylindrical shells are determined. The results of these studies indicate that the triangle grid and rotated triangle grid cylindrical shell are more efficient than others under axial compression and pure bending, whereas under torsion and transverse bending, the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared and provide an understanding of composite grid cylindrical shells that is useful in preliminary design of such structures.

  19. Determinants of Occupational Gender Segregation : Work Values and Gender (A)Typical Occupational Preferences of Adolescents

    OpenAIRE

    Busch, Anne

    2011-01-01

    The study examines micro-level determinants of the occupational gender segregation, analyzing work values and their effects on gender (a)typical occupational preferences of adolescents. Human capital theory assumes that women develop higher preferences for a good work/life balance in youth, whereas men develop higher extrinsic work values. Socialization theory predicts that female adolescents form higher preferences for social work content. This gender typicality in work values is expected to...

  20. Aeroelastic Calculations Using CFD for a Typical Business Jet Model

    Science.gov (United States)

    Gibbons, Michael D.

    1996-01-01

    Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center where experimental flutter data were obtained from M(sub infinity) = 0.628 to M(sub infinity) = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects while the TSD and Euler methods used here provide good results at the lower Mach numbers.

  1. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user-base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and likely more applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus using improved data and enhanced assumptions, on model outcomes and thus ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  2. A constraints-induced model of park choice

    NARCIS (Netherlands)

    Stemerding, M.P.; Oppewal, H.; Timmermans, H.J.P.

    1999-01-01

    Conjoint choice models have been used widely in the consumer-choice literature as an approach to measure and predict consumer-choice behavior. These models typically assume that consumer preferences and choice rules are independent of any constraints that might impact the behavior of interest.

  3. Looking around houses: attention to a model when drawing complex shapes in Williams syndrome and typical development.

    Science.gov (United States)

    Hudson, Kerry D; Farran, Emily K

    2013-09-01

    Drawings by individuals with Williams syndrome (WS) typically lack cohesion. The popular hypothesis is that this is a result of excessive focus on local-level detail at the expense of global configuration. In this study, we explored a novel hypothesis that inadequate attention might underpin drawing in WS. WS and typically developing (TD) non-verbal ability matched groups copied and traced a house figure comprised of geometric shapes. The house was presented on a computer screen for 5-s periods and participants pressed a key to re-view the model. Frequency of key-presses indexed the looks to the model. The order that elements were replicated was recorded to assess hierarchisation of elements. If a lack of attention to the model explained poor drawing performance, we expected participants with WS to look less frequently to the model than TD children when copying. If a local-processing preference underpins drawing in WS, more local than global elements would be produced. Results supported the first, but not second hypothesis. The WS group looked to the model infrequently, but global, not local, parts were drawn first, scaffolding local-level details. Both groups adopted a similar order of drawing and tracing of parts, suggesting typical, although delayed, strategy-use in the WS group. Additionally, both groups drew larger elements of the model before smaller elements, suggesting a size bias when drawing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    ...consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  5. Assumed Probability Density Functions for Shallow and Deep Convection

    Directory of Open Access Journals (Sweden)

    Steven K Krueger

    2010-10-01

    Full Text Available The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes as small as 0.4 km to as large as 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence
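    The cloud-fraction diagnosis at the heart of the assumed-PDF method can be sketched in one dimension; the function names and all numbers below are illustrative assumptions, not the paper's formulation. SGS cloud fraction is P(s > 0), where s is the normalized saturation excess with the assumed PDF; a double Gaussian mixes two plumes with weight a.

    ```python
    import math

    def cloud_fraction_single(mean, std):
        """P(s > 0) for a single Gaussian saturation-excess PDF."""
        return 0.5 * math.erfc(-mean / (std * math.sqrt(2.0)))

    def cloud_fraction_double(a, m1, s1, m2, s2):
        """P(s > 0) for a two-plume (double Gaussian) mixture with weight a."""
        return (a * cloud_fraction_single(m1, s1)
                + (1.0 - a) * cloud_fraction_single(m2, s2))

    # A skewed case: a small moist "updraft" plume inside a dry environment,
    # the small-cloud-fraction regime where the double Gaussian families do
    # best according to the abstract.
    cf = cloud_fraction_double(a=0.9, m1=-1.0, s1=0.3, m2=0.5, s2=0.6)
    ```

    A single Gaussian fitted to the same overall mean and variance cannot reproduce such small, strongly skewed cloud fractions, which is one way to see the advantage the abstract reports for the double Gaussian families.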

  6. Inference of directional selection and mutation parameters assuming equilibrium.

    Science.gov (United States)

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.
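Wright's equilibrium distribution can be written (up to normalization) as pi(x) ∝ x^(θβ−1) (1−x)^(θ(1−β)−1) e^(γx), with θ the scaled mutation rate, β the mutation bias and γ the scaled strength of directional selection. The following numerical sketch of this density is an illustration under that assumed notation, not the authors' inference code:

```python
import math

def wright_equilibrium(theta, beta, gamma, n=2000):
    """Discretized Wright equilibrium density of the allelic proportion x.

    pi(x) proportional to x^(theta*beta - 1) * (1-x)^(theta*(1-beta) - 1) * exp(gamma*x).
    Uses a midpoint grid to avoid the boundary singularities at x = 0 and
    x = 1 that arise when the scaled mutation rate theta is small.
    Returns (grid points, probability weights summing to 1).
    """
    xs = [(i + 0.5) / n for i in range(n)]
    w = [x ** (theta * beta - 1.0)
         * (1.0 - x) ** (theta * (1.0 - beta) - 1.0)
         * math.exp(gamma * x) for x in xs]
    z = sum(w)
    return xs, [wi / z for wi in w]
```

With no selection (γ = 0) and symmetric mutation (β = 0.5, θ = 2) the density is uniform; positive γ shifts mass toward fixation of the preferred allele, which is the signal the selection-parameter estimator exploits.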

  7. Elastic-plastic and creep analyses by assumed stress finite elements

    International Nuclear Information System (INIS)

    Pian, T.H.H.; Spilker, R.L.; Lee, S.W.

    1975-01-01

    A formulation is presented of incremental finite element solutions for both initial-stress and initial-strain problems, based on a modified complementary energy principle with relaxed inter-element continuity requirements. The corresponding finite element model is the assumed stress hybrid model, which has stress parameters in the interior of each element and displacements at the individual nodes as unknowns. The formulation includes the important consideration that the states of stress and strain at the beginning of each increment may not satisfy the equilibrium and compatibility equations. These imbalance and mismatch conditions lead to correction terms in the equivalent nodal forces of the matrix equations. The initial stress method is applied to elastic-plastic analysis of structures. In this case the stress parameters for the individual elements can be eliminated, resulting in a system of equations with only nodal displacements as unknowns. Two different complementary energy principles can be formulated: in one the equilibrium of the final state of stress is maintained, while in the other the equilibrium of the stress increments is maintained. Each of these two formulations can be combined with different iterative schemes at each incremental step of the elastic-plastic analysis. It is also indicated clearly that for the initial stress method the state of stress at the beginning of each increment is, in general, not in equilibrium and an imbalance correction is needed. Results of a comprehensive evaluation of various solution procedures by the initial stress method using the assumed stress hybrid elements are presented. The example used is the static response of a thick-wall cylinder of elastic-perfectly-plastic material under internal pressure. Solid-of-revolution elements with rectangular cross sections are used.

  8. An extended car-following model considering the acceleration derivative in some typical traffic environments

    Science.gov (United States)

    Zhou, Tong; Chen, Dong; Liu, Weining

    2018-03-01

    Based on the full velocity difference and acceleration car-following model, an extended car-following model is proposed by considering the vehicle’s acceleration derivative. The stability condition is given by applying control theory. Considering some typical traffic environments, the results of theoretical analysis and numerical simulation show that the extended model reproduces the acceleration of a string of vehicles more realistically than previous models during the starting process, the stopping process and sudden braking. Meanwhile, traffic jams occur more easily when the coefficient of the vehicle’s acceleration derivative increases, as shown by the space-time evolution of traffic flow. The results confirm that the vehicle’s acceleration derivative plays an important role in the traffic jamming transition and the evolution of traffic congestion.
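The abstract does not reproduce the model equations, so the sketch below only shows the general shape of such an extension: a full-velocity-difference law plus a finite-differenced acceleration-derivative (jerk) feedback term. The optimal-velocity function and all parameter values are illustrative assumptions, not the paper's fitted model:

```python
import math

def optimal_velocity(h, v_max=33.0, h_c=25.0, h_jam=7.0):
    """Illustrative optimal-velocity function (all parameters assumed):
    zero below the jam headway h_jam, saturating at v_max for large headway h."""
    v = 0.5 * v_max * (math.tanh((h - h_c) / 10.0) - math.tanh((h_jam - h_c) / 10.0))
    return max(0.0, v)

def simulate_follower(t_end=60.0, dt=0.05, alpha=0.6, lam=0.5, c_jerk=0.01):
    """One follower behind a leader that brakes to a stop at t = 10 s.

    Acceleration law (a sketch, not the paper's exact equations):
        a = alpha*(V(h) - v) + lam*(v_lead - v) + c_jerk * da/dt,
    with da/dt estimated by a backward finite difference.
    Returns (minimum headway, final follower speed).
    """
    x_l, v_l = 50.0, 20.0   # leader position (m), speed (m/s)
    x_f, v_f = 0.0, 20.0    # follower position (m), speed (m/s)
    a_prev = 0.0
    min_h = x_l - x_f
    for i in range(int(t_end / dt)):
        t = i * dt
        if t >= 10.0:
            v_l = max(0.0, v_l - 3.0 * dt)  # leader brakes at 3 m/s^2
        h = x_l - x_f
        base = alpha * (optimal_velocity(h) - v_f) + lam * (v_l - v_f)
        a = base + c_jerk * (base - a_prev) / dt  # acceleration-derivative feedback
        a_prev = base
        v_f = max(0.0, v_f + a * dt)  # no reversing
        x_f += v_f * dt
        x_l += v_l * dt
        min_h = min(min_h, x_l - x_f)
    return min_h, v_f
```

With these assumed parameters the follower stops without collision; increasing c_jerk strengthens the feedback term whose role in jam formation the paper analyzes.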

  9. Typical entanglement

    Science.gov (United States)

    Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio

    2013-05-01

    Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
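The typical spectrum can also be explored numerically by sampling Haar-random pure states and diagonalizing the reduced density matrix; for instance, the ensemble-average purity of ρ_A is known to be (N+M)/(NM+1). The sketch below is a Monte Carlo illustration, not the authors' saddle-point calculation:

```python
import numpy as np

def reduced_spectrum(N, M, rng):
    """Spectrum of the reduced density matrix rho_A for a Haar-random
    pure state |psi> on an N*M-dimensional Hilbert space.

    A matrix of iid complex Gaussian entries, normalized, is a Haar-random
    pure state; rho_A = psi @ psi^dagger is the partial trace over the
    M-dimensional subsystem B.
    """
    psi = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
    psi /= np.linalg.norm(psi)
    rho_a = psi @ psi.conj().T
    return np.linalg.eigvalsh(rho_a)
```

Averaging the purity Tr ρ_A² over many samples for N = M = 2 should approach 4/5, in line with the known Haar-ensemble result.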

  10. 39 CFR 3060.40 - Calculation of the assumed Federal income tax.

    Science.gov (United States)

    2010-07-01

    ... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on the Postal Service theoretical competitive products enterprise income statement for the relevant year...

  11. Models for the estimation of diffuse solar radiation for typical cities in Turkey

    International Nuclear Information System (INIS)

    Bakirci, Kadir

    2015-01-01

    In solar energy applications, the diffuse solar radiation component is required. Solar radiation data, particularly the diffuse component, are not readily available because of the high cost of measurements as well as difficulties in their maintenance and calibration. In this study, new empirical models for predicting the monthly mean diffuse solar radiation on a horizontal surface for typical cities in Turkey are established. For comparison, fifteen empirical models from the literature are used. Also, eighteen diffuse solar radiation models are developed using long-term sunshine duration and global solar radiation data. The accuracy of the developed models is evaluated in terms of different statistical indicators. It is found that the best performance is achieved by the third-order polynomial model based on sunshine duration and clearness index. - Highlights: • Diffuse radiation is given as a function of clearness index and sunshine fraction. • Diffuse radiation is an important parameter in solar energy applications. • Diffuse radiation measurements are rare and cover only limited periods. • The new models can be used to estimate monthly average diffuse solar radiation. • The accuracy of the models is evaluated on the basis of statistical indicators.
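A third-order polynomial model of the kind found best here relates the monthly diffuse fraction Hd/H to the clearness index kt = H/H0. The coefficients below are illustrative placeholders, not Bakirci's fitted values; in practice they must be regressed from local sunshine-duration and radiation data:

```python
def diffuse_fraction(kt, coeffs=(1.0, -0.25, -1.6, 0.5)):
    """Third-order polynomial estimate of the monthly diffuse fraction Hd/H
    from the clearness index kt = H/H0.

    The coefficients are hypothetical placeholders for illustration only;
    real values come from regression against measured data. The result is
    clamped to the physically meaningful range [0, 1].
    """
    a, b, c, d = coeffs
    kd = a + b * kt + c * kt ** 2 + d * kt ** 3
    return min(1.0, max(0.0, kd))
```

With any physically sensible fit, cloudier months (low kt) should yield a larger diffuse fraction than clear months (high kt), as the placeholder coefficients illustrate.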

  12. Bowing-reactivity trends in EBR-II assuming zero-swelling ducts

    International Nuclear Information System (INIS)

    Meneghetti, D.

    1994-01-01

    Predicted trends of duct-bowing reactivities for the Experimental Breeder Reactor II (EBR-II) are correlated with predicted row-wise duct deflections assuming the use of idealized zero-void-swelling subassembly ducts. These calculations assume no irradiation-induced swelling of the ducts but include estimates of the effects of irradiation-creep relaxation of thermally induced bowing stresses. The results illustrate the manner in which at-power creep may affect subsequent duct deflections at zero power, and thereby the trends of the bowing component of a subsequent power reactivity decrement.

  13. A review of typical thermal fatigue failure models for solder joints of electronic components

    Science.gov (United States)

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

    For electronic components, cyclic plastic strain accumulates fatigue damage more readily than elastic strain. When solder joints undergo thermal expansion or contraction, the mismatch between the coefficients of thermal expansion of an electronic component and its substrate produces differential thermal strain, leading to stress concentration. Under repeated cycling, cracks initiate and gradually propagate [1]. In this paper, the typical thermal fatigue failure models for solder joints of electronic components are classified, and the methods of obtaining the parameters in each model are summarized based on domestic and foreign literature.

  14. Evaluation of Clear-Sky Incoming Radiation Estimating Equations Typically Used in Remote Sensing Evapotranspiration Algorithms

    Directory of Open Access Journals (Sweden)

    Ted W. Sammis

    2013-09-01

    Full Text Available Net radiation is a key component of the energy balance, whose estimation accuracy has an impact on energy flux estimates from satellite data. In typical remote sensing evapotranspiration (ET) algorithms, the outgoing shortwave and longwave components of net radiation are obtained from remote sensing data, while the incoming shortwave (RS) and longwave (RL) components are typically estimated from weather data using empirical equations. This study evaluates the accuracy of empirical equations commonly used in remote sensing ET algorithms for estimating RS and RL radiation. Evaluation is carried out through comparison of estimates and observations at five sites that represent different climatic regions, from humid to arid. Results reveal that (1) both RS and RL estimates from all evaluated equations correlate well with observations (R2 ≥ 0.92); (2) RS estimating equations tend to overestimate, especially at higher values; (3) RL estimating equations tend to give more biased values in arid and semi-arid regions; (4) a model that parameterizes the diffuse component of radiation using two clearness indices and a simple model that assumes a linear increase of atmospheric transmissivity with elevation give better RS estimates; and (5) mean relative absolute errors in the net radiation (Rn) estimates caused by the use of RS and RL estimating equations vary from 10% to 22%. This study suggests that Rn estimates using the recommended incoming radiation estimating equations could improve ET estimates.
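One widely used form of the simple elevation-based model mentioned in point (4) is the FAO-56 clear-sky equation, in which atmospheric transmissivity increases linearly with station elevation:

```python
def clear_sky_shortwave(ra, elevation_m):
    """Clear-sky incoming shortwave radiation, FAO-56 style:

        Rso = (0.75 + 2e-5 * z) * Ra

    where Ra is extraterrestrial radiation (any consistent energy unit)
    and z is station elevation in metres. Transmissivity rises linearly
    with elevation because there is less atmosphere to traverse.
    """
    return (0.75 + 2e-5 * elevation_m) * ra
```

At sea level this reduces to the familiar 75% transmissivity; at 2000 m it rises to 79%, the kind of elevation correction the evaluated RS equations rely on.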

  15. Investigating assumptions of crown archetypes for modelling LiDAR returns

    NARCIS (Netherlands)

    Calders, K.; Lewis, P.; Disney, M.; Verbesselt, J.; Herold, M.

    2013-01-01

    LiDAR has the potential to derive canopy structural information such as tree height and leaf area index (LAI), via models of the LiDAR signal. Such models often make assumptions regarding crown shape to simplify parameter retrieval, and crown archetypes are typically assumed to contain a turbid medium.

  16. Rational Approximations to Rational Models: Alternative Algorithms for Category Learning

    Science.gov (United States)

    Sanborn, Adam N.; Griffiths, Thomas L.; Navarro, Daniel J.

    2010-01-01

    Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models…

  17. Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders.

    Science.gov (United States)

    Shay, Blake; Weber, Robert J

    2015-11-01

    Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals.

  18. The heterogeneous heuristic modeling framework for inferring decision processes

    NARCIS (Netherlands)

    Zhu, W.; Timmermans, H.J.P.; Rasouli, S.; Timmermans, H.J.P.

    2015-01-01

    Purpose – Increasing evidence suggests that choice behaviour in real world may be guided by principles of bounded rationality as opposed to typically assumed fully rational behaviour, based on the principle of utility- maximization. Under such circumstances, conventional rational choice models

  19. How Afghanistan Can Assume Ownership for the Ongoing Conflict

    National Research Council Canada - National Science Library

    Horn, Sr, John M

    2008-01-01

    In view of United States global commitments and larger Global War on Terror (GWOT) strategy, the ultimate security goal in Afghanistan must be for the Afghans to assume ownership of the counterinsurgency struggle...

  20. Definition of a parameter for a typical specific absorption rate under real boundary conditions of cellular phones in a GSM network

    Science.gov (United States)

    Gerhardt, D.

    2003-05-01

    When using cellular phones, the specific absorption rate (SAR), as a physical quantity, must comply with internationally defined limits to guarantee human protection. Such protection must be guaranteed under worst-case conditions (especially maximum transmitting power). To evaluate exposure to electromagnetic fields under normal conditions of use, however, the limitations of the specific absorption rate must be pointed out. Under normal conditions of use in a mobile radio network, i.e. in interconnection with a fixed radio transmitter, the power control of the cellular phone as well as the antenna diagram with respect to a head phantom are also significant for the real exposure. Based on the specific absorption rate and the antenna diagram with respect to a head phantom, and taking power control into consideration, a new parameter, the typical specific absorption rate (SARtyp), is defined in this contribution. This parameter indicates the specific absorption rate under average normal conditions of use. Constant radio link attenuation between a cellular phone and a fixed radio transmitter was assumed for all mobile models tested, in order to achieve constant field strength at the receiving antenna of the fixed radio transmitter as a result of power control. The typical specific absorption rate is a characteristic physical value of every mobile model. It was calculated for 16 different mobile models and compared with the absorption rate at maximum transmitting power. The results confirm the relevance of this parameter (SARtyp), as opposed to the specific absorption rate at maximum power, as a competent and applicable method to establish the real mean exposure from a cellular phone in a mobile radio network. The typical specific absorption rate provides a parameter for assessing the electromagnetic fields of a cellular phone that is more relevant to the consumer.

  1. From the Kochen-Specker theorem to noncontextuality inequalities without assuming determinism.

    Science.gov (United States)

    Kunjwal, Ravi; Spekkens, Robert W

    2015-09-11

    The Kochen-Specker theorem demonstrates that it is not possible to reproduce the predictions of quantum theory in terms of a hidden variable model where the hidden variables assign a value to every projector deterministically and noncontextually. A noncontextual value assignment to a projector is one that does not depend on which other projectors (the context) are measured together with it. Using a generalization of the notion of noncontextuality that applies to both measurements and preparations, we propose a scheme for deriving inequalities that test whether a given set of experimental statistics is consistent with a noncontextual model. Unlike previous inequalities inspired by the Kochen-Specker theorem, we do not assume that the value assignments are deterministic and therefore, in the face of a violation of our inequality, the possibility of salvaging noncontextuality by abandoning determinism is no longer an option. Our approach is operational in the sense that it does not presume quantum theory: a violation of our inequality implies the impossibility of a noncontextual model for any operational theory that can account for the experimental observations, including any successor to quantum theory.

  2. Household time allocation model based on a group utility function

    NARCIS (Netherlands)

    Zhang, J.; Borgers, A.W.J.; Timmermans, H.J.P.

    2002-01-01

    Existing activity-based models typically assume an individual decision-making process. In household decision-making, however, interaction exists among household members and their activities during the allocation of the members' limited time. This paper, therefore, attempts to develop a new household

  3. Naturalness of CP Violation in the Standard Model

    International Nuclear Information System (INIS)

    Gibbons, Gary W.; Gielen, Steffen; Pope, C. N.; Turok, Neil

    2009-01-01

    We construct a natural measure on the space of Cabibbo-Kobayashi-Maskawa matrices in the standard model, assuming the fermion mass matrices are randomly selected from a distribution which incorporates the observed quark mass hierarchy. This measure allows us to assess the likelihood of Jarlskog's CP violation parameter J taking its observed value J ≅ 3×10^-5. We find that the observed value, while well below the mathematically allowed maximum, is in fact typical once the observed quark masses are assumed.

  4. 49 CFR 568.7 - Requirements for manufacturers who assume legal responsibility for a vehicle.

    Science.gov (United States)

    2010-10-01

    ... MANUFACTURED IN TWO OR MORE STAGES § 568.7 Requirements for manufacturers who assume legal responsibility for a vehicle. (a) If an incomplete vehicle manufacturer assumes legal responsibility for all duties and... 49 CFR 567.5(f). (b) If an intermediate manufacturer of a vehicle assumes legal responsibility for...

  5. A comparison of two typical multicyclic models used to forecast the world's conventional oil production

    International Nuclear Information System (INIS)

    Wang Jianliang; Feng Lianyong; Zhao Lin; Snowden, Simon; Wang Xu

    2011-01-01

    This paper introduces two typical multicyclic models: the Hubbert model and the Generalized Weng model. The solution process for each model is expounded, providing the basis for an empirical analysis of the world's conventional oil production. The results for both show that the world's conventional oil (crude+NGLs) production will reach its peak in 2011 with a production of 30 billion barrels (Gb). In addition, the forecasting performance of the two models, given the same URR, is compared, and the intrinsic characteristics of the two models are analyzed. This demonstrates that, for specific criteria, the multicyclic Generalized Weng model is an improvement on the multicyclic Hubbert model. Finally, based upon the resultant forecast for the world's conventional oil, some suggestions are proposed for China's policy makers. - Highlights: ► The Hubbert model and the Generalized Weng model are introduced and compared in this article. ► We summarize each model's characteristics, scope and conditions of applicability. ► We obtain the same peak production and peak timing for the world's oil by applying the two models. ► The multicyclic Generalized Weng model is shown to be slightly better than the Hubbert model.
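A multicyclic Hubbert model expresses total production as a sum of single-cycle curves, P(t) = Σ 2·P_m,i / (1 + cosh((t − t_m,i)/w_i)), each cycle peaking at P_m,i in year t_m,i. The sketch below uses illustrative parameters, not the paper's fitted values:

```python
import math

def hubbert_cycle(t, p_max, t_peak, width):
    """Single Hubbert cycle: P(t) = 2*p_max / (1 + cosh((t - t_peak)/width)).

    The curve is symmetric about t_peak, where it attains p_max.
    """
    return 2.0 * p_max / (1.0 + math.cosh((t - t_peak) / width))

def multicyclic_production(t, cycles):
    """Multicyclic Hubbert model: total production is the sum of cycles,
    each given as a (p_max, t_peak, width) tuple. Parameters are
    illustrative placeholders, not fitted values."""
    return sum(hubbert_cycle(t, p, tp, w) for p, tp, w in cycles)
```

Fitting then amounts to choosing the number of cycles and estimating each (p_max, t_peak, width) subject to the cumulative production integrating to the assumed URR.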

  6. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    KAUST Repository

    Kalenderski, Stoitchko

    2013-02-20

    We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ~2.4 Tg day-1 and ~1.5 Tg day-1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3-4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m-2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea.

  7. New photoionization models of intergalactic clouds

    Science.gov (United States)

    Donahue, Megan; Shull, J. M.

    1991-01-01

    New photoionization models of optically thin low-density intergalactic gas at constant pressure, photoionized by QSOs, are presented. All ion stages of H, He, C, N, O, Si, and Fe, plus H2, are modeled, and the column density ratios of clouds at specified values of the ionization parameter n_gamma/n_H and cloud metallicity are predicted. If Ly-alpha clouds are much cooler than the previously assumed value, 30,000 K, the ionization parameter must be very low, even with the cooling contribution of a trace component of molecules. If the clouds cool below 6000 K, their final equilibrium must be below 3000 K, owing to the lack of a stable phase between 6000 and 3000 K. If it is assumed that the clouds are being irradiated by an EUV power-law continuum typical of QSOs, with J0 = 10^-21 ergs s^-1 cm^-2 Hz^-1, typical cloud thicknesses along the line of sight are derived that are much smaller than would be expected from shocks, thermal instabilities, or gravitational collapse.

  8. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
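The key idea, specifying the error distribution explicitly rather than assuming normality, can be sketched as a likelihood in which the error family is itself a modeling choice; a heavy-tailed Student-t penalizes outlying residuals far less than the normal. This is an illustration of the principle, not the paper's SAS/MCMC code, and the df/scale values are assumptions:

```python
import math

def growth_loglik(times, ys, intercept, slope, scale, error="normal", df=4.0):
    """Log-likelihood of a linear growth curve y = intercept + slope*t + e,
    with the error distribution specified explicitly.

    error="normal": Gaussian errors with standard deviation `scale`.
    error="t":      Student-t errors with `df` degrees of freedom (heavy tails).
    """
    ll = 0.0
    for t, y in zip(times, ys):
        r = (y - intercept - slope * t) / scale
        if error == "normal":
            ll += -0.5 * r * r - math.log(scale) - 0.5 * math.log(2 * math.pi)
        else:  # Student-t log-density of the standardized residual
            ll += (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
                   - 0.5 * math.log(df * math.pi) - math.log(scale)
                   - (df + 1) / 2 * math.log(1 + r * r / df))
    return ll
```

In a Bayesian fit this likelihood would be combined with priors and sampled via MCMC; the t-error version assigns a higher likelihood than the normal when the data contain outliers, which is why the error specification matters for inference.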

  9. On the sources of technological change: What do the models assume?

    International Nuclear Information System (INIS)

    Clarke, Leon; Weyant, John; Edmonds, Jae

    2008-01-01

    It is widely acknowledged that technological change can substantially reduce the costs of stabilizing atmospheric concentrations of greenhouse gases. This paper discusses the sources of technological change and the representations of these sources in formal models of energy and the environment. The paper distinguishes between three major sources of technological change (R&D, learning-by-doing and spillovers) and introduces a conceptual framework for linking modeling approaches to assumptions about these real-world sources. A selective review of modeling approaches, including those employing exogenous technological change, suggests that most formal models have meaningful real-world interpretations that focus on a subset of possible sources of technological change while downplaying the roles of others.

  10. Estimating option values of solar radiation management assuming that climate sensitivity is uncertain.

    Science.gov (United States)

    Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa

    2016-05-24

    Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could only be deployed with a limited degree of cooling (0.5 °C) only after 2050, when climate sensitivity uncertainty is assumed to be resolved and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability, which would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range of $2.5 to $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion.

  11. Multiphase Microfluidics The Diffuse Interface Model

    CERN Document Server

    2012-01-01

    Multiphase flows are typically described assuming that the different phases are separated by a sharp interface, with appropriate boundary conditions. This approach breaks down whenever the lengthscale of the phenomenon being studied is comparable with the real interface thickness, as happens, for example, in the coalescence and breakup of bubbles and drops, the wetting and dewetting of solid surfaces and, in general, in micro-devices. The diffuse interface model resolves these problems by assuming that all quantities can vary continuously, so that interfaces have a non-zero thickness, i.e. they are "diffuse". The contributions in this book review the theory and describe some relevant applications of the diffuse interface model for one-component, two-phase fluids and for liquid binary mixtures, to model multiphase flows in confined geometries.

  12. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    Directory of Open Access Journals (Sweden)

    S. Kalenderski

    2013-02-01

    Full Text Available We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ~2.4 Tg day−1 and ~1.5 Tg day−1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3–4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m−2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea.

  13. Utility-maximizing model of household time use for independent, shared, and allocated activities incorporating group decision mechanisms

    NARCIS (Netherlands)

    Zhang, J.; Timmermans, H.J.P.; Borgers, A.W.J.

    2002-01-01

    Existing activity-based models of transport demand typically assume an individual decision-making process. The focus on theories of individual decision making may be partially due to the lack of behaviorally oriented modeling methodologies for group decision making. Therefore, an attempt has been

  14. The work-averse cyber attacker model : theory and evidence from two million attack signatures

    NARCIS (Netherlands)

    Allodi, L.; Massacci, F.; Williams, J.

    The typical cyber attacker is assumed to be all powerful and to exploit all possible vulnerabilities. In this paper we present, and empirically validate, a novel and more realistic attacker model. The intuition of our model is that an attacker will optimally choose whether to act and weaponize a new

  15. Modeling the Effects of Viscosity and Thermal Conduction on Acoustic Propagation in Rigid Tubes with Various Cross-Sectional Shapes

    DEFF Research Database (Denmark)

    Christensen, René

    2011-01-01

    When modeling acoustics with viscothermal effects included, typically of importance for narrow tubes and slits, one can often use the so-called low reduced frequency model. With this model a characteristic length is assumed for which the sound pressure is constant. For example, for a circular cylinder...

  16. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children

    Science.gov (United States)

    Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and which requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children’s category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults. PMID:27322371

  17. Modelling object typicality in description logics - [Workshop on Description Logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-07-01

    Full Text Available than those not in C. This is a technical construction which allows us to order the entire domain, instead of only the members of C. This leads us to take as starting point a finite set of preference orders f j : j 2 J g on objects in the application... domain, with index set J . If j prefers any object in C to any object outside of C, we call j a C-order. To ensure that the subsumption relations eventually generated are rational [4, 14], we assume the preference orders to be a modular partial...

  18. What is typical is good: The influence of face typicality on perceived trustworthiness

    NARCIS (Netherlands)

    Sofer, C.; Dotsch, R.; Wigboldus, D.H.J.; Todorov, A.T.

    2015-01-01

    The role of face typicality in face recognition is well established, but it is unclear whether face typicality is important for face evaluation. Prior studies have focused mainly on typicality's influence on attractiveness, although recent studies have cast doubt on its importance for attractiveness

  19. Assuming measurement invariance of background indicators in international comparative educational achievement studies: a challenge for the interpretation of achievement differences

    Directory of Open Access Journals (Sweden)

    Heike Wendt

    2017-03-01

    Full Text Available Abstract Background Large-scale cross-national studies designed to measure student achievement use different social, cultural, economic and other background variables to explain observed differences in that achievement. Prior to their inclusion into a prediction model, these variables are commonly scaled into latent background indices. To allow cross-national comparisons of the latent indices, measurement invariance is assumed. However, it is unclear whether the assumption of measurement invariance has some influence on the results of the prediction model, thus challenging the reliability and validity of cross-national comparisons of predicted results. Methods To establish the effect size attributed to different degrees of measurement invariance, we rescaled the ‘home resource for learning index’ (HRL) for the 37 countries (n = 166,709 students) that participated in the IEA’s combined ‘Progress in International Reading Literacy Study’ (PIRLS) and ‘Trends in International Mathematics and Science Study’ (TIMSS) assessments of 2011. We used (a) two different measurement models [one-parameter model (1PL) and two-parameter model (2PL)] with (b) two different degrees of measurement invariance, resulting in four different models. We introduced the different HRL indices as predictors in a generalized linear mixed model (GLMM) with mathematics achievement as the dependent variable. We then compared three outcomes across countries and by scaling model: (1) the differing fit-values of the measurement models, (2) the estimated discrimination parameters, and (3) the estimated regression coefficients. Results The least restrictive measurement model fitted the data best, and the degree of assumed measurement invariance of the HRL indices influenced the random effects of the GLMM in all but one country. For one-third of the countries, the fixed effects of the GLMM also related to the degree of assumed measurement invariance. Conclusion The

  20. A Method for The Assessing of Reliability Characteristics Relevant to an Assumed Position-Fixing Accuracy in Navigational Positioning Systems

    Directory of Open Access Journals (Sweden)

    Specht Cezary

    2016-09-01

    Full Text Available This paper presents a method which makes it possible to determine reliability characteristics of navigational positioning systems, relevant to an assumed value of permissible error in position fixing. The method allows calculation of the availability, reliability and operational continuity of a position-fixing system for an assumed position-fixing accuracy, determined on the basis of formal requirements, both worldwide and national. The proposed mathematical model makes it possible for any navigational positioning system to satisfy not only the position-fixing accuracy requirements of a given navigational application (air, sea or land traffic) but also the remaining characteristics associated with the technical serviceability of the system.

  1. A New Concept for Counter-Checking of Assumed CPM Pairs

    Science.gov (United States)

    Knapp, Wilfried; Nanson, John

    2017-01-01

    The inflation of “newly discovered” CPM pairs makes it necessary to develop a solid concept for counter-checking assumed CPM pairs, with the goal of identifying false positives. Such a concept is presented in this report.

  2. Relativistic quarkonium model with retardation effect, 1

    International Nuclear Information System (INIS)

    Ito, Hitoshi

    1990-01-01

    A new relativistic two-body equation is proposed which has the charge-conjugation symmetry. The renormalization of the wave function at the origin (WFO) is done by incorporating the corresponding vertex equation. By using this model, the heavy-quarkonium phenomenology is developed with emphasis on the short-distance interaction. The typical scale of the distance restricting the applicability of the ladder model for the mass spectra is found to be 0.13 fm. By assuming the equivalent high-momentum cutoff for the gluonic correction, good results are obtained for the charmonium masses. The improved fine-splittings of the bb-bar states are obtained by inclusion of the retardation. Leptonic decay rates are predicted by assuming the renormalized WFO reduced by another high-momentum cutoff. (author)

  3. Accessibility versus accuracy in retrieving spatial memory: evidence for suboptimal assumed headings.

    Science.gov (United States)

    Yerramsetti, Ashok; Marchette, Steven A; Shelton, Amy L

    2013-07-01

    Orientation dependence in spatial memory has often been interpreted in terms of accessibility: Object locations are encoded relative to a reference orientation that affords the most accurate access to spatial memory. An open question, however, is whether people naturally use this "preferred" orientation whenever recalling the space. We tested this question by asking participants to locate buildings on a familiar campus from various imagined locations, without specifying the heading to be assumed. We then used these pointing judgments to infer the approximate heading participants assumed at each location. Surprisingly, each location showed a unique assumed heading that was consistent across participants and seemed to reflect episodic or visual properties of the space. This result suggests that although locations are encoded relative to a reference orientation, other factors may influence how people choose to access the stored information and whether they appeal to long-term spatial memory or other more sensory-based stores. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  4. A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)

    Science.gov (United States)

    Arenson, Ethan A.; Karabatsos, George

    2017-01-01

    Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
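
    The monotone assumption this record relaxes can be illustrated with the standard two-parameter logistic (2PL) item characteristic curve; the parameter values below are illustrative, and this is a sketch of the conventional model, not the proposed Bayesian nonparametric one:

```python
import math

def icc_2pl(theta, a, b):
    """Probability of a correct response under the 2PL logistic model:
    P(theta) = 1 / (1 + exp(-a * (theta - b))),
    a strictly monotone function of person ability theta, with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Strict monotonicity in ability is exactly the restriction that can be
# overly strong for real item response data:
probs = [icc_2pl(t, a=1.2, b=0.0) for t in (-2.0, -1.0, 0.0, 1.0, 2.0)]
assert all(p1 < p2 for p1, p2 in zip(probs, probs[1:]))
```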

  5. Research on the recycling industry development model for typical exterior plastic components of end-of-life passenger vehicle based on the SWOT method.

    Science.gov (United States)

    Zhang, Hongshen; Chen, Ming

    2013-11-01

    In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The recycling industry was found to respond well to all the factors and it was found to face good developing opportunities. Then, the cross-link strategies analysis for the typical exterior parts of the passenger car industry of China was conducted based on the SWOT analysis strategies and established SWOT matrix. Finally, based on the aforementioned research, the recycling industry model led by automobile manufacturers was promoted. Copyright © 2013 Elsevier Ltd. All rights reserved.
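
    The internal/external factor evaluation step used in the record above reduces to a weighted scoring rule. A minimal sketch follows; the factors, weights and ratings are hypothetical illustrations, not the study's data:

```python
def ife_score(factors):
    """Internal Factor Evaluation (IFE) matrix total: each factor carries a
    weight (weights sum to 1) and a rating from 1 (major weakness) to
    4 (major strength). A weighted total above 2.5 conventionally indicates
    a strong internal position."""
    total_weight = sum(w for w, _ in factors)
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * r for w, r in factors)

# Hypothetical factors for a recycling-industry assessment:
factors = [(0.30, 4),  # strength: manufacturer-led collection network
           (0.25, 3),  # strength: policy support
           (0.25, 2),  # weakness: sorting and dismantling costs
           (0.20, 1)]  # weakness: low scrap plastic prices
score = ife_score(factors)
assert 1.0 <= score <= 4.0
```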

  6. 13 CFR 120.1718 - SBA's right to assume Seller's responsibilities.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false SBA's right to assume Seller's responsibilities. 120.1718 Section 120.1718 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Establishment of SBA Secondary Market Guarantee Program for First Lien Position 504 Loan Pools...

  7. Assumed Probability Density Functions for Shallow and Deep Convection

    OpenAIRE

    Steven K Krueger; Peter A Bogenschutz; Marat Khairoutdinov

    2010-01-01

    The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PD...

  8. Jobs, sex, love and lifestyle: when nonstutterers assume the roles of stutterers.

    Science.gov (United States)

    Zhang, Jianliang; Saltuklaroglu, Tim; Hough, Monica; Kalinowski, Joseph

    2009-01-01

    This study assessed the impact of stuttering via a questionnaire in which fluent individuals were asked to assume the mindset of persons who stutter (PWS) in various life aspects, including vocation, romance, daily activities, friends/social life, family and general lifestyle. The perceived impact of stuttering through the mind's eyes of nonstutterers is supposed to reflect respondents' abilities to impart 'theory of mind' in addressing social penalties related to stuttering. Ninety-one university students answered a questionnaire containing 56 statements on a 7-point Likert scale. Forty-four participants (mean age = 20.4, SD = 4.4) were randomly selected to assume a stuttering identity and 47 respondents (mean age = 20.5, SD = 3.1) to assume their normally fluent identity. Significant differences between groups were found in more than two thirds of items regarding employment, romance, and daily activities, and in fewer than half of items regarding family, friend/social life, and general lifestyle. These findings suggest that fluent speakers, when assuming the role of PWS, are capable of at least temporarily feeling the negative impact of stuttering. Copyright 2008 S. Karger AG, Basel.

  9. Investigating the probability of detection of typical cavity shapes through modelling and comparison of geophysical techniques

    Science.gov (United States)

    James, P.

    2011-12-01

    With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment and their detection is an ever important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard in the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well known techniques rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge, no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near surface cavity shapes are modelled including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance
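
    A buried-sphere approximation gives a quick feel for the detectability comparison described above. The sketch below computes the peak microgravity anomaly of a spherical cavity; the geometry and density contrast are illustrative assumptions, not values from the study:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def cavity_gravity_anomaly(radius_m, depth_m, delta_rho=-2000.0):
    """Peak vertical gravity anomaly directly above a spherical cavity,
    modelled as a buried sphere of density contrast delta_rho (kg/m^3):
        g_z = G * (4/3) * pi * R^3 * delta_rho / z^2
    Returns the anomaly in microGal (1 microGal = 1e-8 m/s^2)."""
    mass_deficit = (4.0 / 3.0) * math.pi * radius_m ** 3 * delta_rho
    g_si = G * mass_deficit / depth_m ** 2  # m/s^2
    return g_si * 1e8

# The anomaly falls off with the square of depth, so doubling the depth
# quarters the peak signal -- one reason detection limits matter:
shallow = cavity_gravity_anomaly(radius_m=2.0, depth_m=5.0)
deep = cavity_gravity_anomaly(radius_m=2.0, depth_m=10.0)
assert abs(shallow / deep - 4.0) < 1e-9
```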

  10. PTL: A Propositional Typicality Logic

    CSIR Research Space (South Africa)

    Booth, R

    2012-09-01

    Full Text Available consequence relations first studied by Lehmann and colleagues in the 90's play a central role in nonmonotonic reasoning [13, 14]. This has been the case due to at least three main reasons. Firstly, they are based on semantic constructions that are elegant... The semantics of (propositional) rational consequence is in terms of ranked models. These are partially ordered structures in which the ordering is modular. Definition 1. Given a set S...

  11. 24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Is an Indian tribe required to assume environmental review responsibilities? 1000.20 Section 1000.20 Housing and Urban Development... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is an...

  12. Generation of typical solar radiation data for different climates of China

    International Nuclear Information System (INIS)

    Zang, Haixiang; Xu, Qingshan; Bian, Haihong

    2012-01-01

    In this study, typical solar radiation data are generated from both measured data and synthetic generation for 35 stations in six different climatic zones of China. (1) By applying the measured weather data during at least 10 years from 1994 to 2009, typical meteorological years (TMYs) for 35 cities are generated using the Finkelstein–Schafer statistical method. The cumulative distribution function (CDF) of daily global solar radiation (DGSR) for each year is compared with the CDF of DGSR for the long-term years in six different climatic stations (Sanya, Shanghai, Zhengzhou, Harbin, Mohe and Lhasa). The daily global solar radiation as typical data obtained from the TMYs are presented in the Table. (2) Based on the recorded global radiation data from at least 10 years, a new daily global solar radiation model is developed with a sine and cosine wave (SCW) equation. The results of the proposed model and other empirical regression models are compared with measured data using different statistical indicators. It is found that solar radiation data, calculated by the new model, are superior to those from other empirical models at six typical climatic zones. In addition, the novel SCW model is tested and applied for 35 stations in China. -- Highlights: ► Both TMY method and synthetic generation are used to generate solar radiation data. ► The latest and accurate long term weather data in six different climates are applied. ► TMYs using new weighting factors of 8 weather indices for 35 regions are obtained. ► A new sine and cosine wave model is proposed and utilized for 35 major stations. ► Both TMY method and the proposed regression model perform well on monthly bases.
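
    The Finkelstein–Schafer selection step compares the empirical CDF of a candidate period against the long-term CDF; the period whose CDF lies closest is the "typical" one. A minimal sketch of the statistic follows, with illustrative data, not the authors' implementation:

```python
def empirical_cdf(sample, x):
    """Fraction of values in `sample` that are <= x."""
    return sum(v <= x for v in sample) / len(sample)

def finkelstein_schafer(long_term, candidate):
    """Finkelstein-Schafer statistic: the mean absolute difference between
    the long-term CDF and the candidate-period CDF, evaluated at each
    candidate observation. Smaller values mean a more typical period."""
    return sum(abs(empirical_cdf(long_term, x) - empirical_cdf(candidate, x))
               for x in candidate) / len(candidate)

# A candidate whose distribution matches the long-term record scores lower:
long_term = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
close = [2, 4, 6, 8, 10]
far = [9, 9, 10, 10, 10]
assert finkelstein_schafer(long_term, close) < finkelstein_schafer(long_term, far)
```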

  13. Plutonium-239 production rate study using a typical fusion reactor

    International Nuclear Information System (INIS)

    Faghihi, F.; Havasi, H.; Amin-Mozafari, M.

    2008-01-01

    The purpose of the present paper is to compute the fissile 239Pu produced by the operation of a supposed typical fusion reactor, to meet fuel requirements for other purposes (e.g. MOX fissile fuel). The fusion reactor is assumed to have a cylindrical geometry and to use uniformly distributed deuterium-tritium fuel, with a neutron wall load of 10 MW/m2. Moreover, the reactor core is surrounded by six suggested blankets to make the best use of the physical conditions described herein. We determined the neutron flux in each considered blanket, as well as tritium self-sufficiency, using a two-group neutron energy structure; the computation is then carried out with the MCNP-4C code. Finally, material depletion according to a set of dynamical coupled differential equations is solved to estimate the 239Pu production rate. The produced 239Pu is compared with that of two typical fission reactors to assess the plutonium breeding performance of the fusion reactor. We found that 0.92% of the initial U is converted into fissile Pu by our suggested fusion reactor with a thermal power of 3000 MW. For comparison, the 239Pu yield of a suggested large-scale PWR is about 0.65% and that of an LMFBR is close to 1.7%. The results show that the fusion reactor has an acceptable efficiency for Pu production compared with a large-scale PWR fission reactor type
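
    The depletion step can be sketched as a toy two-nuclide chain solved with forward Euler; the one-group cross sections, the flux, and the lumping of the intermediate U-239/Np-239 decays are illustrative assumptions, not the paper's actual model:

```python
def pu239_production(n_u238, flux, t_seconds, dt,
                     sigma_c_u=2.7e-24, sigma_a_pu=1.0e-21):
    """Toy two-nuclide depletion chain (cross sections in cm^2, flux in
    n/cm^2/s; intermediate decays lumped into the capture step):
        dN_U/dt  = -sigma_c_u * phi * N_U
        dN_Pu/dt = +sigma_c_u * phi * N_U - sigma_a_pu * phi * N_Pu
    Integrated with forward Euler over t_seconds in steps of dt."""
    n_pu = 0.0
    for _ in range(int(t_seconds / dt)):
        capture = sigma_c_u * flux * n_u238       # U-238 captures per second
        n_u238 -= capture * dt
        n_pu += (capture - sigma_a_pu * flux * n_pu) * dt
    return n_u238, n_pu

# One year of irradiation of 1e24 U-238 atoms at an illustrative flux:
n_u, n_pu = pu239_production(n_u238=1.0e24, flux=1.0e14,
                             t_seconds=3.15e7, dt=1.0e4)
assert 0.0 < n_pu < 1.0e24 and n_u < 1.0e24
```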

  14. IMPORTANCE OF TEMPERATURE IN MODELLING PCB BIOACCUMULATION IN THE LAKE MICHIGAN FOOD WEB

    Science.gov (United States)

    In most food web models, the exposure temperature of a food web is typically defined using a single spatial compartment. This essentially assumes that the predator and prey are exposed to the same temperature. However, in a large water body such as Lake Michigan, due to the spati...

  15. Some notes on problematic issues in DSGE models

    Directory of Open Access Journals (Sweden)

    Slanicay Martin

    2016-01-01

    Full Text Available We review some of the problematic issues in DSGE models, which are currently much discussed in the economics profession. All of these issues are concerned with the DSGE models’ (in)ability to match aspects of macroeconomic variables’ observed behaviour. The optimizing agents framework implies that Ricardian equivalence typically holds, which is clearly at odds with the empirical evidence. A distinguishing feature of DSGE models is the assumption that structural parameters are invariant to policy changes. We argue that not all of them can be considered independent from economic policy. It is typical for DSGE models that agents form rational expectations, which can be considered unrealistic. The typical procedure for estimating a DSGE model is to use revised data. As some empirical studies suggest, a model’s behaviour may be different if real-time data are considered. It is also usually assumed that the monetary authority uses the interest rate as a tool of monetary policy. Nowadays, nominal interest rates are close to zero in many economies and cannot be lowered further.

  16. Asynchronous variational integration using continuous assumed gradient elements.

    Science.gov (United States)

    Wolff, Sebastian; Bucher, Christian

    2013-03-01

    Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) (Liu, 2009 [1]), exemplified by continuous assumed gradient elements (Wolff and Bucher, 2011 [2]). The article presents the main ideas of the modified AVI, gives implementation notes and a recipe for estimating the critical time step.

  17. How typical are 'typical' tremor characteristics? : Sensitivity and specificity of five tremor phenomena

    NARCIS (Netherlands)

    van der Stouwe, A. M. M.; Elting, J. W.; van der Hoeven, J. H.; van Laar, T.; Leenders, K. L.; Maurits, N. M.; Tijssen, M. Aj.

    Introduction: Distinguishing between different tremor disorders can be challenging. Some tremor disorders are thought to have typical tremor characteristics: the current study aims to provide sensitivity and specificity for five 'typical' tremor phenomena. Methods: Retrospectively, we examined 210

  18. Simplified CFD model of coolant channels typical of a plate-type fuel element: an exhaustive verification of the simulations

    Energy Technology Data Exchange (ETDEWEB)

    Mantecón, Javier González; Mattar Neto, Miguel, E-mail: javier.mantecon@ipen.br, E-mail: mmattar@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The use of parallel plate-type fuel assemblies is common in nuclear research reactors. One of the main problems of this fuel element configuration is the hydraulic instability of the plates caused by the high flow velocities. The current work is focused on the hydrodynamic characterization of coolant channels typical of a flat-plate fuel element, using a numerical model developed with the commercial code ANSYS CFX. Numerical results are compared to accurate analytical solutions, considering two turbulence models and three different fluid meshes. For this study, the results demonstrated that the most suitable turbulence model is the k-ε model. The discretization error is estimated using the Grid Convergence Index method. Despite its simplicity, this model generates precise flow predictions. (author)
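
    The discretization-error estimate mentioned above can be sketched using Roache's standard three-grid Grid Convergence Index formulation with a safety factor of 1.25; the solution values below are illustrative, not results from the study:

```python
import math

def gci_fine(f1, f2, f3, r, Fs=1.25):
    """Roache's Grid Convergence Index for three solutions f1 (fine),
    f2 (medium), f3 (coarse) obtained with a constant mesh refinement
    ratio r. Returns (GCI for the fine grid, observed order p)."""
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)  # observed order
    e21 = abs((f2 - f1) / f1)                                # relative error
    return Fs * e21 / (r ** p - 1.0), p

# Illustrative monotonically converging solutions on grids refined by r = 2:
gci, p = gci_fine(f1=1.000, f2=1.010, f3=1.050, r=2.0)
assert abs(p - 2.0) < 1e-9   # error ratio 4 at r = 2 implies second order
assert gci < 0.01            # below 1% for this converging sequence
```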

  19. Simplified CFD model of coolant channels typical of a plate-type fuel element: an exhaustive verification of the simulations

    International Nuclear Information System (INIS)

    Mantecón, Javier González; Mattar Neto, Miguel

    2017-01-01

    The use of parallel plate-type fuel assemblies is common in nuclear research reactors. One of the main problems of this fuel element configuration is the hydraulic instability of the plates caused by the high flow velocities. The current work is focused on the hydrodynamic characterization of coolant channels typical of a flat-plate fuel element, using a numerical model developed with the commercial code ANSYS CFX. Numerical results are compared to accurate analytical solutions, considering two turbulence models and three different fluid meshes. For this study, the results demonstrated that the most suitable turbulence model is the k-ε model. The discretization error is estimated using the Grid Convergence Index method. Despite its simplicity, this model generates precise flow predictions. (author)

  20. What Is Typical Is Good : The Influence of Face Typicality on Perceived Trustworthiness

    NARCIS (Netherlands)

    Sofer, Carmel; Dotsch, Ron; Wigboldus, Daniel H J; Todorov, Alexander

    2015-01-01

    The role of face typicality in face recognition is well established, but it is unclear whether face typicality is important for face evaluation. Prior studies have focused mainly on typicality’s influence on attractiveness, although recent studies have cast doubt on its importance for attractiveness

  1. Pervasive drought legacies in forest ecosystems and their implications for carbon cycle models

    Science.gov (United States)

    W. R. L. Anderegg; C. Schwalm; F. Biondi; J. J. Camarero; G. Koch; M. Litvak; K. Ogle; J. D. Shaw; E. Shevliakova; A. P. Williams; A. Wolf; E. Ziaco; S. Pacala

    2015-01-01

    The impacts of climate extremes on terrestrial ecosystems are poorly understood but important for predicting carbon cycle feedbacks to climate change. Coupled climate-carbon cycle models typically assume that vegetation recovery from extreme drought is immediate and complete, which conflicts with the understanding of basic plant physiology. We examined the recovery of...

  2. Plutonium-239 production rate study using a typical fusion reactor

    Energy Technology Data Exchange (ETDEWEB)

    Faghihi, F. [Research Center for Radiation Protection, Shiraz University, Shiraz (Iran, Islamic Republic of)], E-mail: faghihif@shirazu.ac.ir; Havasi, H.; Amin-Mozafari, M. [Department of Nuclear Engineering, School of Engineering, Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of)

    2008-05-15

    The purpose of the present paper is to compute the fissile {sup 239}Pu produced by the operation of a supposed typical fusion reactor, to meet fuel requirements for other purposes (e.g. MOX fissile fuel). The fusion reactor is assumed to have a cylindrical geometry and to use uniformly distributed deuterium-tritium fuel, with a neutron wall load of 10 MW/(m{sup 2}). Moreover, the reactor core is surrounded by six suggested blankets to make the best use of the physical conditions described herein. We determined the neutron flux in each considered blanket, as well as tritium self-sufficiency, using a two-group neutron energy structure; the computation is then carried out with the MCNP-4C code. Finally, material depletion according to a set of dynamical coupled differential equations is solved to estimate the {sup 239}Pu production rate. The produced {sup 239}Pu is compared with that of two typical fission reactors to assess the plutonium breeding performance of the fusion reactor. We found that 0.92% of the initial U is converted into fissile Pu by our suggested fusion reactor with a thermal power of 3000 MW. For comparison, the {sup 239}Pu yield of a suggested large-scale PWR is about 0.65% and that of an LMFBR is close to 1.7%. The results show that the fusion reactor has an acceptable efficiency for Pu production compared with a large-scale PWR fission reactor type.

  3. MASADA: A MODELING AND SIMULATION AUTOMATED DATA ANALYSIS FRAMEWORK FOR CONTINUOUS DATA-INTENSIVE VALIDATION OF SIMULATION MODELS

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system’s performance in search of improvements. Also, research questions change as systems’ operational conditions vary throughout its lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...

  4. MASADA: A Modeling and Simulation Automated Data Analysis framework for continuous data-intensive validation of simulation models

    CERN Document Server

    Foguelman, Daniel Jacob; The ATLAS collaboration

    2016-01-01

    Complex networked computer systems are usually subjected to upgrades and enhancements on a continuous basis. Modeling and simulation of such systems helps with guiding their engineering processes, in particular when testing candidate design alternatives directly on the real system is not an option. Models are built and simulation exercises are run guided by specific research and/or design questions. A vast amount of operational conditions for the real system need to be assumed in order to focus on the relevant questions at hand. A typical boundary condition for computer systems is the exogenously imposed workload. Meanwhile, in typical projects huge amounts of monitoring information are logged and stored with the purpose of studying the system’s performance in search of improvements. Also, research questions change as systems’ operational conditions vary throughout its lifetime. This context poses many challenges to determine the validity of simulation models. As the behavioral empirical base of the sys...

  5. The effects of typical and atypical antipsychotics on the electrical activity of the brain in a rat model

    Directory of Open Access Journals (Sweden)

    Oytun Erbaş

    2013-09-01

    Full Text Available Objective: Antipsychotic drugs are known to have a strong effect on the bioelectric activity of the brain. However, some studies addressing the changes on electroencephalography (EEG) caused by typical and atypical antipsychotic drugs are conflicting. We aimed to compare the effects of typical and atypical antipsychotics on the electrical activity of the brain via EEG recordings in a rat model. Methods: Thirty-two Sprague Dawley adult male rats were used in the study. The rats were randomly divided into five groups (n=7 for each group). The first group was used as the control group and administered 1 ml/kg saline intraperitoneally (IP). Haloperidol (1 mg/kg) (group 2), chlorpromazine (5 mg/kg) (group 3), olanzapine (1 mg/kg) (group 4) and ziprasidone (1 mg/kg) (group 5) were injected IP for five consecutive days. Then, EEG recordings of each group were taken for 30 minutes. Results: The percentages of delta and theta waves in the haloperidol, chlorpromazine, olanzapine and ziprasidone groups showed a highly significant difference compared with the saline group (p<0.001). The theta waves in the olanzapine and ziprasidone groups were increased compared with the haloperidol and chlorpromazine groups (p<0.05). Conclusion: Typical and atypical antipsychotic drugs may be a risk factor for EEG abnormalities. This study shows that antipsychotic drugs should be used with caution. J Clin Exp Invest 2013; 4 (3): 279-284. Key words: Haloperidol, chlorpromazine, olanzapine, ziprasidone, EEG, rat

  6. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    OpenAIRE

    Naikwad, S. N.; Dudul, S. V.

    2009-01-01

    A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. Continuous stirred tank reactor exhibits complex nonlinear operations where reaction is exothermic. It is noticed from literature review that process control of CSTR using neuro-fuzzy systems was attempted by many, but optimal neural network model for identification of CSTR process is not yet available. As CSTR process includes tempora...

  7. Is our Universe typical?

    International Nuclear Information System (INIS)

    Gurzadyan, V.G.

    1988-01-01

    The problem of the typicalness of the Universe - as a dynamical system possessing both regular and chaotic regions of positive measure in phase space - is raised and discussed. Two dynamical systems are considered: 1) the observed Universe as a hierarchy of systems of N gravitating bodies; 2) a (3+1)-manifold with matter evolving according to the Wheeler-DeWitt equation in superspace with the Hawking boundary condition of compact metrics. It is shown that the observed Universe is typical. There is no unambiguous answer for the second system yet. If it is typical too, then the same present state of the Universe could have originated from an infinite number of different initial conditions, the restoration of which is practically impossible at present. 35 refs.; 2 refs

  8. Sensitivity of the Speech Intelligibility Index to the Assumed Dynamic Range

    Science.gov (United States)

    Jin, In-Ki; Kates, James M.; Arehart, Kathryn H.

    2017-01-01

    Purpose: This study aims to evaluate the sensitivity of the speech intelligibility index (SII) to the assumed speech dynamic range (DR) in different languages and with different types of stimuli. Method: Intelligibility prediction uses the absolute transfer function (ATF) to map the SII value to the predicted intelligibility for a given stimuli.…

  9. Typicals/Típicos

    Directory of Open Access Journals (Sweden)

    Silvia Vélez

    2004-01-01

    Full Text Available Typicals is a series of 12 colour photographs digitally created from photojournalistic images from Colombia combined with "typical" craft textiles and text from guest writers. Typicals was first exhibited as photographs 50cm x 75cm in size, each with their own magnifying glass, at the Contemporary Art Space at Gorman House in Canberra, Australia, in 2000. It was then exhibited in "Feedback: Art Social Consciousness and Resistance" at Monash University Museum of Art in Melbourne, Australia, from March to May 2003. From May to June 2003 it was exhibited at the Museo de Arte de la Universidad Nacional de Colombia Santa Fé Bogotá, Colombia. In its current manifestation the artwork has been adapted from the catalogue of the museum exhibitions. It is broken up into eight pieces corresponding to the contributions of the writers. The introduction by Sylvia Vélez is the PDF file accessible via a link below this abstract. The other seven PDF files are accessible via the 'Supplementary Files' section to the left of your screen. Please note that these files are around 4 megabytes each, so it may be difficult to access them from a dial-up connection.

  10. The impact of assumed knowledge entry standards on undergraduate mathematics teaching in Australia

    Science.gov (United States)

    King, Deborah; Cattlin, Joann

    2015-10-01

    Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who may be returning to study, or who are changing career paths but do not have particular prerequisite study, now have more flexible pathways. However, there is mounting evidence to indicate that there are also significant negative impacts associated with assumed knowledge approaches, with large numbers of students enrolling in degrees without the stated assumed knowledge. For students, there are negative impacts on pass rates and retention rates and limitations to pathways within particular degrees. For institutions, the necessity to offer additional mathematics subjects at a lower level than normal and more support services for under-prepared students impacts on workloads and resources. In this paper, we discuss early research from the First Year in Maths project, which begins to shed light on the realities of a system that may in fact be too flexible.

  11. Compositional Synthesis of Controllers from Scenario-Based Assume-Guarantee Specifications

    DEFF Research Database (Denmark)

    Greenyer, Joel; Kindler, Ekkart

    2013-01-01

    Modern software-intensive systems often consist of multiple components that interact to fulfill complex functions in sometimes safety-critical situations. During the design, it is crucial to specify the system's requirements formally and to detect inconsistencies as early as possible in order to ...... present, in this paper, a novel assume-guarantee-style compositional synthesis technique for MSD specifications. We provide evaluation results underlining the benefit of our approach and formally justify its correctness....

  12. Assumed genetic effects of low level irradiation on man

    International Nuclear Information System (INIS)

    Dutrillaux, B.

    1976-01-01

    The significance of human genetic pathology is stated and a study is made of the assumed effects of low-level ionizing radiation. The theoretical notions thus derived are compared to experimental data, which are scarce. A quick survey of the literature shows that it has not yet been possible to establish a direct relationship between an increase in exposure and any genetic effect in man. However, this must not lead to a conclusion of the harmlessness of radiation; rather, it shows how difficult such analyses are, inasmuch as the effect investigated is necessarily small. [fr]

  13. A hybrid choice model with nonlinear utility functions and bounded distributions for latent variables : application to purchase intention decisions of electric cars

    NARCIS (Netherlands)

    Kim, J.; Rasouli, S.; Timmermans, H.J.P.

    2014-01-01

    The hybrid choice model (HCM) provides a powerful framework to account for heterogeneity across decision-makers in terms of different underlying latent attitudes. Typically, effects of the latent attitudes have been represented assuming linear utility functions. In contributing to the further…

  14. On the application of cohesive crack modeling in cementitious materials

    DEFF Research Database (Denmark)

    Stang, Henrik; Olesen, John Forbes; Poulsen, Peter Noe

    2007-01-01

    Cohesive crack models, in particular the Fictitious Crack Model, are applied routinely in the analysis of crack propagation in concrete and mortar. Bridged crack models, where cohesive stresses are assumed to exist together with a stress singularity at the crack tip, on the other hand, are used typically for multi-scale problems such as crack propagation in fiber reinforced composites. Mortar and concrete, however, are multi-scale materials and the question naturally arises whether bridged crack models are in fact more suitable for concrete and mortar as well. In trying to answer this question a model…

  15. Wetware, Hardware, or Software Incapacitation: Observational Methods to Determine When Autonomy Should Assume Control

    Science.gov (United States)

    Trujillo, Anna C.; Gregory, Irene M.

    2014-01-01

    Control-theoretic modeling of a human operator's dynamic behavior in manual control tasks has a long, rich history. There has been significant work on techniques used to identify the pilot model of a given structure. This research attempts to go beyond pilot identification based on experimental data to develop a predictor of pilot behavior. Two methods for predicting pilot stick input during changing aircraft dynamics and deducing changes in pilot behavior are presented. This approach may also have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot. With this ability to detect changes in piloting behavior, the possibility now exists to mediate adverse human behaviors, hardware failures, and software anomalies with autonomy that may ameliorate these undesirable effects. However, the appropriate timing of when autonomy should assume control depends on the criticality of actions to safety, the sensitivity of methods to accurately detect these adverse changes, and the effects of changes in levels of automation of the system as a whole.

  16. Ensemble perception of emotions in autistic and typical children and adolescents

    Directory of Open Access Journals (Sweden)

    Themelis Karaminis

    2017-04-01

    Full Text Available Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary-statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: (a) an 'ensemble' emotion discrimination task; (b) a baseline (single-face) emotion discrimination task; and (c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average.

  17. Cooperative Problem-Based Learning (CPBL): A Practical PBL Model for a Typical Course

    Directory of Open Access Journals (Sweden)

    Khairiyah Mohd-Yusof

    2011-09-01

    Full Text Available Problem-Based Learning (PBL) is an inductive learning approach that uses a realistic problem as the starting point of learning. Unlike in medical education, which is more easily adaptable to PBL, implementing PBL in engineering courses in the traditional semester system set-up is challenging. While PBL is normally implemented in small groups of up to ten students with a dedicated tutor during PBL sessions in medical education, this is not plausible in engineering education because of the high enrolment and large class sizes. In a typical course, implementation of PBL with students in small groups in medium to large classes is more practical. However, this type of implementation is more difficult to monitor, and thus requires good support and guidance to ensure the commitment and accountability of each student towards learning in his/her group. To provide the required support, Cooperative Learning (CL) is identified as having the elements needed to develop small student groups into functional learning teams. Combining both CL and PBL results in a Cooperative Problem-Based Learning (CPBL) model that provides a step-by-step guide for students to go through the PBL cycle in their teams, according to CL principles. Suitable for implementation in medium to large classes (approximately 40-60 students for one floating facilitator), with small groups consisting of 3-5 students, the CPBL model is designed to develop the students in the whole class into a learning community. This paper provides a detailed description of the CPBL model. A sample implementation in a third-year Chemical Engineering course, Process Control and Dynamics, is also described.

  18. A thermodynamic model for aqueous solutions of liquid-like density

    Energy Technology Data Exchange (ETDEWEB)

    Pitzer, K.S.

    1987-06-01

    The paper describes a model for the prediction of the thermodynamic properties of multicomponent aqueous solutions and discusses its applications. The model was initially developed for solutions near room temperature, but has been found to be applicable to aqueous systems up to 300 °C or slightly higher. A liquid-like density and relatively small compressibility are assumed. A typical application is the prediction of the equilibrium between an aqueous phase (brine) and one or more solid phases (minerals). (ACR)
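
    The full Pitzer ion-interaction model involves lengthy virial expansions, so as a rough illustration of the brine-mineral equilibrium calculation this record describes, the sketch below substitutes the much simpler Davies approximation for activity coefficients. All constants, concentrations and the gypsum log K value are textbook-order-of-magnitude assumptions, not values from the paper:

    ```python
    import math

    def davies_log10_gamma(charge, ionic_strength):
        """Davies approximation to log10 of an ion's activity coefficient;
        A ~ 0.509 (kg/mol)^0.5 for water at 25 C."""
        A = 0.509
        s = math.sqrt(ionic_strength)
        return -A * charge**2 * (s / (1.0 + s) - 0.3 * ionic_strength)

    def gypsum_saturation_index(m_ca, m_so4, ionic_strength, log_ksp=-4.58):
        """SI = log10(IAP/Ksp) for gypsum (CaSO4.2H2O); SI > 0 means the
        brine is supersaturated and the mineral tends to precipitate."""
        log_iap = (math.log10(m_ca) + davies_log10_gamma(+2, ionic_strength)
                   + math.log10(m_so4) + davies_log10_gamma(-2, ionic_strength))
        return log_iap - log_ksp

    # Dilute water is far below saturation; a concentrated brine exceeds it.
    print(gypsum_saturation_index(0.001, 0.001, 0.004))  # negative (undersaturated)
    print(gypsum_saturation_index(0.05, 0.05, 0.2))      # positive (supersaturated)
    ```

    The Pitzer formulation replaces the Davies term with ion-specific binary and ternary interaction parameters, which is what extends the validity to concentrated brines and high temperatures.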

  19. A hybrid choice model with a nonlinear utility function and bounded distribution for latent variables : application to purchase intention decisions of electric cars

    NARCIS (Netherlands)

    Kim, J.; Rasouli, S.; Timmermans, H.J.P.

    2016-01-01

    The hybrid choice model (HCM) provides a powerful framework to account for heterogeneity across decision-makers in terms of different underlying latent attitudes. Typically, effects of the latent attitudes have been represented assuming linear utility functions. In contributing to the further…

  20. Sensitivity of Attitude Determination on the Model Assumed for ISAR Radar Mappings

    Science.gov (United States)

    Lemmens, S.; Krag, H.

    2013-09-01

    Inverse synthetic aperture radars (ISAR) are valuable instrumentations for assessing the state of a large object in low Earth orbit. The images generated by these radars can reach a sufficient quality to be used during launch support or contingency operations, e.g. for confirming the deployment of structures, determining the structural integrity, or analysing the dynamic behaviour of an object. However, the direct interpretation of ISAR images can be a demanding task due to the nature of the range-Doppler space in which these images are produced. Recently, a tool has been developed by the European Space Agency's Space Debris Office to generate radar mappings of a target in orbit. Such mappings are a 3D-model based simulation of how an ideal ISAR image would be generated by a ground based radar under given processing conditions. These radar mappings can be used to support a data interpretation process. E.g. by processing predefined attitude scenarios during an observation sequence and comparing them with actual observations, one can detect non-nominal behaviour. Vice versa, one can also estimate the attitude states of the target by fitting the radar mappings to the observations. It has been demonstrated for the latter use case that a coarse approximation of the target through an 3D-model is already sufficient to derive the attitude information from the generated mappings. The level of detail required for the 3D-model is determined by the process of generating ISAR images, which is based on the theory of scattering bodies. Therefore, a complex surface can return an intrinsically noisy ISAR image. E.g. when many instruments on a satellite are visible to the observer, the ISAR image can suffer from multipath reflections. In this paper, we will further analyse the sensitivity of the attitude fitting algorithms to variations in the dimensions and the level of detail of the underlying 3D model. 
Moreover, we investigate the ability to estimate the orientations of different…

  1. Accurate or Assumed: Visual Learning in Children with ASD

    Science.gov (United States)

    Trembath, David; Vivanti, Giacomo; Iacono, Teresa; Dissanayake, Cheryl

    2015-01-01

    Children with autism spectrum disorder (ASD) are often described as visual learners. We tested this assumption in an experiment in which 25 children with ASD, 19 children with global developmental delay (GDD), and 17 typically developing (TD) children were presented a series of videos via an eye tracker in which an actor instructed them to…

  2. Modeling and simulation of loss of the ultimate heat sink in a typical material testing reactor

    International Nuclear Information System (INIS)

    El-Khatib, Hisham; El-Morshedy, Salah El-Din; Higazy, Maher G.; El-Shazly, Karam

    2013-01-01

    Highlights: ► A thermal–hydraulic model has been developed to simulate loss of the ultimate heat sink in MTR. ► The model involves three coupled sub-models for core, heat exchanger and cooling tower. ► The model is validated against PARET for steady-state and verified by operation data for transients. ► The model is used to simulate the behavior of the reactor under a loss of the ultimate heat sink. ► The model results are analyzed and discussed. -- Abstract: A thermal–hydraulic model has been developed to simulate loss of the ultimate heat sink in a typical material testing reactor (MTR). The model involves three interactively coupled sub-models for reactor core, heat exchanger and cooling tower. The model is validated against PARET code for steady-state operation and verified by the reactor operation records for transients. Then, the model is used to simulate the thermal–hydraulic behavior of the reactor under a loss of the ultimate heat sink event. The simulation is performed for two operation regimes: regime I representing 11 MW power and three cooling tower cells operated, and regime II representing 22 MW power and six cooling tower cells operated. In regime I, the simulation is performed for 1, 2 and 3 cooling tower cells failed while in regime II, it is performed for 1, 2, 3, 4, 5 and 6 cooling tower cells failed. The simulation is performed under protected conditions where the safety action called power reduction is triggered by reactor protection system to decrease the reactor power by 20% when the coolant inlet temperature to the core reaches 43 °C and scram is triggered if the core inlet temperature reaches 44 °C. The model results are analyzed and discussed.

  3. Testing typicality in multiverse cosmology

    Science.gov (United States)

    Azhar, Feraz

    2015-05-01

    In extracting predictions from theories that describe a multiverse, we face the difficulty that we must assess probability distributions over possible observations prescribed not just by an underlying theory, but by a theory together with a conditionalization scheme that allows for (anthropic) selection effects. This means we usually need to compare distributions that are consistent with a broad range of possible observations with actual experimental data. One controversial means of making this comparison is by invoking the "principle of mediocrity": that is, the principle that we are typical of the reference class implicit in the conjunction of the theory and the conditionalization scheme. In this paper, we quantitatively assess the principle of mediocrity in a range of cosmological settings, employing "xerographic distributions" to impose a variety of assumptions regarding typicality. We find that for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations. If, however, one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality does not always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results support the claim that when one has the freedom to consider different combinations of theories and xerographic distributions (or different "frameworks"), one should favor the framework that has the highest posterior probability; and then from this framework one can infer, in particular, how typical we are. In this way, the invocation of the principle of mediocrity is more questionable than has been recently claimed.
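
    The framework comparison this record describes can be made concrete with toy numbers (all probabilities below are invented for illustration, not taken from the paper): a framework pairs a theory, which fixes P(observation | observer class), with a xerographic distribution encoding which class we assume we belong to.

    ```python
    def framework_likelihood(p_obs_given_class, xerographic):
        """P(our data | framework) = sum over observer classes c of
        xi(c) * P(data | c), where xi is the xerographic distribution."""
        return sum(xerographic[c] * p_obs_given_class[c] for c in xerographic)

    # One theory, two assumptions about our own typicality.
    p_obs = {"typical": 0.8, "atypical": 0.1}          # fixed by the theory
    assume_typical = {"typical": 1.0, "atypical": 0.0}
    indifferent    = {"typical": 0.5, "atypical": 0.5}

    # For a fixed theory, assuming typicality yields the higher likelihood.
    print(framework_likelihood(p_obs, assume_typical))  # 0.8
    print(framework_likelihood(p_obs, indifferent))     # 0.45

    # But when the theory is also allowed to vary, the ordering can flip:
    # a rival theory under which our data are likely only for atypical
    # observers beats the first framework without any typicality assumption.
    p_obs_rival = {"typical": 0.05, "atypical": 0.9}
    assume_atypical = {"typical": 0.0, "atypical": 1.0}
    print(framework_likelihood(p_obs_rival, assume_atypical))  # 0.9
    ```

    This mirrors the record's conclusion: one compares whole frameworks by their (posterior) support, and only then reads off how typical we are.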

  4. The spatial distribution of flocking foragers : disentangling the effects of food availability, interference and conspecific attraction by means of spatial autoregressive modeling

    NARCIS (Netherlands)

    Folmer, Eelke O.; Olff, Han; Piersma, Theunis; Robinson, Rob

    Patch choice of foraging animals is typically assumed to depend positively on food availability and negatively on interference while benefits of the co-occurrence of conspecifics tend to be ignored. In this paper we integrate a classical functional response model based on resource availability and…

  5. Monte Carlo based radial shield design of typical PWR reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.

    2017-04-15

    This paper presents the radiation shielding model of a typical PWR (CNPP-II) at Chashma, Pakistan. The model was developed using Monte Carlo N Particle code [2], equipped with ENDF/B-VI continuous energy cross section libraries. This model was applied to calculate the neutron and gamma flux and dose rates in the radial direction at core mid plane. The simulated results were compared with the reference results of Shanghai Nuclear Engineering Research and Design Institute (SNERDI).

  6. Improved Algorithm of SCS-CN Model Parameters in Typical Inland River Basin in Central Asia

    Science.gov (United States)

    Wang, Jin J.; Ding, Jian L.; Zhang, Zhe; Chen, Wen Q.

    2017-02-01

    The rainfall-runoff relationship is the most important factor for hydrological structures and for social and economic development against the background of global warming, especially in arid regions. The aim of this paper is to find a suitable method to simulate runoff in arid areas. The Soil Conservation Service Curve Number (SCS-CN) model is the most popular and widely applied model for direct runoff estimation. In this paper, we focus on the Wen-quan Basin in the source region of the Boertala River, a typical inland valley in Central Asia. For the first time, 16 m resolution imagery from the high-definition Earth observation satellite "Gaofen-1" is used to provide high-accuracy data for the land use classification that determines the curve number. A 2D scatter plot of surface temperature/vegetation index (TS/VI), combined with the soil moisture absorption balance principle, is used to calculate the moisture-holding capacity of the soil. The original SCS-CN model and a version with an improved parameter algorithm are each used to simulate runoff. The simulation results show that the improved model is better than the original: Nash-Sutcliffe efficiencies in the calibration and validation periods were 0.79 and 0.71 for the improved model versus 0.66 and 0.38 for the original, and relative errors were 3% and 12% versus 17% and 27%. This shows that simulation accuracy can be further improved and that using remote sensing information to improve the basic geographic data for a hydrological model has the following advantages: 1) remote sensing data have a planar character and are comprehensive and representative; 2) it works around the bottleneck of scarce data, providing a reference for runoff simulation in basins with similar conditions and in data-poor regions.
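
    The standard SCS-CN direct-runoff relation that this record's improved algorithm builds on can be sketched as follows (metric form; the curve number and rainfall values are illustrative, not the basin's calibrated parameters):

    ```python
    def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
        """Direct runoff Q (mm) from event rainfall P (mm) via the SCS-CN method.

        S is the potential maximum retention derived from the curve number CN
        (0 < CN <= 100); the initial abstraction Ia is conventionally 0.2*S.
        """
        s = 25400.0 / cn - 254.0          # retention S in mm (metric form)
        ia = lambda_ia * s                # initial abstraction
        if p_mm <= ia:
            return 0.0                    # all rainfall abstracted: no runoff
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    # 100 mm storm on a catchment with CN = 80: roughly half becomes runoff.
    print(scs_cn_runoff(100.0, 80))   # ~50.5 mm
    print(scs_cn_runoff(10.0, 80))    # 0.0 (below initial abstraction)
    ```

    Improving the land-use classification (and hence CN) or the assumed soil moisture condition changes S, which is why better remote sensing inputs translate directly into better runoff estimates.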

  7. Modeling individual differences in text reading fluency: a different pattern of predictors for typically developing and dyslexic readers

    Directory of Open Access Journals (Sweden)

    Pierluigi eZoccolotti

    2014-11-01

    Full Text Available This study was aimed at predicting individual differences in text reading fluency. The basic proposal included two factors, i.e., the ability to decode letter strings (measured by discrete pseudo-word reading) and the integration of the various sub-components involved in reading (measured by Rapid Automatized Naming, RAN). Subsequently, a third factor was added to the model, i.e., naming of discrete digits. In order to use homogeneous measures, all contributing variables considered the entire processing of the item, including pronunciation time. The model, which was based on commonality analysis, was applied to data from a group of 43 typically developing readers (11- to 13-year-olds) and a group of 25 chronologically matched dyslexic children. In typically developing readers, both orthographic decoding and the integration of reading sub-components contributed significantly to the overall prediction of text reading fluency. The model prediction was higher (from ca. 37% to 52% of the explained variance) when we included the naming of discrete digits variable, which had a suppressive effect on pseudo-word reading. In the dyslexic readers, the variance explained by the two-factor model was high (69%) and did not change when the third factor was added. The lack of a suppression effect was likely due to the prominent individual differences in poor orthographic decoding of the dyslexic children. Analyses of data from both groups of children were replicated using patches of colours as stimuli (both in the RAN task and in the discrete naming task), obtaining similar results. We conclude that it is possible to predict much of the variance in text-reading fluency using basic processes, such as orthographic decoding and the integration of reading sub-components, even without taking into consideration higher-order linguistic factors such as lexical, semantic and contextual abilities. The approach validity of using proximal vs distal causes to predict reading fluency is…

  8. How Public High School Students Assume Cooperative Roles to Develop Their EFL Speaking Skills

    Directory of Open Access Journals (Sweden)

    Julie Natalie Parra Espinel

    2010-12-01

    Full Text Available This study describes an investigation we carried out in order to identify how the specific roles that 7th grade public school students assumed when they worked cooperatively were related to their development of speaking skills in English. Data were gathered through interviews, field notes, students’ reflections and audio recordings. The findings revealed that students who were involved in cooperative activities chose and assumed roles taking into account preferences, skills and personality traits. In the same manner, when learners worked together, their roles were affected by each other and they put into practice some social strategies with the purpose of supporting their embryonic speaking development.

  9. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation. A unified elasto-viscoplastic constitutive model

    International Nuclear Information System (INIS)

    Chen, Ming-Song; Lin, Y.C.; Li, Kuo-Kuo; Chen, Jian

    2016-01-01

    In authors' previous work (Chen et al. in Appl Phys A. doi:10.1007/s00339-016-0371-6, 2016), the nonlinear unloading behavior of a typical Ni-based superalloy was investigated by hot compressive experiments with intermediate unloading-reloading cycles. The characters of unloading curves were discussed in detail, and a new elasto-viscoplastic constitutive model was proposed to describe the nonlinear unloading behavior of the studied Ni-based superalloy. Still, the functional relationships between the deformation temperature, strain rate, pre-strain and the parameters of the proposed constitutive model need to be established. In this study, the effects of deformation temperature, strain rate and pre-strain on the parameters of the new constitutive model proposed in authors' previous work (Chen et al. 2016) are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate and pre-strain. (orig.)

  10. Prediction and typicality in multiverse cosmology

    International Nuclear Information System (INIS)

    Azhar, Feraz

    2014-01-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios. (paper)

  11. Prediction and typicality in multiverse cosmology

    Science.gov (United States)

    Azhar, Feraz

    2014-02-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.

  12. The Ability to Assume the Upright Position in Blind and Sighted Children.

    Science.gov (United States)

    Gipsman, Sandra Curtis

    To investigate the ability of 48 blind and partially sighted children (8 to 10 and 12 to 14 years old) to assume the upright position, Ss were given six trials in which they were requested to move themselves from a tilted starting position in a specially constructed chair to an upright position. No significant differences were found between three…

  13. Typicality and reasoning fallacies.

    Science.gov (United States)

    Shafir, E B; Smith, E E; Osherson, D N

    1990-05-01

    The work of Tversky and Kahneman on intuitive probability judgment leads to the following prediction: The judged probability that an instance belongs to a category is an increasing function of the typicality of the instance in the category. To test this prediction, subjects in Experiment 1 read a description of a person (e.g., "Linda is 31, bright, ... outspoken") followed by a category. Some subjects rated how typical the person was of the category, while others rated the probability that the person belonged to that category. For categories like bank teller and feminist bank teller: (1) subjects rated the person as more typical of the conjunctive category (a conjunction effect); (2) subjects rated it more probable that the person belonged to the conjunctive category (a conjunction fallacy); and (3) the magnitudes of the conjunction effect and fallacy were highly correlated. Experiment 2 documents an inclusion fallacy, wherein subjects judge, for example, "All bank tellers are conservative" to be more probable than "All feminist bank tellers are conservative." In Experiment 3, results parallel to those of Experiment 1 were obtained with respect to the inclusion fallacy.
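
    The normative rule these experiments show subjects violating, P(A and B) <= P(A), holds for every probability distribution, which a quick brute-force check illustrates (the event labels echo the Linda example and are purely illustrative):

    ```python
    import itertools
    import random

    random.seed(42)

    # Build a random joint distribution over two binary events:
    # A = "is a bank teller", B = "is a feminist".
    cells = list(itertools.product([False, True], repeat=2))
    weights = [random.random() for _ in cells]
    total = sum(weights)
    joint = {cell: w / total for cell, w in zip(cells, weights)}

    p_a = sum(p for (a, b), p in joint.items() if a)      # P(A)
    p_a_and_b = joint[(True, True)]                        # P(A and B)

    # The conjunction rule: whatever the distribution, the conjunction is
    # never more probable than either conjunct. Judging "feminist bank
    # teller" more probable than "bank teller" (as typicality invites) is
    # therefore a fallacy, not merely an unusual opinion.
    assert p_a_and_b <= p_a
    print(p_a_and_b, "<=", p_a)
    ```

    The same argument covers the inclusion fallacy of Experiment 2, since "all feminist bank tellers are conservative" is entailed by, and so can be no less probable than, "all bank tellers are conservative" is to entail it the other way round: the more inclusive generalization can never be the more probable one.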

  14. Typicality and misinformation: Two sources of distortion

    Directory of Open Access Journals (Sweden)

    Malen Migueles

    2008-01-01

    Full Text Available This study examined the effect of two sources of memory error: exposure to post-event information and extracting typical contents from schemata. Participants were shown a video of a bank robbery and presented with high- and low-typicality misinformation extracted from two normative studies. The misleading suggestions consisted of either changes to the original video information or additions of completely new content. In the subsequent recognition task, the post-event misinformation produced memory impairment. Participants used the underlying schema of the event to extract high-typicality information, which had become integrated with episodic information, thus giving rise to more hits and false alarms for these items. However, the effect of exposure to misinformation was greater on low-typicality items. There were no differences between changed and added information, but there were more false alarms when a low-typicality item was changed to a high-typicality item.

  15. Typical NRC inspection procedures for model plant

    International Nuclear Information System (INIS)

    Blaylock, J.

    1984-01-01

    A summary of NRC inspection procedures for a model LEU fuel fabrication plant is presented. Procedures and methods for combining inventory data, seals, measurement techniques, and statistical analysis are emphasized

  16. A mathematical model for the performance assessment of engineering barriers of a typical near surface radioactive waste disposal facility

    Energy Technology Data Exchange (ETDEWEB)

    Antonio, Raphaela N.; Rotunno Filho, Otto C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Hidrologia e Estudos do Meio Ambiente]. E-mail: otto@hidro.ufrj.br; Ruperti Junior, Nerbe J.; Lavalle Filho, Paulo F. Heilbron [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)]. E-mail: nruperti@cnen.gov.br

    2005-07-01

    This work proposes a mathematical model for the performance assessment of a typical radioactive waste disposal facility based on the consideration of a multiple barrier concept. The Generalized Integral Transform Technique is employed to solve the Advection-Dispersion mass transfer equation under the assumption of saturated one-dimensional flow, to obtain solute concentrations at given times and locations within the medium. A test-case is chosen in order to illustrate the performance assessment of several configurations of a multi barrier system adopted for the containment of sand contaminated with Ra-226 within a trench. (author)

  17. A mathematical model for the performance assessment of engineering barriers of a typical near surface radioactive waste disposal facility

    International Nuclear Information System (INIS)

    Antonio, Raphaela N.; Rotunno Filho, Otto C.

    2005-01-01

    This work proposes a mathematical model for the performance assessment of a typical radioactive waste disposal facility based on the consideration of a multiple barrier concept. The Generalized Integral Transform Technique is employed to solve the Advection-Dispersion mass transfer equation under the assumption of saturated one-dimensional flow, to obtain solute concentrations at given times and locations within the medium. A test-case is chosen in order to illustrate the performance assessment of several configurations of a multi barrier system adopted for the containment of sand contaminated with Ra-226 within a trench. (author)
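The record's multi-barrier solution is obtained with the Generalized Integral Transform Technique; as a rough illustration of the underlying 1D saturated advection-dispersion problem, the classic Ogata-Banks closed-form solution for a single homogeneous medium can serve as a benchmark. The velocity and dispersion values below are illustrative assumptions, not the paper's parameters, and Ra-226 decay, sorption and the layered barriers are omitted:

```python
from math import erfc, exp, sqrt

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of 1D advection-dispersion in a semi-infinite
    saturated column with constant inlet concentration c0 (no decay or
    sorption; a benchmark limit case, not the paper's multi-barrier
    GITT solution)."""
    a = (x - v * t) / (2.0 * sqrt(D * t))
    b = (x + v * t) / (2.0 * sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + exp(v * x / D) * erfc(b))

# Illustrative (hypothetical) trench-liner numbers: v = 0.1 m/yr, D = 0.05 m^2/yr
for t in (1.0, 10.0, 100.0):
    print(t, round(ogata_banks(x=1.0, t=t, v=0.1, D=0.05), 4))
```

A GITT solution for a layered trench system would typically be checked against this kind of single-layer limit case: the breakthrough concentration at a fixed location rises monotonically toward the inlet value.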

  18. Typical Complexity Numbers

    Indian Academy of Sciences (India)

Typical Complexity Numbers. Say 1000 tones, 100 users, transmission every 10 msec. Full crosstalk cancellation requires a matrix multiplication of order 100×100 for all the tones: 1000×100×100×100 operations every second for the ...
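The slide's operation count can be reproduced directly, assuming the quoted figures and taking "operations" to mean the multiply-accumulates of a 100×100 crosstalk-cancellation matrix applied per tone per transmission:

```python
# Back-of-the-envelope operation count for full crosstalk cancellation
# (figures taken from the record; "operations" = multiply-accumulates).
tones = 1000
users = 100
transmissions_per_second = 1 / 10e-3  # one transmission every 10 msec -> 100/s

ops_per_tone = users * users          # 100x100 matrix applied to a user vector
ops_per_transmission = tones * ops_per_tone
ops_per_second = ops_per_transmission * transmissions_per_second

print(f"{ops_per_second:.0e} operations per second")  # -> 1e+09
```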

  19. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
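As a sketch of the Dirichlet process idea the review motivates, a truncated stick-breaking construction shows how mixture weights can be drawn without fixing the number of classes in advance. The concentration parameter, truncation level and seed below are illustrative choices, not from the article:

```python
import random

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated stick-breaking construction of Dirichlet process weights."""
    weights, remaining = [], 1.0
    for _ in range(truncation - 1):
        # Beta(1, alpha) draw via inverse CDF: v = 1 - U**(1/alpha)
        v = 1.0 - rng.random() ** (1.0 / alpha)
        weights.append(remaining * v)
        remaining *= (1.0 - v)
    weights.append(remaining)  # lump the leftover mass on the last stick
    return weights

rng = random.Random(42)
w = stick_breaking_weights(alpha=2.0, truncation=50, rng=rng)
assignments = rng.choices(range(50), weights=w, k=200)
print(sum(w), len(set(assignments)))  # weights sum to 1; occupied clusters vary
```

The number of occupied clusters adapts to the data and to the concentration parameter alpha, which is exactly what avoids assuming a fixed finite number of latent classes.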

  20. Computer simulation of the martensite transformation in a model two-dimensional body

    International Nuclear Information System (INIS)

    Chen, S.; Khachaturyan, A.G.; Morris, J.W. Jr.

    1979-05-01

An analytical model of a martensitic transformation in an idealized body is constructed and used to carry out a computer simulation of the transformation in a pseudo-two-dimensional crystal. The reaction is assumed to proceed through the sequential transformation of elementary volumes (elementary martensitic particles, EMP) via the Bain strain. The elastic interaction between these volumes is computed and the transformation path chosen so as to minimize the total free energy. The model transformation shows interesting qualitative correspondences with the known features of martensitic transformations in typical solids.

  1. Computer simulation of the martensite transformation in a model two-dimensional body

    International Nuclear Information System (INIS)

    Chen, S.; Khachaturyan, A.G.; Morris, J.W. Jr.

    1979-06-01

An analytical model of a martensitic transformation in an idealized body is constructed and used to carry out a computer simulation of the transformation in a pseudo-two-dimensional crystal. The reaction is assumed to proceed through the sequential transformation of elementary volumes (elementary martensitic particles, EMP) via the Bain strain. The elastic interaction between these volumes is computed and the transformation path chosen so as to minimize the total free energy. The model transformation shows interesting qualitative correspondences with the known features of martensitic transformations in typical solids.

  2. Research on Soft Reduction Amount Distribution to Eliminate Typical Inter-dendritic Crack in Continuous Casting Slab of X70 Pipeline Steel by Numerical Model

    Science.gov (United States)

    Liu, Ke; Wang, Chang; Liu, Guo-liang; Ding, Ning; Sun, Qi-song; Tian, Zhi-hong

    2017-04-01

To investigate the formation of one kind of typical inter-dendritic crack around the triple point region in continuous casting (CC) slabs during the operation of soft reduction, a fully coupled 3D thermo-mechanical finite element model was developed, and plant trials were carried out in a domestic continuous casting machine. Three possible types of soft reduction amount distribution (SRAD) in the soft reduction region were analyzed. The relationship between the typical inter-dendritic cracks and soft reduction conditions is presented and demonstrated in production practice. Considering the critical strain of internal crack formation, a critical tolerance for the soft reduction amount distribution and related casting parameters has been proposed for a better contribution of soft reduction to the internal quality of slabs. The typical inter-dendritic crack around the triple point region has been eliminated effectively through the application of the proposed suggestions for continuous casting of X70 pipeline steel in industrial practice.

  3. Application of stability enhancing minimum interfacial pressure force model for MARS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Jae; Lim, Ho Gon; Kim, Kyung Doo; Ha, Kwi Seok

    2001-04-01

For the thermal-hydraulic modeling of two-phase flow systems, a two-fluid model that assumes the pressures of liquid, vapor and interface to be identical, the so-called single-pressure model, is commonly used in codes for nuclear reactor safety analyses. The typical two-phase model with the single-pressure assumption possesses complex characteristics that render the system ill-posed. As a result, the typical single-pressure model may cause unbounded growth of instabilities. In order to overcome the ill-posedness of the single-pressure two-fluid model, a hyperbolic equation system has been developed by introducing an interfacial pressure force into the single-pressure two-fluid model. The potential impact of the present model on the stability of the finite difference solution has been examined by Von Neumann stability analysis. An obvious improvement in numerical stability was found when a semi-implicit time advancement scheme is used. Numerical experiments using the pilot code were also performed for the conceptual problems. It was found that the results were consistent with the numerical stability test. The new model was implemented in MARS using a two-step approach. Through the conceptual stability test problems and benchmark problems, the applicability of the new model was verified.

  4. Application of stability enhancing minimum interfacial pressure force model for MARS

    International Nuclear Information System (INIS)

    Lee, Won Jae; Lim, Ho Gon; Kim, Kyung Doo; Ha, Kwi Seok

    2001-04-01

For the thermal-hydraulic modeling of two-phase flow systems, a two-fluid model that assumes the pressures of liquid, vapor and interface to be identical, the so-called single-pressure model, is commonly used in codes for nuclear reactor safety analyses. The typical two-phase model with the single-pressure assumption possesses complex characteristics that render the system ill-posed. As a result, the typical single-pressure model may cause unbounded growth of instabilities. In order to overcome the ill-posedness of the single-pressure two-fluid model, a hyperbolic equation system has been developed by introducing an interfacial pressure force into the single-pressure two-fluid model. The potential impact of the present model on the stability of the finite difference solution has been examined by Von Neumann stability analysis. An obvious improvement in numerical stability was found when a semi-implicit time advancement scheme is used. Numerical experiments using the pilot code were also performed for the conceptual problems. It was found that the results were consistent with the numerical stability test. The new model was implemented in MARS using a two-step approach. Through the conceptual stability test problems and benchmark problems, the applicability of the new model was verified.
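The Von Neumann analysis mentioned in the abstract can be illustrated on a much simpler model problem. The sketch below computes the amplification factor of first-order upwind advection, not the MARS two-fluid scheme itself, so its stability boundary (Courant number at most 1) is only an analogue of the paper's analysis:

```python
import cmath, math

def amplification_factor(courant, k_dx):
    """Von Neumann amplification factor G(k) of first-order explicit
    upwind advection; the scheme is stable when |G| <= 1 for all modes."""
    return 1.0 - courant * (1.0 - cmath.exp(-1j * k_dx))

def max_growth(courant, n_modes=200):
    """Largest |G| over a sweep of wavenumbers k*dx in [0, pi]."""
    return max(abs(amplification_factor(courant, math.pi * m / n_modes))
               for m in range(n_modes + 1))

print(max_growth(0.8))  # at most 1: stable
print(max_growth(1.2))  # exceeds 1: an unstable mode exists
```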

  5. A generalized window energy rating system for typical office buildings

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Cheng; Chen, Tingyao; Yang, Hongxing; Chung, Tse-ming [Research Center for Building Environmental Engineering, Department of Building Services Engineering, The Hong Kong Polytechnic University, Hong Kong (China)

    2010-07-15

Detailed computer simulation programs require lengthy inputs and cannot directly provide insight into the relationship between window energy performance and the key window design parameters. Hence, several window energy rating systems (WERS) for residential houses and small buildings have been developed in different countries. Many studies showed that utilization of daylight through elaborate design and operation of windows leads to significant energy savings in both cooling and lighting in office buildings. However, the current WERSs do not consider the daylighting effect, while most daylighting analyses do not take into account the influence of convective and infiltration heat gains. Therefore, a generalized WERS for typical office buildings is presented, which takes all primary influence factors into account. The model includes embodied and operational energy uses and savings by a window, to fully reflect interactions among the influence parameters. Reference locations selected for artificial lighting and glare control in current common simulation practice may cause uncompromised conflicts, which could result in over- or under-estimated energy performance. The widely used computer programs DOE2 and ADELINE, for hourly daylighting and cooling simulations, have their own weaknesses, which may produce unrealistic or inaccurate results. An approach is also presented for taking advantage of both programs while avoiding their weaknesses. The model and approach have been applied to a typical office building in Hong Kong as an example to demonstrate how a WERS for a particular location can be established and how well the model can work. The energy effects of window properties, window-to-wall ratio (WWR), building orientation and lighting control strategies have been analyzed, and can be indicated by the localized WERS.
An application example also demonstrates that the algebraic WERS derived from simulation results can be easily used for the optimal design of

  6. A mixed-binomial model for Likert-type personality measures.

    Science.gov (United States)

    Allik, Jüri

    2014-01-01

Personality measurement is based on the idea that values on an unobservable latent variable determine the distribution of answers on a manifest response scale. Typically, it is assumed in Item Response Theory (IRT) that latent variables are related to the observed responses through continuous normal or logistic functions, determining the probability with which one of the ordered response alternatives on a Likert-scale item is chosen. Based on an analysis of 1731 self- and other-rated responses on the 240 NEO PI-3 questionnaire items, it was proposed that a viable alternative is a finite number of latent events which are related to manifest responses through a binomial function that has only one parameter: the probability with which a given statement is approved. For the majority of items, the best fit was obtained with a mixed-binomial distribution, which assumes two different subpopulations that endorse items with two different probabilities. It was shown that the fit of the binomial IRT model can be improved by assuming that about 10% of random noise is contained in the answers and by taking into account response biases toward one of the response categories. It was concluded that the binomial response model for the measurement of personality traits may be a workable alternative to the more habitual normal and logistic IRT models.
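A minimal sketch of the proposed likelihood, assuming a 5-point Likert item scored 0..m and two subpopulations endorsing with probabilities p1 and p2. All numbers below are illustrative, not fitted NEO PI-3 values, and the noise/response-bias extensions are omitted:

```python
from math import comb, log

def binom_pmf(x, m, p):
    """Binomial probability of choosing category x on an item scored 0..m."""
    return comb(m, x) * p**x * (1 - p)**(m - x)

def mixed_binom_loglik(responses, m, p1, p2, mix):
    """Log-likelihood of a two-subpopulation binomial mixture for one item.
    responses: Likert answers scored 0..m; mix: share endorsing with p1."""
    return sum(log(mix * binom_pmf(x, m, p1) + (1 - mix) * binom_pmf(x, m, p2))
               for x in responses)

# Illustrative data for a 5-point item scored 0..4 (m = 4)
answers = [0, 1, 1, 2, 3, 4, 4, 4, 3, 2]
print(mixed_binom_loglik(answers, m=4, p1=0.2, p2=0.8, mix=0.4))
```

With p1 = p2 the mixture collapses to the one-parameter binomial model described first in the abstract.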

  7. Regional LLRW [low-level radioactive waste] processing alternatives applying the DOE REGINALT systems analysis model

    International Nuclear Information System (INIS)

    Beers, G.H.

    1987-01-01

The DOE Low-Level Waste Management Program has developed a computer-based decision support system of models that may be used by nonprogrammers to evaluate a comprehensive approach to commercial low-level radioactive waste (LLRW) management. REGINALT (Regional Waste Management Alternatives Analysis Model) implementation will be described as the model is applied to a hypothetical regional compact for the purpose of examining the technical and economic potential of two waste processing alternatives. Using waste from a typical regional compact, two specific regional waste processing centers will be compared for feasibility. Example 1 will assume that a regional supercompaction facility is being developed for the region. Example 2 will assume that a regional facility with both supercompaction and incineration is specified. Both examples will include identical disposal facilities, except that capacity may differ due to variation in the volume reduction achieved. The two examples will be compared with regard to volume reduction achieved, estimated occupational exposure for the processing facilities, and life cycle costs per unit of waste generated. A base case will also illustrate current disposal practices. The results of the comparisons will be evaluated, and other steps, if necessary, for additional decision support will be identified.

  8. Using item response theory to investigate the structure of anticipated affect: do self-reports about future affective reactions conform to typical or maximal models?

    OpenAIRE

    Zampetakis, Leonidas A.; Lerakis, Manolis; Kafetsios, Konstantinos; Moustakis, Vassilis

    2015-01-01

In the present research we used item response theory (IRT) to examine whether affective predictions (anticipated affect) conform to a typical (i.e., what people usually do) or a maximal behavior process (i.e., what people can do). The former correspond to non-monotonic ideal point IRT models, whereas the latter correspond to monotonic dominance IRT models. A convenience, cross-sectional student sample (N=1624) was used. Participants were asked to report on anticipated positive and negative a...

  9. Concept typicality responses in the semantic memory network.

    Science.gov (United States)

    Santi, Andrea; Raposo, Ana; Frade, Sofia; Marques, J Frederico

    2016-12-01

For decades concept typicality has been recognized as critical to structuring conceptual knowledge, but only recently has typicality been applied to better understand the processes engaged by the neurological network underlying semantic memory. This previous work has focused on one region within the network, the Anterior Temporal Lobe (ATL). The ATL responds negatively to concept typicality (i.e., the more atypical the item, the greater the activation in the ATL). To better understand the role of typicality in the entire network, we ran an fMRI study using a category verification task in which concept typicality was manipulated parametrically. We argue that typicality is relevant to both amodal feature integration centers and category-specific regions. Both the Inferior Frontal Gyrus (IFG) and the ATL demonstrated a negative correlation with typicality, whereas inferior parietal regions showed positive effects. We interpret this in light of functional theories of these regions. Interactions between category and typicality were not observed in regions classically recognized as category-specific, thus providing an argument against category-specific regions, at least with fMRI. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Hydrogen deflagration simulations under typical containment conditions for nuclear safety

    Energy Technology Data Exchange (ETDEWEB)

    Yanez, J., E-mail: jorge.yanez@kit.edu [Institute for Energy and Nuclear Technology, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131 Karlsruhe (Germany); Kotchourko, A.; Lelyakin, A. [Institute for Energy and Nuclear Technology, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131 Karlsruhe (Germany)

    2012-09-15

Highlights: • Lean H2-air combustion experiments highly relevant to typical NPPs were simulated. • The effects of temperature, H2 concentration, and steam concentration were analyzed. • Similar conditions and H2 concentrations yielded different combustion regimes. • Flame instabilities (FIs) were the effect driving the divergences. • A model for acoustic FI was developed for the simulations; agreement with experiments was obtained. - Abstract: This paper presents the modeling of low-concentration hydrogen deflagrations performed with the recently developed KYLCOM model, specially created to perform calculations in large-scale domains. Three experiments carried out in the THAI facility (performed in the framework of the international OECD THAI experimental program) were selected for analysis. The tests allow studying lean-mixture hydrogen combustion at normal ambient temperature, elevated temperature, and superheated and saturated conditions. The experimental conditions considered, together with the facility size and shape, grant a high degree of relevance to typical NPP containment conditions. The results of the simulations were thoroughly compared with the experimental data, and the comparison was supplemented by an analysis of the combustion regimes taking place in the considered tests. The analysis demonstrated that, despite the comparatively small differences in mixture properties, three different combustion regimes can be clearly identified. The simulation of one of the cases required modeling of the acoustic-parametric instability, which was carefully undertaken.

  11. Effect of heterogeneity and assumed mode of inheritance on lod scores.

    Science.gov (United States)

    Durner, M; Greenberg, D A

    1992-02-01

    Heterogeneity is a major factor in many common, complex diseases and can confound linkage analysis. Using computer-simulated heterogeneous data we tested what effect unlinked families have on a linkage analysis when heterogeneity is not taken into account. We created 60 data sets of 40 nuclear families each with different proportions of linked and unlinked families and with different modes of inheritance. The ascertainment probability was 0.05, the disease had a penetrance of 0.6, and the recombination fraction for the linked families was zero. For the analysis we used a variety of assumed modes of inheritance and penetrances. Under these conditions we looked at the effect of the unlinked families on the lod score, the evaluation of the mode of inheritance, and the estimate of penetrance and of the recombination fraction in the linked families. 1. When the analysis was done under the correct mode of inheritance for the linked families, we found that the mode of inheritance of the unlinked families had minimal influence on the highest maximum lod score (MMLS) (i.e., we maximized the maximum lod score with respect to penetrance). Adding sporadic families decreased the MMLS less than adding recessive or dominant unlinked families. 2. The mixtures of dominant linked families with unlinked families always led to a higher MMLS when analyzed under the correct (dominant) mode of inheritance than when analyzed under the incorrect mode of inheritance. In the mixtures with recessive linked families, assuming the correct mode of inheritance generally led to a higher MMLS, but we observed broad variation.(ABSTRACT TRUNCATED AT 250 WORDS)
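For readers unfamiliar with the statistic being maximized, a minimal two-point lod score for phase-known meioses can be sketched as follows. This is the textbook formula, not the simulation software used in the study, and the counts are illustrative:

```python
from math import log10

def lod(theta, n, r):
    """Two-point lod score for r recombinants out of n phase-known meioses:
    log10 of the likelihood ratio of linkage at theta vs. no linkage (0.5)."""
    return log10((theta**r * (1 - theta)**(n - r)) / 0.5**n)

def max_lod(n, r, grid=1000):
    """Maximize the lod score over a grid of theta in (0, 0.5]."""
    thetas = [0.5 * i / grid for i in range(1, grid + 1)]
    return max((lod(t, n, r), t) for t in thetas)

# 10 informative meioses, no recombinants (the linked families above use theta = 0)
print(max_lod(10, 0))  # best score near theta -> 0, approaching log10(2**10)
```

In the study the maximization runs over penetrance as well (the MMLS), but the likelihood-ratio logic is the same.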

  12. ΛCDM model with dissipative nonextensive viscous dark matter

    Science.gov (United States)

    Gimenes, H. S.; Viswanathan, G. M.; Silva, R.

    2018-03-01

Many models in cosmology typically assume the standard bulk viscosity. We study an alternative interpretation for the origin of the bulk viscosity. Using the nonadditive statistics proposed by Tsallis, we propose a bulk viscosity component that can only exist through a nonextensive effect, via the nonextensive/dissipative correspondence (NexDC). In this paper, we consider a ΛCDM model for a flat universe with a dissipative nonextensive viscous dark matter component, following the Eckart theory of bulk viscosity, without any perturbative approach. In order to analyze cosmological constraints, we use some of the most recent observations: Type Ia supernovae, baryon acoustic oscillations and cosmic microwave background data.
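A heavily simplified numerical sketch of Eckart-theory bulk viscosity in a flat universe, assuming a constant viscosity coefficient zeta rather than the paper's NexDC prescription (units 8*pi*G = c = 1; all values illustrative):

```python
import math

def evolve(rho_dm0=0.3, rho_lambda=0.7, zeta=0.01, n_steps=10000, N_max=2.0):
    """Euler-integrate the viscous dark matter continuity equation
    d(rho)/dN = -3*rho + 9*zeta*H in N = ln(a), with bulk pressure
    Pi = -3*zeta*H (Eckart theory) and flat-universe H^2 = (rho + rho_L)/3.
    Constant zeta is a simplifying assumption, not the paper's model."""
    rho, dN = rho_dm0, N_max / n_steps
    for _ in range(n_steps):
        H = math.sqrt((rho + rho_lambda) / 3.0)
        rho += dN * (-3.0 * rho + 9.0 * zeta * H)
    return rho

print(evolve(zeta=0.0))   # ordinary CDM: rho ~ rho0 * exp(-3N), i.e. a^-3
print(evolve(zeta=0.01))  # viscosity feeds energy back: dilutes more slowly
```

With zeta = 0 the dark matter density recovers the ordinary a^-3 dilution; a positive zeta makes it dilute more slowly, which is the qualitative effect constrained by the SN Ia, BAO and CMB data mentioned above.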

  13. Prospects for carbon capture and sequestration technologies assuming their technological learning

    International Nuclear Information System (INIS)

    Riahi, Keywan; Rubin, Edward S.; Schrattenholzer, Leo

    2004-01-01

This paper analyzes the potential of carbon capture and sequestration technologies (CCS) in a set of long-term energy-economic-environmental scenarios based on alternative assumptions for the technological progress of CCS. In order to get a reasonable guide to future technological progress in managing CO2 emissions, we review past experience in controlling sulfur dioxide (SO2) emissions from power plants. By doing so, we quantify a 'learning curve' for CCS, which describes the reduction of costs with the accumulation of experience in CCS construction. We incorporate the learning curve into the energy modeling framework MESSAGE-MACRO and develop greenhouse gas emissions scenarios of economic, demographic, and energy demand development, where alternative policy cases lead to the stabilization of atmospheric CO2 concentrations at 550 parts per million by volume (ppmv) by the end of the 21st century. Due to the assumed technological learning, the costs of emissions reduction by CCS drop rapidly and in parallel with the massive introduction of CCS on the global scale. Compared to scenarios based on static cost assumptions for CCS, the contribution of carbon sequestration is about 50 percent higher in the case of learning, resulting in cumulative sequestration of CO2 ranging from 150 to 250 billion (10^9) tons of carbon during the 21st century. The results illustrate that carbon capture and sequestration is one of the obvious priority candidates for long-term technology policies and enhanced R&D efforts to hedge against the risk associated with high environmental impacts of climate change.
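The one-factor learning curve quantified in the paper has the standard form cost proportional to (cumulative capacity)^(-b). A sketch with an illustrative, not fitted, learning rate and reference cost:

```python
import math

def learning_curve_cost(c0, cum_capacity, ref_capacity, learning_rate):
    """One-factor learning curve: each doubling of cumulative installed
    capacity cuts specific cost by `learning_rate` (e.g. 0.13 = 13%).
    The 13% figure below is illustrative, not the paper's fitted value."""
    b = -math.log2(1.0 - learning_rate)  # experience index
    return c0 * (cum_capacity / ref_capacity) ** (-b)

c0 = 50.0  # $/tCO2 at the first reference plant (hypothetical number)
for doublings in range(5):
    cap = 2 ** doublings
    print(doublings, round(learning_curve_cost(c0, cap, 1.0, 0.13), 2))
```

Each line of output drops by the same fixed fraction per doubling, which is why costs fall "in parallel with the massive introduction of CCS" once deployment accelerates.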

  14. Cost study on waste management at three model Canadian uranium mines

    International Nuclear Information System (INIS)

    1984-03-01

A waste management cost study was initiated to determine the capital and operating costs of three different uranium waste management systems which incorporate current technologies being used in Canadian uranium mining operations. Cost estimates were to be done to a thirty percent level of accuracy and were to include all waste management related costs of a uranium ore processing facility. Each model is based on an annual uranium production of 1,923,000 kg U (5,000,000 lbs U3O8) with a total operating life of 20 years for the facility. The three models, A, B, and C, are based on three different uranium ore grades: 0.10 percent U3O8, 0.475 percent U3O8 and 1.5 percent U3O8, respectively. Yellowcake production is assumed to start in January 1984. Model A is based on a conceptual 7,180 tonne per day uranium ore processing facility and waste management system typical of uranium operations in the Elliot Lake area of northern Ontario with an established infrastructure. Model B is a 1,512 tonne per day operation based on a remote uranium operation typical of the Athabasca Basin properties in northern Saskatchewan. Model C is a 466 tonne per day operation processing a high-grade uranium ore containing arsenic and heavy metal concentrations typical of some northern Saskatchewan deposits.
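A quick consistency check on the three models' figures, taking the Model B throughput as 1,512 t/d (the common annual production only works out if the record's "1.512" is read that way):

```python
# Daily U3O8 output implied by each model's ore throughput and grade.
# Model B's throughput is assumed to be 1,512 t/d (the record prints
# "1.512"); only then do all three models match a common design capacity.
models = {
    "A": (7180.0, 0.0010),   # t ore/day, ore grade as a fraction U3O8
    "B": (1512.0, 0.00475),
    "C": (466.0,  0.0150),
}
for name, (tonnes_per_day, grade) in models.items():
    print(name, round(tonnes_per_day * grade, 2), "t U3O8/day")
# All three come out near 7 t U3O8/day, consistent with one common
# annual production of 1,923,000 kg U (5,000,000 lbs U3O8).
```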

  15. Effects of temperature and mass conservation on the typical chemical sequences of hydrogen oxidation

    Science.gov (United States)

    Nicholson, Schuyler B.; Alaghemandi, Mohammad; Green, Jason R.

    2018-01-01

    Macroscopic properties of reacting mixtures are necessary to design synthetic strategies, determine yield, and improve the energy and atom efficiency of many chemical processes. The set of time-ordered sequences of chemical species are one representation of the evolution from reactants to products. However, only a fraction of the possible sequences is typical, having the majority of the joint probability and characterizing the succession of chemical nonequilibrium states. Here, we extend a variational measure of typicality and apply it to atomistic simulations of a model for hydrogen oxidation over a range of temperatures. We demonstrate an information-theoretic methodology to identify typical sequences under the constraints of mass conservation. Including these constraints leads to an improved ability to learn the chemical sequence mechanism from experimentally accessible data. From these typical sequences, we show that two quantities defining the variational typical set of sequences—the joint entropy rate and the topological entropy rate—increase linearly with temperature. These results suggest that, away from explosion limits, data over a narrow range of thermodynamic parameters could be sufficient to extrapolate these typical features of combustion chemistry to other conditions.
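The notion of a typical set can be illustrated in a heavily simplified i.i.d. setting, rather than the paper's mass-conservation-constrained chemical sequences, by enumerating which binary sequences are epsilon-typical under the asymptotic equipartition property (all parameters below are illustrative):

```python
from itertools import product
from math import log2

def entropy(p):
    """Binary entropy of a Bernoulli(p) source, in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def typical_fraction(p=0.3, n=12, eps=0.1):
    """Share of length-n binary sequences (and of probability mass) that
    are eps-typical for an i.i.d. Bernoulli(p) source: the AEP analogue
    of the paper's typical chemical sequences."""
    H = entropy(p)
    count = mass = 0
    for seq in product((0, 1), repeat=n):
        ones = sum(seq)
        prob = p**ones * (1 - p)**(n - ones)
        if abs(-log2(prob) / n - H) < eps:
            count += 1
            mass += prob
    return count / 2**n, mass

frac, mass = typical_fraction()
print(f"{frac:.3f} of sequences carry {mass:.3f} of the probability")
```

Even at this short length the typical set holds a disproportionate share of the probability mass relative to its share of sequences; as n grows it captures almost all of it, which is what makes typical-sequence analysis tractable.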

  16. Emotion, gender, and gender typical identity in autobiographical memory.

    Science.gov (United States)

    Grysman, Azriel; Merrill, Natalie; Fivush, Robyn

    2017-03-01

Gender differences in the emotional intensity and content of autobiographical memory (AM) are inconsistent across studies, and may be influenced as much by gender identity as by categorical gender. To explore this question, data were collected from 196 participants (age 18-40), split evenly between men and women. Participants narrated four memories, a neutral event, high point event, low point event, and self-defining memory, completed ratings of emotional intensity for each event, and completed four measures of gender typical identity. For self-reported emotional intensity, gender differences in AM were mediated by identification with stereotypical feminine gender norms. For narrative use of affect terms, both gender and gender typical identity predicted affective expression. The results confirm contextual models of gender identity (e.g., Diamond, 2012. The desire disorder in research on sexual orientation in women: Contributions of dynamical systems theory. Archives of Sexual Behavior, 41, 73-83) and underscore the dynamic interplay between gender and gender identity in the emotional expression of autobiographical memories.

  17. Typical horticultural products between tradition and innovation

    Directory of Open Access Journals (Sweden)

    Innocenza Chessa

Full Text Available Recent EU and national policies for agriculture and rural development are mainly focused on fostering the production of high-quality products, as a result of the increasing demand for food safety, typical foods and traditional processing methods. Another word very often used to describe foods these days is “typicality”, which pools together the concepts of “food connected with a specific place”, “historical memory and tradition” and “culture”. The importance for the EU and the national administrations of the above-mentioned kinds of food is demonstrated, among other things, by the high number of PDO, PGI and TSG certified products in Italy. In this period of global markets and economic crisis, farmers are realizing how “typical products” can be an opportunity to maintain their market share and to improve the economy of local areas. At the same time, new tools and strategies are needed to reach these goals. A lack of knowledge has also been recognized concerning how new technologies and results coming from recent research can help in exploiting traditional products and in maintaining biodiversity. Taking into account the great variety and richness of typical products, landscapes and biodiversity, this report will describe and analyze the relationships among typicality, innovation and research in horticulture. At the beginning, “typicality” and “innovation” will be defined, also through some statistical features, which rank Italy in first place in terms of the number of typical labelled products; then it will be highlighted how typical products of high quality, connected with the tradition and culture of specific production areas, are in a strict relationship with the value of agro-biodiversity. Several different examples will be used to explain different successful methods and/or strategies used to exploit and foster typical Italian vegetables, fruits and flowers. Finally, as a conclusion, since it is thought that

  18. Narrative versus style: Effect of genre-typical events versus genre-typical filmic realizations on film viewers’ genre recognition

    NARCIS (Netherlands)

    Visch, V.; Tan, E.

    2008-01-01

    This study investigated whether film viewers recognize four basic genres (comic, drama, action and nonfiction) on the basis of genre-typical event cues or of genre-typical filmic realization cues of events. Event cues are similar to the narrative content of a film sequence, while filmic realization

  19. Models of epidemics: when contact repetition and clustering should be included

    Directory of Open Access Journals (Sweden)

    Scholz Roland W

    2009-06-01

Full Text Available Abstract Background The spread of infectious disease is determined by biological factors, e.g. the duration of the infectious period, and social factors, e.g. the arrangement of potentially contagious contacts. Repetitiveness and clustering of contacts are known to be relevant factors influencing the transmission of droplet- or contact-transmitted diseases. However, we do not yet completely know under what conditions repetitiveness and clustering should be included to realistically model disease spread. Methods We compare two different types of individual-based models: one assumes random mixing without repetition of contacts, whereas the other assumes that the same contacts repeat day by day. The latter exists in two variants, with and without clustering. We systematically test and compare how the total size of an outbreak differs between these model types depending on the key parameters: transmission probability, number of contacts per day, duration of the infectious period, different levels of clustering and varying proportions of repetitive contacts. Results The simulation runs under different parameter constellations provide the following results: The difference between the two model types is highest for low numbers of contacts per day and low transmission probabilities. The number of contacts and the transmission probability have a higher influence on this difference than the duration of the infectious period. Even when only minor parts of the daily contacts are repetitive and clustered, there can be relevant differences compared to a purely random mixing model. Conclusion We show that random mixing models provide acceptable estimates of the total outbreak size if the number of contacts per day is high or if the per-contact transmission probability is high, as seen in typical childhood diseases such as measles. In the case of very short infectious periods, for instance as in Norovirus, models assuming repeating contacts will also behave
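A toy individual-based SIR simulation contrasting the two model types (random mixing versus day-by-day repeated contacts; the clustering variant is omitted). All parameters are illustrative, and a single stochastic run proves nothing by itself; the paper's effect appears when averaging outbreak sizes over many seeds:

```python
import random

def outbreak_size(n=300, contacts=3, p_trans=0.08, infectious_days=5,
                  repetitive=True, seed=1):
    """Toy individual-based SIR: fixed daily contacts vs. fresh random mixing.
    Returns the total outbreak size (number of ever-infected individuals)."""
    rng = random.Random(seed)
    fixed = {i: rng.sample([j for j in range(n) if j != i], contacts)
             for i in range(n)}
    state = ["S"] * n            # S, I, or R
    days_left = [0] * n
    state[0], days_left[0] = "I", infectious_days  # index case
    while "I" in state:
        infected = [i for i in range(n) if state[i] == "I"]
        for i in infected:
            partners = fixed[i] if repetitive else rng.sample(
                [j for j in range(n) if j != i], contacts)
            for j in partners:
                if state[j] == "S" and rng.random() < p_trans:
                    state[j], days_left[j] = "I", infectious_days
            days_left[i] -= 1
            if days_left[i] == 0:
                state[i] = "R"
    return state.count("R")

print(outbreak_size(repetitive=True), outbreak_size(repetitive=False))
```

Under repeated contacts an infectious person keeps "re-spending" transmission attempts on the same few partners, so averaged over seeds the repetitive variant yields smaller outbreaks, most markedly for few contacts and low transmission probability, as the abstract reports.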

  20. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    Directory of Open Access Journals (Sweden)

    S. N. Naikwad

    2009-01-01

    Full Text Available A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear behaviour when the reaction is exothermic. A review of the literature shows that process control of CSTRs using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in the input-output mappings, a time lagged recurrent neural network is particularly suited for the identification task. The standard back propagation algorithm with momentum term is used in this model. The various parameters, such as the number of processing elements, number of hidden layers, training and testing percentage, learning rule, and transfer function in the hidden and output layers, are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the testing data set. Finally, the effects of different norms are tested along with variation in the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus the FTLR NN with gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is the major contribution of this paper.
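
    The gamma memory that gives such a network its short-term memory is, in its standard form, a cascade of leaky integrators. A minimal sketch follows; the tap count and the parameter μ are chosen arbitrarily here, not taken from the paper's tuned model.

    ```python
    def gamma_memory(signal, taps, mu):
        """Gamma memory filter: a cascade of `taps` leaky integrators.
        x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1), with x_0 = input.
        Returns the list of tap outputs at each time step."""
        state = [0.0] * (taps + 1)
        history = []
        for u in signal:
            prev = state[:]
            state[0] = u
            for k in range(1, taps + 1):
                state[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
            history.append(state[1:])   # memory taps only
        return history

    # Each tap has unit DC gain, so its impulse response sums to ~1;
    # higher taps peak later, giving the network a tunable memory depth.
    impulse = [1.0] + [0.0] * 400
    out = gamma_memory(impulse, taps=3, mu=0.5)
    ```

    In an FTLR architecture these tap outputs, rather than the raw input, feed the network's first layer, so the trainable parameter μ trades memory depth against resolution.
    
    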

  1. Mass transport in fracture media: impact of the random function model assumed for fractures conductivity

    International Nuclear Information System (INIS)

    Capilla, J. E.; Rodrigo, J.; Gomez Hernandez, J. J.

    2003-01-01

    Characterizing the uncertainty of flow and mass transport models requires the definition of stochastic models to describe hydrodynamic parameters. Porosity and hydraulic conductivity (K) are two of these parameters that exhibit a high degree of spatial variability. K is usually the parameter whose variability most strongly influences solute movement. In fractured media, it is critical to properly characterize K in the most altered zones, where flow and solute migration tend to be concentrated. However, K measurements tend to be scarce and sparse. This fact calls for stochastic models that allow quantifying the uncertainty of flow and mass transport predictions. This paper presents a convective transport problem solved in a 3D block of fractured crystalline rock. The case study is defined based on data from a real geological formation. As the scarcity of K data in fractures does not support classical multi-Gaussian assumptions for K in fractures, the non-multi-Gaussian hypothesis has been explored, comparing mass transport results for alternative Gaussian and non-Gaussian assumptions. The latter hypothesis allows reproducing high spatial connectivity for extreme values of K. This feature is present in nature, might lead to faster solute pathways, and therefore should be modeled in order to obtain reasonably safe predictions of contaminant migration in a geological formation. The results obtained for the two alternative hypotheses show a remarkable impact of the K random function model on solute movement. (Author) 9 refs

  2. Representation and Incorporation of Close Others' Responses: The RICOR Model of Social Influence.

    Science.gov (United States)

    Smith, Eliot R; Mackie, Diane M

    2015-08-03

    We propose a new model of social influence, which can occur spontaneously and in the absence of typically assumed motives. We assume that perceivers routinely construct representations of other people's experiences and responses (beliefs, attitudes, emotions, and behaviors), when observing others' responses or simulating the responses of unobserved others. Like representations made accessible by priming, these representations may then influence the process that generates perceivers' own responses, without intention or awareness, especially when there is a strong social connection to the other. We describe evidence for the basic properties and important moderators of this process, which distinguish it from other mechanisms such as informational, normative, or social identity influence. The model offers new perspectives on the role of others' values in producing cultural differences, the persistence and power of stereotypes, the adaptive reasons for being influenced by others' responses, and the impact of others' views about the self. © 2015 by the Society for Personality and Social Psychology, Inc.

  3. A Typical Verification Challenge for the GRID

    NARCIS (Netherlands)

    van de Pol, Jan Cornelis; Bal, H. E.; Brim, L.; Leucker, M.

    2008-01-01

    A typical verification challenge for the GRID community is presented. The concrete challenge is to implement a simple recursive algorithm for finding the strongly connected components in a graph. The graph is typically stored in the collective memory of a number of computers, so a distributed

  4. Accurate or assumed: visual learning in children with ASD.

    Science.gov (United States)

    Trembath, David; Vivanti, Giacomo; Iacono, Teresa; Dissanayake, Cheryl

    2015-10-01

    Children with autism spectrum disorder (ASD) are often described as visual learners. We tested this assumption in an experiment in which 25 children with ASD, 19 children with global developmental delay (GDD), and 17 typically developing (TD) children were presented a series of videos via an eye tracker in which an actor instructed them to manipulate objects in speech-only and speech + pictures conditions. We found no group differences in visual attention to the stimuli. The GDD and TD groups performed better when pictures were available, whereas the ASD group did not. Performance of children with ASD and GDD was positively correlated with visual attention and receptive language. We found no evidence of a prominent visual learning style in the ASD group.

  5. Channel Models for Capacity Evaluation of MIMO Handsets in Data Mode

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Yanakiev, Boyan; Barrio, Samantha Caporal Del

    2017-01-01

    This work investigates different correlation based models useful for evaluation of outage capacity (OC) of mobile multiple-input multiple-output (MIMO) handsets. The work is based on a large measurement campaign in a micro-cellular setup involving two dual-band base stations, 10 different handsets...... in an indoor environment for different use cases and test users. Several models are evaluated statistically, comparing the OC values estimated from the model and measurement data, respectively, for about 2,700 measurement routes. The models are based on either estimates of the full correlation matrices...... or simplifications. Among other results, it is shown that the OC can be predicted accurately (median error typically within 2.6%) with a model assuming knowledge only of the Tx-correlation coefficient and the mean power gain....
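
    A minimal Monte-Carlo sketch of how an outage capacity figure can be obtained from a Tx-correlation-only channel model: a 2x2 Rayleigh-fading link whose transmit antennas have correlation coefficient `rho_tx`. All parameter values are hypothetical, and this is a generic Kronecker-type construction rather than the paper's exact estimator.

    ```python
    import math
    import random

    def outage_capacity(snr_db, rho_tx, n_samples=5000, outage=0.01, seed=0):
        """Outage capacity (bits/s/Hz) of a 2x2 MIMO link whose transmit
        antennas are correlated with coefficient rho_tx (Rx side i.i.d.)."""
        rng = random.Random(seed)
        snr = 10 ** (snr_db / 10)
        a = math.sqrt(1 - abs(rho_tx) ** 2)
        caps = []
        for _ in range(n_samples):
            rows = []
            for _ in range(2):   # i.i.d. receive side
                g1 = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
                g2 = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
                rows.append((g1, rho_tx * g1 + a * g2))  # correlate Tx side
            (h11, h12), (h21, h22) = rows
            tr = sum(abs(h) ** 2 for h in (h11, h12, h21, h22))
            det_h = h11 * h22 - h12 * h21
            # For 2x2: det(I + (snr/2) H H^H) = 1 + (snr/2) tr + (snr/2)^2 |det H|^2
            caps.append(math.log2(1 + snr / 2 * tr + (snr / 2) ** 2 * abs(det_h) ** 2))
        caps.sort()
        return caps[int(outage * n_samples)]   # e.g. 1% outage capacity
    ```

    The outage capacity is simply the chosen lower percentile of the simulated capacity distribution, which is how a correlation-based model can be compared against measured routes.
    
    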

  6. I Assumed You Knew: Teaching Assumptions as Co-Equal to Observations in Scientific Work

    Science.gov (United States)

    Horodyskyj, L.; Mead, C.; Anbar, A. D.

    2016-12-01

    Introductory science curricula typically begin with a lesson on the "nature of science". Usually this lesson is short, built with the assumption that students have picked up this information elsewhere and only a short review is necessary. However, when asked about the nature of science in our classes, student definitions were often confused, contradictory, or incomplete. A cursory review of how the nature of science is defined in a number of textbooks reveals definitions that are similarly inconsistent and excessively loquacious. With such confusion from both the student and teacher perspective, it is no surprise that students walk away with significant misconceptions about the scientific endeavor, which they carry with them into public life. These misconceptions subsequently result in poor public policy and personal decisions on issues with scientific underpinnings. We will present a new way of teaching the nature of science at the introductory level that better represents what we actually do as scientists. Nature of science lessons often emphasize the importance of observations in scientific work. However, they rarely mention and often hide the importance of assumptions in interpreting those observations. Assumptions are co-equal to observations in building models, which are observation-assumption networks that can be used to make predictions about future observations. The confidence we place in these models depends on whether they are assumption-dominated (hypothesis) or observation-dominated (theory). By presenting and teaching science in this manner, we feel that students will better comprehend the scientific endeavor, since making observations and assumptions and building mental models is a natural human behavior. We will present a model for a science lab activity that can be taught using this approach.

  7. Evaluation of water conservation capacity of loess plateau typical mountain ecosystems based on InVEST model simulation

    Science.gov (United States)

    Lv, Xizhi; Zuo, Zhongguo; Xiao, Peiqing

    2017-06-01

    With increasing demand for water resources and a general deterioration of local water resources, water conservation by forests has received considerable attention in recent years. To evaluate the water conservation capacities of different forest ecosystems in mountainous areas of the Loess Plateau, the forest landscape was divided into 18 types. Taking into account factors such as climate, topography, plants, soil and land use, the water conservation of the forest ecosystems was estimated by means of the InVEST model. The results showed that the 486 417.7 hm² of forests in typical mountain areas comprised 18 forest types, and that the total water conservation quantity was 1.64×10¹² m³, equal to an average water conservation quantity of 9.09×10¹⁰ m³ per forest type. There is a great difference in average water conservation capacity among the various forest types. The water conservation function and its evaluation remain crucial and complicated issues in the modern study of ecological service functions.
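
    InVEST-style annual water-yield calculations are typically built on a Budyko-type curve. The toy sketch below uses one common (Zhang/Fuh-type) formulation with a hypothetical plant-available-water coefficient `w`; it is not the study's calibrated model, only an illustration of the per-pixel water balance such tools aggregate over the landscape.

    ```python
    def annual_water_yield(precip_mm, pet_mm, w=2.0):
        """Budyko-type annual water yield sketch: yield = P - AET,
        with AET/P given by a Zhang/Fuh-style curve of the dryness
        index R = PET/P (w is an illustrative vegetation coefficient)."""
        r = pet_mm / precip_mm                      # dryness index
        aet_over_p = (1 + w * r) / (1 + w * r + 1 / r)
        return precip_mm * (1 - aet_over_p)         # mm of yield per year
    ```

    Summing such per-pixel yields over each forest type's area is, roughly, how landscape-scale conservation totals like those above are assembled.
    
    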

  8. Simplified model for DNB analysis

    International Nuclear Information System (INIS)

    Silva Filho, E.

    1979-08-01

    In a pressurized water nuclear reactor (PWR), the operating power is restricted by the possibility of the occurrence of departure from nucleate boiling (DNB) in the hottest channel of the core. The present work proposes a simplified model that analyses the thermal-hydraulic conditions of the coolant in the hottest channel of PWRs with the objective of evaluating DNB in this channel. For this, coupling between the hot channel and typical nominal channels is assumed, imposing the existence of a cross flow between these channels such that a uniform axial pressure distribution results along the channels. The model is applied to the Angra-I reactor and the results are compared with those of the Final Safety Analysis Report (FSAR) obtained by Westinghouse through the THINC program, and are considered satisfactory (Author) [pt

  9. Dysfunctional metacognition and drive for thinness in typical and atypical anorexia nervosa.

    Science.gov (United States)

    Davenport, Emily; Rushford, Nola; Soon, Siew; McDermott, Cressida

    2015-01-01

    Anorexia nervosa is complex and difficult to treat. In cognitive therapies the focus has been on cognitive content rather than process. Process-oriented therapies may modify the higher level cognitive processes of metacognition, reported as dysfunctional in adult anorexia nervosa. Their association with clinical features of anorexia nervosa, however, is unclear. With reclassification of anorexia nervosa by DSM-5 into typical and atypical groups, comparability of metacognition and drive for thinness across groups and relationships within groups is also unclear. Main objectives were to determine whether metacognitive factors differ across typical and atypical anorexia nervosa and a non-clinical community sample, and to explore a process model by determining whether drive for thinness is concurrently predicted by metacognitive factors. Women receiving treatment for anorexia nervosa (n = 119) and non-clinical community participants (n = 100), aged between 18 and 46 years, completed the Eating Disorders Inventory (3(rd) Edition) and Metacognitions Questionnaire (Brief Version). Body Mass Index (BMI) of 18.5 kg/m(2) differentiated between typical (n = 75) and atypical (n = 44) anorexia nervosa. Multivariate analyses of variance and regression analyses were conducted. Metacognitive profiles were similar in both typical and atypical anorexia nervosa and confirmed as more dysfunctional than in the non-clinical group. Drive for thinness was concurrently predicted in the typical patients by the metacognitive factors, positive beliefs about worry, and need to control thoughts; in the atypical patients by negative beliefs about worry and, inversely, by cognitive self-consciousness, and in the non-clinical group by cognitive self-consciousness. Despite having a healthier weight, the atypical group was as severely affected by dysfunctional metacognitions and drive for thinness as the typical group. Because metacognition concurrently predicted drive for thinness

  10. Benefit and cost curves for typical pollination mutualisms.

    Science.gov (United States)

    Morris, William F; Vázquez, Diego P; Chacoff, Natacha P

    2010-05-01

    Mutualisms provide benefits to interacting species, but they also involve costs. If costs come to exceed benefits as population density or the frequency of encounters between species increases, the interaction will no longer be mutualistic. Thus curves that represent benefits and costs as functions of interaction frequency are important tools for predicting when a mutualism will tip over into antagonism. Currently, most of what we know about benefit and cost curves in pollination mutualisms comes from highly specialized pollinating seed-consumer mutualisms, such as the yucca moth-yucca interaction. There, benefits to female reproduction saturate as the number of visits to a flower increases (because the amount of pollen needed to fertilize all the flower's ovules is finite), but costs continue to increase (because pollinator offspring consume developing seeds), leading to a peak in seed production at an intermediate number of visits. But for most plant-pollinator mutualisms, costs to the plant are more subtle than consumption of seeds, and how such costs scale with interaction frequency remains largely unknown. Here, we present reasonable benefit and cost curves that are appropriate for typical pollinator-plant interactions, and we show how they can result in a wide diversity of relationships between net benefit (benefit minus cost) and interaction frequency. We then use maximum-likelihood methods to fit net-benefit curves to measures of female reproductive success for three typical pollination mutualisms from two continents, and for each system we chose the most parsimonious model using information-criterion statistics. We discuss the implications of the shape of the net-benefit curve for the ecology and evolution of plant-pollinator mutualisms, as well as the challenges that lie ahead for disentangling the underlying benefit and cost curves for typical pollination mutualisms.
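
    The saturating-benefit/linear-cost argument above can be made concrete with a toy net-benefit curve; all parameter values here are hypothetical, chosen only to show the peak at an intermediate visit frequency.

    ```python
    def net_benefit(v, b_max=100.0, h=5.0, c=2.0):
        """Net benefit of v pollinator visits: a saturating benefit
        b_max * v / (v + h) minus a linear cost c * v (toy parameters)."""
        return b_max * v / (v + h) - c * v

    # Setting dN/dv = 0 gives the analytic peak v* = sqrt(b_max * h / c) - h;
    # the discrete search below should land next to it.
    best_v = max(range(0, 200), key=net_benefit)
    ```

    With these parameters the discrete peak lands at v = 11, close to the analytic v* ≈ 10.8; past the peak, each extra visit costs more than it contributes, which is exactly the mutualism-to-antagonism transition the abstract describes.
    
    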

  11. Collisional tearing in a field-reversed sheet pinch assuming nonparallel propagation

    International Nuclear Information System (INIS)

    Quest, K.B.; Coroniti, F.V.

    1985-01-01

    We examine the linear stability properties of the collisional tearing mode in a reversed-field sheet pinch assuming that the wave vector is not parallel to B, where B is the equilibrium magnetic field. We show that pressure balance in the direction of the equilibrium current requires a nonzero perturbed current component deltaJ/sub z/ that is driven toward the center of the pinch. At the center of the pinch, deltaJ/sub z/ goes to zero, and momentum is balanced by coupling to the ion-acoustic mode. In order to achieve current closure, a large perturbed field-aligned current is generated that is strongly localized about the dissipative tearing layer. The relation of this work to the collisionless case is discussed

  12. A model for temperature dependent resistivity of metallic superlattices

    Directory of Open Access Journals (Sweden)

    J. I. Uba

    2015-11-01

    Full Text Available The temperature dependent resistivity of metallic superlattices is, to first order approximation, assumed to have the same form as for a bulk metal, ρ(T) = ρ₀ + aT, which permits describing these structures as a linear atomic chain. The assumption is substantiated with the derivation of the above expression from the standard magnetoresistance equation, in which the second term, a Bragg scattering factor, is a correction to the usual model involving magnon and phonon scattering. Fitting the model to Fe/Cr data from the literature shows that Bragg scattering is dominant at T < 50 K and that the magnon and phonon coefficients are independent of experimental conditions, with typical values of 4.7 × 10⁻⁴ μΩ cm K⁻² and −8 ± 0.7 × 10⁻⁷ μΩ cm K⁻³. From the linear atomic chain model, the dielectric constant ε(q, ω) = 8.33 × 10⁻² at the Debye frequency for all materials, and the acoustic speed and Thomas–Fermi screening length are pressure dependent, with typical values of 1.53 × 10⁴ m/s and 1.80 × 10⁻⁹ m at 0.5 GPa pressure for an Fe/Cr structure.
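
    Fitting ρ(T) = ρ₀ + aT to resistivity data is a plain linear least-squares problem. A self-contained sketch on synthetic data (the temperatures and coefficients below are made up, not the paper's Fe/Cr measurements):

    ```python
    def fit_linear_resistivity(temps, rhos):
        """Ordinary least squares for rho(T) = rho0 + a*T.
        Returns (rho0, a)."""
        n = len(temps)
        mt = sum(temps) / n
        mr = sum(rhos) / n
        a = (sum((t - mt) * (r - mr) for t, r in zip(temps, rhos))
             / sum((t - mt) ** 2 for t in temps))
        return mr - a * mt, a

    # Synthetic, exactly linear data with rho0 = 1.0 and a = 0.004.
    T = [50, 100, 150, 200, 250, 300]
    rho = [1.0 + 0.004 * t for t in T]
    rho0, a = fit_linear_resistivity(T, rho)
    ```

    On real data, restricting the fit to T > 50 K (where Bragg scattering no longer dominates, per the abstract) would be the natural refinement.
    
    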

  13. Macroscopic diffusion models for precipitation in crystalline gallium arsenide

    Energy Technology Data Exchange (ETDEWEB)

    Kimmerle, Sven-Joachim Wolfgang

    2009-09-21

    Based on a thermodynamically consistent model for precipitation in gallium arsenide crystals including surface tension and bulk stresses by Dreyer and Duderstadt, we propose two different mathematical models to describe the size evolution of liquid droplets in a crystalline solid. The first model treats the diffusion-controlled regime of interface motion, while the second model is concerned with the interface-controlled regime of interface motion. Our models ensure conservation of mass and substance. These models generalise the well-known Mullins–Sekerka model for Ostwald ripening. We concentrate on arsenic-rich liquid spherical droplets in a gallium arsenide crystal. Droplets can shrink or grow with time but the centres of droplets remain fixed. The liquid is assumed to be homogeneous in space. Due to the different scales for typical distances between droplets and typical radii of liquid droplets we can formally derive so-called mean field models. For a model in the diffusion-controlled regime we prove this limit by homogenisation techniques under plausible assumptions. These mean field models generalise the Lifshitz–Slyozov–Wagner model, which can be derived rigorously from the Mullins–Sekerka model and is well understood. Mean field models capture the main properties of our system and are well adapted for numerics and further analysis. We determine possible equilibria and discuss their stability. Numerical evidence suggests which of the two regimes might be appropriate to the experimental situation. (orig.)
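
    The diffusion-controlled growth law underlying LSW-type mean field models can be illustrated with a one-line Euler step of dR/dt = (k/R)(1/R_c − 1/R): droplets above the critical radius R_c grow at the expense of those below it. The constants are illustrative, not the paper's.

    ```python
    def lsw_step(radii, r_crit, dt=1e-3, k=1.0):
        """One explicit Euler step of the diffusion-controlled (LSW-type)
        growth law dR/dt = (k/R) * (1/r_crit - 1/R). Droplets with
        R > r_crit grow; droplets with R < r_crit shrink (toy constants)."""
        return [r + dt * (k / r) * (1 / r_crit - 1 / r) for r in radii]

    radii = lsw_step([0.5, 1.0, 2.0], r_crit=1.0)
    ```

    In a full mean-field simulation, R_c itself evolves so that total mass is conserved, which is the coupling that produces Ostwald ripening.
    
    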

  14. Project of computer program for designing the steel with the assumed CCT diagram

    OpenAIRE

    S. Malara; J. Trzaska; L.A. Dobrzański

    2007-01-01

    Purpose: The aim of this paper was to develop a project of a computer-aided method for designing the chemical composition of steel with an assumed CCT diagram. Design/methodology/approach: The purpose has been achieved in four stages. At the first stage characteristic points of the CCT diagram have been determined. At the second stage neural networks have been developed, and CCT diagram terms of similarity have been worked out at the third one. In the last one steel chemical composition optimizat...

  15. Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation

    Science.gov (United States)

    Nyabeze, W. R.

    A rainfall-runoff model which can be interfaced with a Geographical Information System (GIS), integrating the definition and measurement of spatial features and the calculation of their parameter values, presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort required to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.

  16. Regional LLRW processing alternatives applying the DOE REGINALT systems analysis model

    International Nuclear Information System (INIS)

    Beers, G.H.

    1987-01-01

    The DOE Low-Level Waste Management Program has developed a computer-based decision support system of models that may be used by nonprogrammers to evaluate a comprehensive approach to commercial low-level radioactive waste (LLRW) management. The application of REGINALT (Regional Waste Management Alternatives Analysis Model) is described as the model is applied to a hypothetical regional compact for the purpose of examining the technical and economic potential of two waste processing alternatives. Using waste from a typical regional compact, two specific regional waste processing centers are compared for feasibility. Example 1 assumes that a regional supercompaction facility is being developed for the region. Example 2 assumes a regional facility with both supercompaction and incineration. Both examples include identical disposal facilities, except that capacity may differ due to variation in the volume reduction achieved. The two examples are compared with regard to volume reduction achieved, estimated occupational exposure for the processing facilities, and life cycle costs per generated unit of waste. A base case also illustrates current disposal practices. The results of the comparisons are evaluated, and further steps for additional decision support, if necessary, are identified

  17. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    Science.gov (United States)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.

  18. INVESTIGATION OF SEISMIC PERFORMANCE AND DESIGN OF TYPICAL CURVED AND SKEWED BRIDGES IN COLORADO

    Science.gov (United States)

    2018-01-15

    This report summarizes the analytical studies on the seismic performance of typical Colorado concrete bridges, particularly those with curved and skewed configurations. A set of bridge models with different geometric configurations derived from a pro...

  19. Typical exposure of children to EMF: exposimetry and dosimetry

    International Nuclear Information System (INIS)

    Valic, Blaz; Kos, Bor; Gajsek, Peter

    2015-01-01

    A survey study with portable exposimeters, worn by 21 children under the age of 17, and detailed measurements in an apartment above a transformer substation were carried out to determine the typical individual exposure of children to extremely low- and radio-frequency (RF) electromagnetic fields. In total, portable exposimeters were worn for >2400 h. Based on the typical individual exposure, the in situ electric field and specific absorption rate (SAR) values were calculated for an 11-y-old female human model. The average exposure was determined to be low compared with ICNIRP reference levels: 0.29 μT for the extremely low frequency (ELF) magnetic field and 0.09 V m⁻¹ for GSM base stations, 0.11 V m⁻¹ for DECT and 0.10 V m⁻¹ for WiFi; other contributions could be neglected. However, some of the volunteers were more exposed: the highest realistic exposure, to which children could be exposed for a prolonged period of time, was 1.35 μT for the ELF magnetic field and 0.38 V m⁻¹ for DECT, 0.13 V m⁻¹ for WiFi and 0.26 V m⁻¹ for GSM base stations. Numerical calculations of the in situ electric field and SAR values for the typical and the worst-case situation show that, compared with ICNIRP basic restrictions, the average exposure is low. In the typical exposure scenario, the extremely low frequency exposure is <0.03 % and the RF exposure <0.001 % of the corresponding basic restriction. In the worst-case situation, the extremely low frequency exposure is <0.11 % and the RF exposure <0.007 % of the corresponding basic restrictions. Analysis of the exposures and the individuals' perception of being exposed/unexposed to an ELF magnetic field showed that it is impossible to estimate individual exposure to an ELF magnetic field based only on the information provided by the individuals, as they do not have enough knowledge and information to properly identify the sources in their vicinity. (authors)

  20. Effects of stress typicality during speeded grammatical classification.

    Science.gov (United States)

    Arciuli, Joanne; Cupples, Linda

    2003-01-01

    The experiments reported here were designed to investigate the influence of stress typicality during speeded grammatical classification of disyllabic English words by native and non-native speakers. Trochaic nouns and iambic verbs were considered to be typically stressed, whereas iambic nouns and trochaic verbs were considered to be atypically stressed. Experiments 1a and 2a showed that while native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading, there were no overall effects during classification of spoken stimuli. However, a subgroup of native speakers with high error rates did show a significant effect during classification of spoken stimuli. Experiments 1b and 2b showed that non-native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading. Typically stressed words were classified more accurately than atypically stressed words when the stimuli were spoken. Importantly, there was a significant relationship between error rates, vocabulary size and the size of the stress typicality effect in each experiment. We conclude that participants use information about lexical stress to help them distinguish between disyllabic nouns and verbs during speeded grammatical classification. This is especially so for individuals with a limited vocabulary who lack other knowledge (e.g., semantic knowledge) about the differences between these grammatical categories.

  1. Using item response theory to investigate the structure of anticipated affect: do self-reports about future affective reactions conform to typical or maximal models?

    Science.gov (United States)

    Zampetakis, Leonidas A; Lerakis, Manolis; Kafetsios, Konstantinos; Moustakis, Vassilis

    2015-01-01

    In the present research, we used item response theory (IRT) to examine whether affective predictions (anticipated affect) conform to a typical (i.e., what people usually do) or a maximal behavior process (i.e., what people can do). The former correspond to non-monotonic ideal point IRT models, whereas the latter correspond to monotonic dominance IRT models. A convenience, cross-sectional student sample (N = 1624) was used. Participants were asked to report on anticipated positive and negative affect around a hypothetical event (emotions surrounding the start of a new business). We carried out analyses comparing the graded response model (GRM), a dominance IRT model, against the generalized graded unfolding model, an unfolding IRT model. We found that the GRM provided a better fit to the data. Findings suggest that self-report responses to anticipated affect conform to a dominance response process (i.e., maximal behavior). The paper also discusses implications for a growing literature on anticipated affect.
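
    The distinction between dominance and ideal point response processes can be illustrated with toy response functions: a dominance (GRM-style) item probability rises monotonically with the trait level θ, whereas an unfolding (GGUM-style) probability peaks where the person's location matches the item's and falls off on both sides. These are simplified forms with arbitrary parameters, not the fitted models from the study.

    ```python
    import math

    def dominance_prob(theta, a=1.5, b=0.0):
        """Dominance (GRM-style) response curve: monotonic in theta."""
        return 1 / (1 + math.exp(-a * (theta - b)))

    def ideal_point_prob(theta, delta=0.0, tau=1.0):
        """Unfolding (GGUM-style) response curve: peaks at theta = delta,
        then declines with distance (simplified single-peaked form)."""
        return math.exp(-((theta - delta) ** 2) / (2 * tau ** 2))

    thetas = [-2, -1, 0, 1, 2]
    dom = [dominance_prob(t) for t in thetas]   # strictly increasing
    unf = [ideal_point_prob(t) for t in thetas] # single-peaked at 0
    ```

    Comparing which family of curves fits observed responses better (e.g. via information criteria, as the study does) is what distinguishes a maximal from a typical response process.
    
    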

  2. Using item response theory to investigate the structure of anticipated affect: Do self-reports about future affective reactions conform to typical or maximal models?

    Directory of Open Access Journals (Sweden)

    Leonidas A Zampetakis

    2015-09-01

    Full Text Available In the present research we used item response theory (IRT) to examine whether affective predictions (anticipated affect) conform to a typical (i.e., what people usually do) or a maximal behavior process (i.e., what people can do). The former correspond to non-monotonic ideal point IRT models whereas the latter correspond to monotonic dominance IRT models. A convenience, cross-sectional student sample (N=1624) was used. Participants were asked to report on anticipated positive and negative affect around a hypothetical event (emotions surrounding the start of a new business). We carried out analysis comparing the Graded Response Model (GRM), a dominance IRT model, against the Generalized Graded Unfolding Model (GGUM), an unfolding IRT model. We found that the GRM provided a better fit to the data. Findings suggest that the self-report responses to anticipated affect conform to a dominance response process (i.e., maximal behavior). The paper also discusses implications for a growing literature on anticipated affect.

  3. Portion distortion: typical portion sizes selected by young adults.

    Science.gov (United States)

    Schwartz, Jaime; Byrd-Bredbenner, Carol

    2006-09-01

    The incidence of obesity has increased in parallel with increasing portion sizes of individually packaged and ready-to-eat prepared foods as well as foods served at restaurants. Portion distortion (perceiving large portion sizes as appropriate amounts to eat at a single eating occasion) may contribute to increasing energy intakes and expanding waistlines. The purpose of this study was to determine typical portion sizes that young adults select, how typical portion sizes compare with reference portion sizes (based in this study on the Nutrition Labeling and Education Act's quantities of food customarily eaten per eating occasion), and whether the size of typical portions has changed over time. Young adults (n=177, 75% female, age range 16 to 26 years) at a major northeastern university. Participants served themselves typical portion sizes of eight foods at breakfast (n=63) or six foods at lunch or dinner (n=62, n=52, respectively). Typical portion-size selections were unobtrusively weighed. A unit score was calculated by awarding 1 point for each food with a typical portion size that was within 25% larger or smaller than the reference portion; larger or smaller portions were given 0 points. Thus, each participant's unit score could range from 0 to 8 at breakfast or 0 to 6 at lunch and dinner. Analysis of variance or t tests were used to determine whether typical and reference portion sizes differed, and whether typical portion sizes changed over time. Mean unit scores (+/-standard deviation) were 3.63+/-1.27 and 1.89+/-1.14, for breakfast and lunch/dinner, respectively, indicating little agreement between typical and reference portion sizes. Typical portion sizes in this study tended to be significantly different from those selected by young adults in a similar study conducted 2 decades ago. Portion distortion seems to affect the portion sizes selected by young adults for some foods. 
This phenomenon has the potential to hinder weight loss, weight maintenance, and
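The unit score described above is a simple within-tolerance count. A minimal sketch of the calculation follows; the food names and gram weights are invented for illustration and are not the study's data:

```python
def unit_score(typical, reference, tolerance=0.25):
    """Award 1 point for each food whose typical portion falls within
    +/- tolerance (here 25%) of its reference portion; 0 otherwise."""
    return sum(
        1 for food, ref in reference.items()
        if abs(typical[food] - ref) <= tolerance * ref
    )

# Hypothetical breakfast selections in grams (not the study's data)
reference = {"cereal": 30, "milk": 244, "juice": 240, "toast": 50}
typical = {"cereal": 55, "milk": 250, "juice": 320, "toast": 45}
print(unit_score(typical, reference))  # -> 2 (only milk and toast are within 25%)
```

A participant choosing oversized portions for most foods would score near 0, which is how the study quantifies portion distortion.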

  4. A Simple Accounting-based Valuation Model for the Debt Tax Shield

    Directory of Open Access Journals (Sweden)

    Andreas Scholze

    2010-05-01

    Full Text Available This paper describes a simple way to integrate the debt tax shield into an accounting-based valuation model. The market value of equity is determined by forecasting residual operating income, which is calculated by charging operating income for the operating assets at a required return that accounts for the tax benefit that comes from borrowing to raise cash for the operations. The model assumes that the firm maintains a deterministic financial leverage ratio, which tends to converge quickly to typical steady-state levels over time. From a practical point of view, this characteristic is of particular help, because it allows a continuing value calculation at the end of a short forecast period.
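The valuation mechanics can be sketched as follows. All numbers, the 8% required return, and the constant asset-growth assumption are illustrative only; in the paper's model the required return is the rate that embeds the debt tax benefit:

```python
def equity_value(noa, net_debt, op_income_forecast, required_return):
    """Accounting-based valuation sketch: equity = book value of net
    operating assets minus net debt, plus the present value of residual
    operating income (operating income less a capital charge on
    beginning-of-period operating assets)."""
    value = noa - net_debt
    for t, oi in enumerate(op_income_forecast, start=1):
        residual_income = oi - required_return * noa
        value += residual_income / (1 + required_return) ** t
        noa *= 1.05  # assumed 5% reinvestment growth, purely illustrative
    return value

# Illustrative inputs: 1000 of operating assets, 400 of net debt
print(round(equity_value(1000.0, 400.0, [120.0, 126.0, 132.0], 0.08), 1))
```

If forecast operating income exactly equals the capital charge, residual income is zero and equity value collapses to book value, which is the sense in which the model "anchors" on the accounting numbers.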

  5. Toddlers' categorization of typical and scrambled dolls and cars.

    Science.gov (United States)

    Heron, Michelle; Slaughter, Virginia

    2008-09-01

    Previous research has demonstrated discrimination of scrambled from typical human body shapes at 15-18 months of age [Slaughter, V., & Heron, M. (2004). Origins and early development of human body knowledge. Monographs of the Society for Research in Child Development, 69]. In the current study 18-, 24- and 30-month-old infants were presented with four typical and four scrambled dolls in a sequential touching procedure, to assess the development of explicit categorization of human body shapes. Infants were also presented with typical and scrambled cars, allowing comparison of infants' categorization of scrambled and typical exemplars in a different domain. Spontaneous comments regarding category membership were recorded. Girls categorized dolls and cars as typical or scrambled at 30 months, whereas boys only categorized the cars. Earliest categorization was for typical and scrambled cars, at 24 months, but only for boys. Language-based knowledge, coded from infants' comments, followed the same pattern. This suggests that human body knowledge does not have privileged status in infancy. Gender differences in performance are discussed.

  6. ZMOTTO- MODELING THE INTERNAL COMBUSTION ENGINE

    Science.gov (United States)

    Zeleznik, F. J.

    1994-01-01

    The ZMOTTO program was developed to model mathematically a spark-ignited internal combustion engine. ZMOTTO is a large, general purpose program whose calculations can be established at five levels of sophistication. These five models range from an ideal cycle requiring only thermodynamic properties, to a very complex representation demanding full combustion kinetics, transport properties, and poppet valve flow characteristics. ZMOTTO is a flexible and computationally economical program based on a system of ordinary differential equations for cylinder-averaged properties. The calculations assume that heat transfer is expressed in terms of a heat transfer coefficient and that the cylinder average of kinetic plus potential energies remains constant. During combustion, the pressures of burned and unburned gases are assumed equal and their heat transfer areas are assumed proportional to their respective mass fractions. Even the simplest ZMOTTO model provides for residual gas effects, spark advance, exhaust gas recirculation, supercharging, and throttling. In the more complex models, 1) finite rate chemistry replaces equilibrium chemistry in descriptions of both the flame and the burned gases, 2) poppet valve formulas represent fluid flow instead of a zero pressure drop flow, and 3) flame propagation is modeled by mass burning equations instead of as an instantaneous process. Input to ZMOTTO is determined by the model chosen. Thermodynamic data is required for all models. Transport properties and chemical kinetics data are required only as the model complexity grows. Other input includes engine geometry, working fluid composition, operating characteristics, and intake/exhaust data. ZMOTTO accommodates a broad spectrum of reactants. The program will calculate many Otto cycle performance parameters for a number of consecutive cycles (a cycle being an interval of 720 crankangle degrees). 
A typical case will have a number of initial ideal cycles and progress through levels
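At its simplest level such a program needs only closed-form thermodynamics. The classic air-standard result for the ideal Otto cycle illustrates that level of modeling (this is textbook theory, not ZMOTTO's actual code):

```python
def otto_efficiency(compression_ratio, gamma=1.4):
    """Thermal efficiency of the ideal air-standard Otto cycle:
    eta = 1 - r**(1 - gamma), where r is the compression ratio and
    gamma the specific-heat ratio of the working fluid."""
    return 1.0 - compression_ratio ** (1.0 - gamma)

# Efficiency rises with compression ratio, with diminishing returns
for r in (6.0, 9.0, 12.0):
    print(r, round(otto_efficiency(r), 3))
```

The more sophisticated ZMOTTO levels replace this single closed-form expression with a system of ODEs for the cylinder-averaged properties.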

  7. Exploration of Rice Husk Compost as an Alternate Organic Manure to Enhance the Productivity of Blackgram in Typic Haplustalf and Typic Rhodustalf

    Directory of Open Access Journals (Sweden)

    Subramanium Thiyageshwari

    2018-02-01

Full Text Available The present study aimed at using the cellulolytic bacterium Enhydrobacter and the fungus Aspergillus sp. for preparing compost from rice husk (RH). Further, the prepared compost was tested for its effect on blackgram growth promotion along with different levels of the recommended dose of fertilizer (RDF) in black soil (typic Haplustalf) and red soil (typic Rhodustalf). The results revealed that inoculation with the lignocellulolytic fungus (LCF) Aspergillus sp. @ 2% was the most efficient method of composting within a short period. Composted rice husk (CRH) was characterized through scanning electron microscopy (SEM) to identify significant structural changes. At the end of composting, N, P and K content increased with decreases in CO2 evolution and the C:N and C:P ratios. In comparison to inorganic fertilization, integrated application of CRH @ 5 t ha−1 with 50% RDF and biofertilizers increased grain yield over 100% RDF by 16% in typic Haplustalf and 17% in typic Rhodustalf soil. The crude protein content was highest, at 20% and 21% in typic Haplustalf and typic Rhodustalf soils, respectively, with the combined application of CRH, 50% RDF and biofertilizers. Nutrient-rich CRH has proved its efficiency on crop growth and soil fertility.

  8. [Typical atrial flutter: Diagnosis and therapy].

    Science.gov (United States)

    Thomas, Dierk; Eckardt, Lars; Estner, Heidi L; Kuniss, Malte; Meyer, Christian; Neuberger, Hans-Ruprecht; Sommer, Philipp; Steven, Daniel; Voss, Frederik; Bonnemeier, Hendrik

    2016-03-01

Typical, cavotricuspid-dependent atrial flutter is the most common atrial macroreentry tachycardia. The incidence of atrial flutter (typical and atypical forms) is age-dependent, with 5/100,000 in patients less than 50 years and approximately 600/100,000 in subjects > 80 years of age. Concomitant heart failure or pulmonary disease further increases the risk of typical atrial flutter. Patients with atrial flutter may present with symptoms of palpitations, reduced exercise capacity, chest pain, or dyspnea. The risk of thromboembolism is probably similar to that of atrial fibrillation; therefore, the same antithrombotic prophylaxis is required in atrial flutter patients. Acutely symptomatic cases may be subjected to cardioversion or pharmacologic rate control to relieve symptoms. Catheter ablation of the cavotricuspid isthmus represents the primary choice in long-term therapy, associated with high procedural success (> 97%) and low complication rates (0.5%). This article represents the third part of a manuscript series designed to improve professional education in the field of cardiac electrophysiology. Mechanistic and clinical characteristics as well as management of isthmus-dependent atrial flutter are described in detail. Electrophysiological findings and catheter ablation of the arrhythmia are highlighted.

  9. Time-dependent inhomogeneous jet models for BL Lac objects

    Science.gov (United States)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-05-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  10. ANALYTICAL MODEL OF DAMAGED AIRCRAFT SKIN BONDED REPAIRS ASSUMING THE MATERIAL PROPERTIES DEGRADATION

    Directory of Open Access Journals (Sweden)

    2016-01-01

Full Text Available The search for optimal variants of composite repair patches makes it possible to increase the service life of a damaged airplane structure. To choose the repair method sensibly, it is necessary to have a computational complex that predicts the stress-strain condition of the "structure-adhesive-patch" system and takes into account damage growth, considering the change of material properties. A variant of the computational complex based on the inclusion method is proposed. For calculation purposes the repair bonded joint is divided into two areas: a metal plate with a patch-shaped hole and a "patch-adhesive layer-skin" composite plate (the inclusion). The calculation stages are: (1) Evaluation of the patch influence on the skin stress-strain condition and of the stress distribution between skin and patch in the undamaged case; the stress-strain condition is calculated separately for the skin with the hole and for the inclusion, and the solutions are coupled through strain compatibility. (2) Definition of the damage growth parameters at the new stress-strain condition due to the bonded patch; skin crack stress intensity factors are found to identify the crack growth velocity, with the patch modelled as a set of "springs" bridging the crack. (3) Degradation analysis of the elasticity properties of the patch material. (4) Evaluation of repair effectiveness with respect to the reduction of crack growth velocity in the initial material in comparison with the case of no patch. A calculation example of crack repair effectiveness as a function of the number of loading cycles is given for a 7075-T6 aluminum skin. The repair patches are carbon-epoxy, glass-epoxy and boron-epoxy material systems with quasi-isotropic layup, and the GLARE hybrid metal-polymeric material. The analysis shows the high effectiveness of the carbon-epoxy patch. Due to its low stiffness, the glass-epoxy patch demonstrates the least effectiveness. 
GLARE patch containing the fiberglass plies oriented across the crack has the same effectiveness as the carbon and

  11. Contribution of milk production to global greenhouse gas emissions. An estimation based on typical farms.

    Science.gov (United States)

    Hagemann, Martin; Ndambi, Asaah; Hemme, Torsten; Latacz-Lohmann, Uwe

    2012-02-01

Studies on the contribution of milk production to global greenhouse gas (GHG) emissions are rare (FAO 2010) and often based on crude data which do not appropriately reflect the heterogeneity of farming systems. This article estimates GHG emissions from milk production in different dairy regions of the world based on harmonised farm data and assesses the contribution of milk production to global GHG emissions. The methodology comprises three elements: (1) the International Farm Comparison Network (IFCN) concept of typical farms and the related globally standardised dairy model farms representing 45 dairy regions in 38 countries; (2) a partial life cycle assessment model for estimating GHG emissions of the typical dairy farms; and (3) standard regression analysis to estimate GHG emissions from milk production in countries for which no typical farms are available in the IFCN database. Across the 117 typical farms in the 38 countries analysed, the average emission rate is 1.50 kg CO(2) equivalents (CO(2)-eq.)/kg milk. The contribution of milk production to global anthropogenic emissions is estimated at 1.3 Gt CO(2)-eq./year, accounting for 2.65% of total global anthropogenic emissions (49 Gt; IPCC, Synthesis Report for Policy Makers, Valencia, Spain, 2007). We emphasise that our estimates of the contribution of milk production to global GHG emissions are subject to uncertainty. Part of the uncertainty stems from the choice of the appropriate methods for estimating emissions at the level of the individual animal.
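The third methodological element, extrapolating an emission rate to countries without typical farms, is a standard regression. A minimal ordinary-least-squares sketch follows; the data points, the choice of milk yield as the predictor, and the predicted value are fabricated placeholders, not IFCN figures:

```python
def ols_fit(x, y):
    """Ordinary least squares for a single predictor: returns the
    (slope, intercept) pair minimizing the sum of squared errors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Invented example: milk yield (t/cow/year) vs. emission rate (kg CO2-eq/kg)
yields = [2.0, 4.0, 6.0, 8.0, 10.0]
rates = [2.6, 2.0, 1.6, 1.3, 1.1]
slope, intercept = ols_fit(yields, rates)
predicted = slope * 5.0 + intercept  # country without a typical farm
```

The fitted line then supplies an emission rate for each country missing from the typical-farm database, which is multiplied by that country's milk production to fill in the global total.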

  12. A physical probabilistic model to predict failure rates in buried PVC pipelines

    International Nuclear Information System (INIS)

    Davis, P.; Burn, S.; Moglia, M.; Gould, S.

    2007-01-01

    For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from large volumes of existing historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exists and confident forecasts of future pipe failures cannot be made from historical data alone. To solve this problem, this paper presents a physical probabilistic model, which has been developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, inherent defect size is treated as a stochastic variable, and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects, subjected to typical operating conditions. As with inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate to model uncertainty in predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length/per year) as a function of pipe age. To validate the model, predicted failure rates are compared to aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. In the absence of actual operating pressure data in the UKWIR database, typical
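The Monte Carlo step can be sketched as below. The Weibull parameters and the defect-size-to-lifetime mapping are illustrative stand-ins; they do not reproduce the paper's linear elastic fracture mechanics calculation:

```python
import random

def simulate_pipe_lifetimes(n, shape=1.8, scale=0.4, k=50.0, seed=0):
    """Monte Carlo sketch: draw inherent defect sizes (mm) from a
    2-parameter Weibull distribution and map each to a time to brittle
    fracture via an assumed inverse-square relation (larger defects
    fail sooner). Parameters are illustrative, not fitted to field data."""
    rng = random.Random(seed)
    lifetimes = []
    for _ in range(n):
        defect = rng.weibullvariate(scale, shape)  # alpha=scale, beta=shape
        lifetimes.append(k / max(defect, 1e-6) ** 2)
    return sorted(lifetimes)

lives = simulate_pipe_lifetimes(10_000)
median_life = lives[len(lives) // 2]
```

From the sorted lifetime sample one can form an empirical survival curve and hence a hazard function, which is the quantity the paper converts into an expected failure rate per pipe length per year.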

  13. Perception of similarity: a model for social network dynamics

    International Nuclear Information System (INIS)

    Javarone, Marco Alberto; Armano, Giuliano

    2013-01-01

Some properties of social networks (e.g., the mixing patterns and the community structure) appear deeply influenced by the individual perception of people. In this work we map behaviors by considering similarity and popularity of people, also assuming that each person has his/her own perception and interpretation of similarity. Although investigated in different ways (depending on the specific scientific framework), from a computational perspective similarity is typically calculated as a distance measure. In accordance with this view, to represent social network dynamics we developed an agent-based model on top of a hyperbolic space on which individual distance measures are calculated. Simulations, performed in accordance with the proposed model, generate small-world networks that exhibit a community structure. We deem this model to be valuable for analyzing the relevant properties of real social networks. (paper)
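In hyperbolic-space network models of this kind, each node typically gets polar coordinates (r, θ) and distances follow the hyperbolic law of cosines. A minimal sketch of such a distance measure, which an agent could treat as a dissimilarity score (this is the standard textbook formula, not the authors' code):

```python
import math

def hyperbolic_distance(r1, t1, r2, t2):
    """Distance between two points of the hyperbolic plane given in
    native polar coordinates, via the hyperbolic law of cosines:
    cosh d = cosh r1 cosh r2 - sinh r1 sinh r2 cos(dtheta)."""
    dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(arg, 1.0))  # clamp guards against rounding below 1

d_self = hyperbolic_distance(1.2, 0.3, 1.2, 0.3)          # ~0: identical nodes
d_far = hyperbolic_distance(2.0, 0.0, 3.0, math.pi)       # diametrically opposed
```

In such models the radial coordinate is usually read as popularity and the angular coordinate as a similarity attribute, so small hyperbolic distance combines the two notions the abstract describes.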

  14. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 (2010-10-01) May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES, TRIBAL SELF-GOVERNANCE, Construction, NEPA Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal environmental...

  15. Time to discontinuation of atypical versus typical antipsychotics in the naturalistic treatment of schizophrenia

    Directory of Open Access Journals (Sweden)

    Swartz Marvin

    2006-02-01

Full Text Available Abstract Background There is an ongoing debate over whether atypical antipsychotics are more effective than typical antipsychotics in the treatment of schizophrenia. This naturalistic study compares atypical and typical antipsychotics on time to all-cause medication discontinuation, a recognized index of medication effectiveness in the treatment of schizophrenia. Methods We used data from a large, 3-year, observational, non-randomized, multisite study of schizophrenia, conducted in the U.S. between 7/1997 and 9/2003. Patients who were initiated on oral atypical antipsychotics (clozapine, olanzapine, risperidone, quetiapine, or ziprasidone) or oral typical antipsychotics (low, medium, or high potency) were compared on time to all-cause medication discontinuation for 1 year following initiation. Treatment group comparisons were based on treatment episodes using 3 statistical approaches (Kaplan-Meier survival analysis, Cox Proportional Hazards regression model, and propensity score-adjusted bootstrap resampling methods). To further assess the robustness of the findings, sensitivity analyses were performed, including the use of (a) only 1 medication episode for each patient, the one with which the patient was treated first, and (b) all medication episodes, including those simultaneously initiated on more than 1 antipsychotic. Results Mean time to all-cause medication discontinuation was longer on atypical (N = 1132, 256.3 days) compared to typical antipsychotics (N = 534, 197.2 days; p Conclusion In the usual care of schizophrenia patients, time to medication discontinuation for any cause appears significantly longer for atypical than typical antipsychotics regardless of the typical antipsychotic potency level. Findings were primarily driven by clozapine and olanzapine, and to a lesser extent by risperidone. 
Furthermore, only clozapine and olanzapine therapy showed consistently and significantly longer treatment duration compared to perphenazine, a medium

  16. Generation of typical meteorological year for different climates of China

    International Nuclear Information System (INIS)

    Jiang, Yingni

    2010-01-01

Accurate prediction of building energy performance requires precise information on the local climate. Typical weather year files like the typical meteorological year (TMY) are commonly used in building simulation. They are also essential for numerical analysis of sustainable and renewable energy systems. The present paper presents the generation of typical meteorological years (TMY) for eight cities representing the major climate zones of China. The data set, which includes global solar radiation data and other meteorological parameters such as dry bulb temperature, relative humidity and wind speed, has been analyzed. The typical meteorological year is generated from the available meteorological data recorded during the period 1995-2004, using the Finkelstein-Schafer statistical method. The cumulative distribution function (CDF) for each year is compared with the CDF for the long-term composite of all the years in the period. A typical month for each of the 12 calendar months is selected from the period by choosing the one with the smallest deviation from the long-term CDF. The 12 typical months selected from the different years are used for the formulation of a TMY.
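The Finkelstein-Schafer selection step compares a candidate month's empirical CDF with the long-term CDF and keeps the closest candidate. A minimal sketch, with invented daily values standing in for the weather record:

```python
def fs_statistic(candidate, long_term):
    """Finkelstein-Schafer statistic (sketch): mean absolute difference
    between the empirical CDF of a candidate month's daily values and
    the long-term CDF, evaluated at the candidate's data points."""
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return sum(abs(ecdf(candidate, x) - ecdf(long_term, x))
               for x in candidate) / len(candidate)

def pick_typical_year(monthly_data, long_term):
    """Pick the year whose version of a calendar month deviates least
    from the long-term composite."""
    return min(monthly_data,
               key=lambda yr: fs_statistic(monthly_data[yr], long_term))

# Invented daily means for one calendar month across candidate years
long_term = [1, 2, 3, 4, 5, 6]
candidates = {1995: [4, 5, 6], 1996: [2, 4, 6]}
best = pick_typical_year(candidates, long_term)
```

In practice the statistic is computed per weather parameter (temperature, humidity, wind, radiation) and the parameter-wise statistics are combined with weights before the typical month is chosen.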

  17. Assessment of the quality seen in a restaurant typical theme

    Directory of Open Access Journals (Sweden)

    Francisco Alves Pinheiro

    2009-03-01

Full Text Available To ensure the satisfaction of external customers it is necessary to know their needs. From this perspective, this work aims to assess the perception of quality by the external customers of a typical theme restaurant located in the "Bodódromo" food square in the city of Petrolina/PE. To this end, a case study was carried out using the SERVQUAL model of Parasuraman et al. (1985) to collect information. The results indicated a need for improvement in the services provided by the restaurant.

  18. Modelling of the reactive sputtering process with non-uniform discharge current density and different temperature conditions

    International Nuclear Information System (INIS)

    Vasina, P; Hytkova, T; Elias, M

    2009-01-01

The majority of current models of reactive magnetron sputtering assume a uniform shape of the discharge current density and the same temperature near the target and the substrate. However, in the real experimental set-up, the presence of the magnetic field causes high density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition to this, the heating of the background gas by sputtered particles, which is usually referred to as gas rarefaction, plays an important role. This paper presents an extended model of reactive magnetron sputtering that assumes a non-uniform discharge current density and accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of the reactive sputtering rather than to the prediction of the coating properties. Outputs of this model are compared with those that assume a uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to the modelling of the radial variation of the target composition near transitions from the metallic to the compound mode and vice versa. A study of the target utilization in the metallic and compound modes is performed for two different discharge current density profiles corresponding to typical two-pole and multipole magnet assemblies available on the market now. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.

  19. Phylogenetic relationships of typical antbirds (Thamnophilidae) and test of incongruence based on Bayes factors

    Directory of Open Access Journals (Sweden)

    Nylander Johan AA

    2004-07-01

Full Text Available Abstract Background The typical antbirds (Thamnophilidae) form a monophyletic and diverse family of suboscine passerines that inhabit neotropical forests. However, the phylogenetic relationships within this assemblage are poorly understood. Herein, we present a hypothesis of the generic relationships of this group based on Bayesian inference analyses of two nuclear introns and the mitochondrial cytochrome b gene. The level of phylogenetic congruence between the individual genes has been investigated utilizing Bayes factors. We also explore how changes in the substitution models affected the observed incongruence between partitions of our data set. Results The phylogenetic analysis supports both novel relationships, as well as traditional groupings. Among the more interesting novel relationships suggested is that the Terenura antwrens, the wing-banded antbird (Myrmornis torquata), the spot-winged antshrike (Pygiptila stellaris) and the russet antshrike (Thamnistes anabatinus) are sisters to all other typical antbirds. The remaining genera fall into two major clades. The first includes antshrikes, antvireos and the Herpsilochmus antwrens, while the second clade consists of most antwren genera, the Myrmeciza antbirds, the "professional" ant-following antbirds, and allied species. Our results also support previously suggested polyphyly of Myrmotherula antwrens and Myrmeciza antbirds. The tests of phylogenetic incongruence, using Bayes factors, clearly suggest that allowing the gene partitions to have separate topology parameters increased the model likelihood. However, changing a component of the nucleotide substitution model had a much higher impact on the model likelihood. Conclusions The phylogenetic results are in broad agreement with traditional classification of the typical antbirds, but some relationships are unexpected based on external morphology. In these cases their true affinities may have been obscured by convergent evolution and

  20. Electric Power Distribution System Model Simplification Using Segment Substitution

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2018-05-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  1. Electric Power Distribution System Model Simplification Using Segment Substitution

    International Nuclear Information System (INIS)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; Reed, Gregory F.

    2017-01-01

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
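The QSTS idea, repeatedly solving a static load flow across a time series, can be illustrated on a toy two-bus feeder. The per-unit values and the fixed-point solver below are illustrative simplifications and are not the segment-substitution method itself:

```python
def qsts_voltages(load_profile, z_line=complex(0.01, 0.02), v_src=1.0):
    """Quasi-static time-series sketch: each time step is an independent
    static load-flow solve (here by fixed-point iteration) on a
    source--line--load two-bus system, in per unit."""
    voltages = []
    for p, q in load_profile:
        v = complex(v_src, 0.0)
        for _ in range(30):                      # fixed-point iteration
            i = (complex(p, q) / v).conjugate()  # I = (S/V)*
            v = v_src - z_line * i
        voltages.append(abs(v))
    return voltages

profile = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.2)]  # hourly (P, Q) in pu
volts = qsts_voltages(profile)
```

With a one-second time step over a year this inner solve runs tens of millions of times, which is exactly the computational burden that motivates replacing detailed feeder segments with simplified black-box equivalents.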

  2. Economic and environmental assessment of cellulosic ethanol production scenarios annexed to a typical sugar mill.

    Science.gov (United States)

    Ali Mandegari, Mohsen; Farzad, Somayeh; Görgens, Johann F

    2017-01-01

In this work different biorefinery scenarios were investigated, concerning the co-production of bioethanol and electricity from the lignocellulose available at a typical sugar mill, as possible extensions to the current combustion of bagasse for steam and electricity production and the burning of trash on-field. In scenario 1, all bagasse and brown leaves are utilized in a biorefinery and coal is burnt in the existing inefficient sugar mill boiler. Scenarios 2 and 3 assume a new centralized CHP unit without and with coal co-combustion, respectively. In scenarios 4 and 5, the effect of water-insoluble solids loading was studied. All scenarios provided energy for the sugar mill and the ethanol plant, with export of surplus electricity. Economic analysis determined that scenario 1 was the most viable scenario due to lower capital cost and economies of scale. Based on Life Cycle Assessment (LCA) results, scenario 2 outperformed the other scenarios, while three scenarios showed a lower contribution to environmental burdens than the current situation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Fitting measurement models to vocational interest data: are dominance models ideal?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz; Rounds, James; Williams, Bruce A

    2009-09-01

    In this study, the authors examined the item response process underlying 3 vocational interest inventories: the Occupational Preference Inventory (C.-P. Deng, P. I. Armstrong, & J. Rounds, 2007), the Interest Profiler (J. Rounds, T. Smith, L. Hubert, P. Lewis, & D. Rivkin, 1999; J. Rounds, C. M. Walker, et al., 1999), and the Interest Finder (J. E. Wall & H. E. Baker, 1997; J. E. Wall, L. L. Wise, & H. E. Baker, 1996). Item response theory (IRT) dominance models, such as the 2-parameter and 3-parameter logistic models, assume that item response functions (IRFs) are monotonically increasing as the latent trait increases. In contrast, IRT ideal point models, such as the generalized graded unfolding model, have IRFs that peak where the latent trait matches the item. Ideal point models are expected to fit better because vocational interest inventories ask about typical behavior, as opposed to requiring maximal performance. Results show that across all 3 interest inventories, the ideal point model provided better descriptions of the response process. The importance of specifying the correct item response model for precise measurement is discussed. In particular, scores computed by a dominance model were shown to be sometimes illogical: individuals endorsing mostly realistic or mostly social items were given similar scores, whereas scores based on an ideal point model were sensitive to which type of items respondents endorsed.
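The contrast between the two response processes is easiest to see in their item response functions. A sketch using the 2PL for the dominance case and a simple squared-distance unfolding curve standing in for the ideal-point case (the latter is an illustrative stand-in, not the actual generalized graded unfolding model):

```python
import math

def irf_2pl(theta, a, b):
    """Dominance (2PL) item response function: endorsement probability
    rises monotonically as the latent trait theta increases past the
    item location b, with discrimination a."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def irf_ideal_point(theta, delta, width=1.0):
    """Ideal-point IRF sketch: endorsement peaks where theta matches
    the item location delta and falls off in both directions."""
    return math.exp(-((theta - delta) / width) ** 2)
```

Under the dominance curve a person far above the item location endorses it almost surely; under the ideal-point curve that same person is unlikely to endorse it, which is why the two models can score the same response pattern very differently.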

  4. 12 CFR 408.6 - Typical classes of action.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 (2010-01-01) Typical classes of action. Section 408.6, Banks and Banking, EXPORT-IMPORT BANK OF THE UNITED STATES, PROCEDURES FOR COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT, Eximbank Implementing Procedures § 408.6 Typical classes of action. (a) Section 1507.3...

  5. Foods Inducing Typical Gastroesophageal Reflux Disease Symptoms in Korea

    OpenAIRE

    Choe, Jung Wan; Joo, Moon Kyung; Kim, Hyo Jung; Lee, Beom Jae; Kim, Ji Hoon; Yeon, Jong Eun; Park, Jong-Jae; Kim, Jae Seon; Byun, Kwan Soo; Bak, Young-Tae

    2017-01-01

    Background/Aims Several specific foods are known to precipitate gastroesophageal reflux disease (GERD) symptoms and GERD patients are usually advised to avoid such foods. However, foods consumed daily are quite variable according to regions, cultures, etc. This study was done to elucidate the food items which induce typical GERD symptoms in Korean patients. Methods One hundred and twenty-six Korean patients with weekly typical GERD symptoms were asked to mark all food items that induced typic...

  6. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    Science.gov (United States)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has the outstanding advantages of good parallelism, wide measurement range and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on the measurement range and accuracy, and also determines the cost of use, the optimization of station deployment was researched in this paper. Firstly, a positioning error model was established. Then, focusing on the small network consisting of three stations, the typical deployments and their error distribution characteristics were studied. Finally, by measuring a simulated fuselage with the typical deployments at an industrial site and comparing the results with a Laser Tracker, some conclusions were obtained. The comparison shows that, under existing prototype conditions, the I_3 typical deployment, in which the three stations are distributed in a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm in a range of 12 m. Meanwhile, the C_3 typical deployment, in which the three stations are uniformly distributed along a semicircle, has an average error of 0.17 mm and a maximum error of 0.28 mm. Obviously, the C_3 typical deployment controls precision better than the I_3 type. This research provides effective theoretical support for global measurement network optimization in future work.

  7. A Chemo-Mechanical Model of Diffusion in Reactive Systems

    Directory of Open Access Journals (Sweden)

    Kerstin Weinberg

    2018-02-01

    Full Text Available The functional properties of multi-component materials are often determined by a rearrangement of their different phases and by chemical reactions of their components. In this contribution, a material model is presented which enables computational simulations and structural optimization of solid multi-component systems. Typical systems of this kind are anodes in batteries, reactive polymer blends and propellants. The physical processes which are assumed to contribute to the microstructural evolution are: (i) particle exchange and mechanical deformation; (ii) spinodal decomposition and phase coarsening; (iii) chemical reactions between the components; and (iv) energetic forces associated with the elastic field of the solid. To illustrate the capability of the deduced coupled field model, three-dimensional Non-Uniform Rational Basis Spline (NURBS)-based finite element simulations of such multi-component structures are presented.

  8. A nodal model for the simulation of a PWR core

    International Nuclear Information System (INIS)

    Souza Pinto, R. de.

    1981-06-01

    A computer program in FORTRAN was developed to simulate the neutronic and thermal-hydraulic transient behaviour of a PWR reactor core. The reactor power is calculated using a point kinetics model with six groups of delayed neutron precursors. Fission product decay heat was considered by assuming three effective decay heat groups. A nodal model was employed for the treatment of heat transfer in the fuel rod, with integration of the heat equation by the lumped-parameter technique; axial conduction was neglected. A single-channel nodal model was developed for the thermo-hydrodynamic simulation, using mass and energy conservation equations for the control volumes. The effect of axial pressure variation was neglected. The computer program was tested, with good results, through the simulation of the transient behaviour of postulated accidents in a typical PWR. (Author) [pt
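    The point-kinetics scheme with six delayed-neutron precursor groups lends itself to a compact numerical sketch. The kinetic parameters and the explicit Euler integration below are illustrative assumptions (typical thermal-reactor values), not taken from the cited code:

```python
import numpy as np

# Illustrative six-group delayed-neutron data (assumed for this sketch)
# and a simple explicit Euler integration of the point-kinetics equations.
beta_i = np.array([2.47e-4, 1.385e-3, 1.222e-3, 2.645e-3, 8.32e-4, 1.69e-4])
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # decay constants, 1/s
beta = beta_i.sum()
Lambda = 1e-4  # mean neutron generation time, s (assumed)

def step(P, C, rho, dt):
    """One Euler step of power P and the six precursor concentrations C."""
    dP = ((rho - beta) / Lambda) * P + np.dot(lam_i, C)
    dC = beta_i / Lambda * P - lam_i * C
    return P + dt * dP, C + dt * dC

# Start from the steady state C_i = beta_i * P / (Lambda * lambda_i) and
# integrate 1 s at zero reactivity: the power should remain constant.
P, C = 1.0, beta_i / (Lambda * lam_i)
for _ in range(10000):
    P, C = step(P, C, rho=0.0, dt=1e-4)
print(round(P, 6))  # → 1.0
```

    At zero reactivity the steady state is preserved; inserting a small positive `rho` (e.g. 0.1 · beta) would show the familiar prompt jump followed by a slow precursor-governed rise.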

  9. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis of childhood apraxia of speech (CAS) in Sweden and compare speech characteristics and symptoms to those of earlier survey findings in mainly English speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They rated their own assessment skills and estimated clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per year per SLP. The results support and add to findings from studies of CAS in English-speaking children with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  10. The sensitivity of flowline models of tidewater glaciers to parameter uncertainty

    Directory of Open Access Journals (Sweden)

    E. M. Enderlin

    2013-10-01

    Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict change because the continuous grounding line tracking, high horizontal resolution, and physically based calving criterion that are essential to realistic modeling of tidewater glaciers can easily be incorporated into the models while maintaining high computational efficiency. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e., steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal over-deepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ~ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by

  11. Running vacuum cosmological models: linear scalar perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: elduartep@usp.br, E-mail: tamayo@if.usp.br [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)

    2017-08-01

    In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions, typically denoted by Λ(H²) or Λ(R). Such models assume an equation of state for the vacuum given by P̄_Λ = −ρ̄_Λ, relating its background pressure P̄_Λ with its mean energy density ρ̄_Λ ≡ Λ/(8πG). This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ̄_Λ = Σᵢ ρ̄_Λᵢ. Each vacuum component Λᵢ is associated and interacting with one of the i matter components at both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ(H²) scenario the vacuum is coupled with every matter component, whereas the Λ(R) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
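    The decomposition described in this record can be written compactly in standard notation (overbars denote background quantities). The source terms Q_i below are generic placeholders for the vacuum-matter energy exchange of each pair; their specific form is model dependent and is not given in the abstract:

```latex
\bar\rho_\Lambda \equiv \frac{\Lambda}{8\pi G}
  = \sum_i \bar\rho_{\Lambda i},
\qquad
\bar P_{\Lambda i} = -\,\bar\rho_{\Lambda i},
\qquad
\dot{\bar\rho}_i + 3H\,(1+w_i)\,\bar\rho_i = Q_i,
\qquad
\dot{\bar\rho}_{\Lambda i} = -\,Q_i .
```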

  12. Beyond an Assumed Mother-Child Symbiosis in Nutritional Guidelines: The Everyday Reasoning behind Complementary Feeding Decisions

    Science.gov (United States)

    Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte

    2014-01-01

    Researchers question the implications of the way in which "motherhood" is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the…

  13. Far-infrared irradiation drying behavior of typical biomass briquettes

    International Nuclear Information System (INIS)

    Chen, N.N.; Chen, M.Q.; Fu, B.A.; Song, J.J.

    2017-01-01

    Infrared radiation drying behaviors of four typical biomass briquettes (populus tomentosa leaves, cotton stalk, spent coffee grounds and eucalyptus bark) were investigated on a lab-scale setup. The effect of radiation source temperatures (100–200 °C) on the far-infrared drying kinetics and heat transfer of the samples was addressed. As the temperature went up from 100 °C to 200 °C, the time required to dry the four biomass briquettes decreased by about 59–66%, the average temperatures of the four biomass briquettes increased by about 33–39 °C, and the average radiation heat transfer fluxes increased by about 3.3 times (3.7 times for the leaves). The specific energy consumptions were 0.622–0.849 kWh kg⁻¹. The Modified Midilli model best represented the change in moisture ratio of the briquettes. The activation energies for the briquettes in the first falling-rate stage were between 20.35 and 24.83 kJ mol⁻¹, while those in the second falling-rate stage were between 17.89 and 21.93 kJ mol⁻¹. The activation energy for the eucalyptus bark briquette was the lowest in both falling-rate stages, and that for the cotton stalk briquette was lower than those of the remaining two briquettes. - Highlights: • Far-infrared drying behaviors of four typical biomass briquettes were addressed. • The effect of radiation source temperatures on IR drying kinetics was assessed. • The radiation heat transfer flux between the sample and heater was evaluated. • The Midilli model best represented the drying process of the samples.
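    The Arrhenius analysis behind the quoted activation energies can be sketched numerically. The rate constants here are synthetic, generated with an assumed Ea of 22 kJ/mol; they are illustrative, not the measured drying constants:

```python
import numpy as np

# Arrhenius law for a drying-rate constant: k = k0 * exp(-Ea / (R*T)).
# Synthetic k values at the three radiator temperatures (assumed data).
R = 8.314                                     # J/(mol K)
T = np.array([373.15, 423.15, 473.15])        # 100, 150, 200 °C in kelvin
Ea_true, k0 = 22e3, 50.0
k = k0 * np.exp(-Ea_true / (R * T))

# ln k = ln k0 - (Ea/R) * (1/T): the slope of ln k vs 1/T gives -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R
print(round(Ea_est / 1e3, 2))  # → 22.0 (kJ/mol)
```

    The same linear fit applied to measured rate constants in each falling-rate stage yields the stage-wise activation energies reported in the abstract.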

  14. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  15. Rapidly re-computable EEG (electroencephalography) forward models for realistic head shapes

    International Nuclear Information System (INIS)

    Ermer, J.J.; Mosher, J.C.; Baillet, S.; Leahy, R.M.

    2001-01-01

    Solution of the EEG source localization (inverse) problem utilizing model-based methods typically requires a significant number of forward model evaluations. For subspace based inverse methods like MUSIC (6), the total number of forward model evaluations can often approach an order of 10³ or 10⁴. Techniques based on least-squares minimization may require significantly more evaluations. The observed set of measurements over an M-sensor array is often expressed as a linear forward spatio-temporal model of the form: F = GQ + N (1) where the observed forward field F (M-sensors x N-time samples) can be expressed in terms of the forward model G, a set of dipole moment(s) Q (3xP-dipoles x N-time samples) and additive noise N. Because of their simplicity, ease of computation, and relatively good accuracy, multi-layer spherical models (7) (or fast approximations described in (1), (7)) have traditionally been the 'forward model of choice' for approximating the human head. However, approximation of the human head via a spherical model does have several key drawbacks. By its very shape, the use of a spherical model distorts the true distribution of passive currents in the skull cavity. Spherical models also require that the sensor positions be projected onto the fitted sphere (Fig. 1), resulting in a distortion of the true sensor-dipole spatial geometry (and ultimately the computed surface potential). The use of a single 'best-fitted' sphere has the added drawback of incomplete coverage of the inner skull region, often ignoring areas such as the frontal cortex. In practice, this problem is typically countered by fitting additional sphere(s) to those region(s) not covered by the primary sphere. The use of these additional spheres results in added complication to the forward model. Using high-resolution spatial information obtained via X-ray CT or MR imaging, a realistic head model can be formed by tessellating the head into a set of contiguous regions (typically the scalp
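    The linear forward model in Eq. (1) can be illustrated with synthetic data. The dimensions, the random stand-in for the lead field G, and the noise level below are assumptions for the sketch, not values from a real head model:

```python
import numpy as np

# F = G Q + N: M sensors, P dipoles (3 moment components each), T time samples.
rng = np.random.default_rng(1)
M, P, T = 32, 2, 100
G = rng.normal(size=(M, 3 * P))          # forward (lead-field) matrix
Q = rng.normal(size=(3 * P, T))          # dipole moment time series
noise = 1e-3 * rng.normal(size=(M, T))   # additive sensor noise
F = G @ Q + noise                        # observed surface potentials

# Given G for a candidate source configuration (one "forward model
# evaluation"), the dipole moments are recovered by least squares:
Q_hat, *_ = np.linalg.lstsq(G, F, rcond=None)
print(np.allclose(Q_hat, Q, atol=0.01))  # → True
```

    Inverse methods such as MUSIC repeat this for many candidate source locations, which is why the 10³–10⁴ forward evaluations mentioned above dominate the cost of localization.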

  16. Tolerance-based interaction: a new model targeting opinion formation and diffusion in social networks

    Directory of Open Access Journals (Sweden)

    Alexandru Topirceanu

    2016-01-01

    Full Text Available One of the main motivations behind social network analysis is the quest for understanding opinion formation and diffusion. Previous models have limitations, as they typically assume opinion interaction mechanisms based on thresholds which are either fixed or evolve according to a random process that is external to the social agent. Indeed, our empirical analysis on large real-world datasets such as Twitter, Meme Tracker, and Yelp uncovers previously unaccounted-for dynamic phenomena at population level, namely the existence of distinct opinion formation phases and social balancing. We also reveal that a phase transition from an erratic behavior to social balancing can be triggered by network topology and by the ratio of opinion sources. Consequently, in order to build a model that properly accounts for these phenomena, we propose a new (individual-level) opinion interaction model based on tolerance. As opposed to the existing opinion interaction models, the new tolerance model assumes that an individual's inner willingness to accept new opinions evolves over time according to basic human traits. Finally, by employing discrete event simulation on diverse social network topologies, we validate our opinion interaction model and show that, although the network size and opinion source ratio are important, the phase transition to social balancing is mainly fostered by the democratic structure of the small-world topology.
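    A minimal simulation in the spirit of the tolerance idea can illustrate the mechanism. The update rule, the decay law, the well-mixed interaction pattern, and all parameters below are assumptions for this sketch, not the paper's actual model:

```python
import random

# Each agent holds an opinion in [0, 1] and a "tolerance" theta that sets how
# far it moves toward an interlocutor's opinion; theta decays as interactions
# accumulate, so willingness to change evolves over time (the key departure
# from fixed-threshold models). All parameters are assumed.
random.seed(7)
n = 50
opinion = [random.random() for _ in range(n)]
theta = [1.0] * n                      # initial willingness to change
decay = 0.999                          # per-interaction tolerance decay

spread0 = max(opinion) - min(opinion)
for _ in range(5000):
    i = random.randrange(n)
    j = random.randrange(n)            # random interlocutor (well-mixed)
    if i != j:
        opinion[i] += theta[i] * 0.5 * (opinion[j] - opinion[i])
        theta[i] *= decay              # agent grows less receptive over time

spread = max(opinion) - min(opinion)
print(spread < spread0)  # opinions contract toward consensus → True
```

    Because each update is a convex combination, opinions never leave the initial hull; replacing the well-mixed partner choice with a small-world or scale-free neighbor structure is where topology effects like those in the abstract would enter.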

  17. 24 CFR 1000.24 - If an Indian tribe assumes environmental review responsibility, how will HUD assist the Indian...

    Science.gov (United States)

    2010-04-01

    ...? 1000.24 Section 1000.24 Housing and Urban Development Regulations Relating to Housing and Urban... URBAN DEVELOPMENT NATIVE AMERICAN HOUSING ACTIVITIES General § 1000.24 If an Indian tribe assumes...

  18. Amyloid β Enhances Typical Rodent Behavior While It Impairs Contextual Memory Consolidation.

    Science.gov (United States)

    Salgado-Puga, Karla; Prado-Alcalá, Roberto A; Peña-Ortega, Fernando

    2015-01-01

    Alzheimer's disease (AD) is associated with an early hippocampal dysfunction, which is likely induced by an increase in soluble amyloid beta peptide (Aβ). This hippocampal failure contributes to the initial memory deficits observed both in patients and in AD animal models and possibly to the deterioration in activities of daily living (ADL). One typical rodent behavior that has been proposed as a hippocampus-dependent assessment model of ADL in mice and rats is burrowing. Despite the fact that AD transgenic mice show some evidence of reduced burrowing, it has not been yet determined whether or not Aβ can affect this typical rodent behavior and whether this alteration correlates with the well-known Aβ-induced memory impairment. Thus, the purpose of this study was to test whether or not Aβ affects burrowing while inducing hippocampus-dependent memory impairment. Surprisingly, our results show that intrahippocampal application of Aβ increases burrowing while inducing memory impairment. We consider that this Aβ-induced increase in burrowing might be associated with a mild anxiety state, which was revealed by increased freezing behavior in the open field, and conclude that Aβ-induced hippocampal dysfunction is reflected in the impairment of ADL and memory, through mechanisms yet to be determined.

  19. Early Freezing of Gait: Atypical versus Typical Parkinson Disorders

    Directory of Open Access Journals (Sweden)

    Abraham Lieberman

    2015-01-01

    Full Text Available In 18 months, 850 patients were referred to the Muhammad Ali Parkinson Center (MAPC). Among them, 810 patients had typical Parkinson disease (PD) and 212 had PD for ≤5 years. Among the 212 patients with early PD, 27 (12.7%) had freezing of gait (FOG). Forty of the 850 had atypical parkinsonism. Among these 40 patients, all of whom had symptoms for ≤5 years, 12 (30.0%) had FOG. FOG improved with levodopa in 21/27 patients with typical PD but did not improve in the 12 patients with atypical parkinsonism. FOG was associated with falls in both groups of patients. We believe that FOG unresponsive to levodopa in typical PD resembles FOG in atypical parkinsonism. We thus compared the 6 typical PD patients with FOG unresponsive to levodopa plus the 12 patients with atypical parkinsonism with the 21 patients with typical PD responsive to levodopa. We compared them by tests of locomotion and postural stability. Among the patients with FOG unresponsive to levodopa, postural stability was more impaired than locomotion. This finding leads us to believe that, in these patients, postural stability, not locomotion, is the principal problem underlying FOG.

  20. Generation of a typical meteorological year for Hong Kong

    International Nuclear Information System (INIS)

    Chan, Apple L.S.; Chow, T.T.; Fong, Square K.F.; Lin, John Z.

    2006-01-01

    Weather data can vary significantly from year to year. There is a need to derive typical meteorological year (TMY) data to represent the long-term typical weather condition over a year, which is one of the crucial factors for successful building energy simulation. In this paper, various types of typical weather data sets including the TMY, TMY2, WYEC, WYEC2, WYEC2W, WYEC2T and IWEC were reviewed. The Finkelstein-Schafer statistical method was applied to analyze the hourly measured weather data of a 25-year period (1979-2003) in Hong Kong and select representative typical meteorological months (TMMs). The cumulative distribution function (CDF) for each year was compared with the CDF for the long-term composite of all the years in the period for four major weather indices including dry bulb temperature, dew point temperature, wind speed and solar radiation. Typical months for each of the 12 calendar months from the period of years were selected by choosing the one with the smallest deviation from the long-term CDF. The 12 TMMs selected from the different years were used for formulation of a TMY for Hong Kong.
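    The Finkelstein-Schafer selection step can be sketched for a single weather index. The synthetic data and single-index comparison are simplifying assumptions; the real procedure compares weighted CDFs of four indices over 25 years of hourly data:

```python
import numpy as np

# Candidate years of hourly dry-bulb temperature for one calendar month
# (synthetic, with a slight warming trend across 1979-2003).
rng = np.random.default_rng(3)
years = {y: rng.normal(20 + 0.1 * (y - 1990), 3, size=720)
         for y in range(1979, 2004)}
pooled = np.sort(np.concatenate(list(years.values())))  # long-term composite

def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at points x."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

def fs_statistic(sample, pooled):
    """Finkelstein-Schafer: mean absolute CDF deviation from the composite."""
    return np.mean(np.abs(ecdf(sample, pooled) - ecdf(pooled, pooled)))

# The typical meteorological month is the candidate year whose CDF
# deviates least from the long-term CDF.
best_year = min(years, key=lambda y: fs_statistic(years[y], pooled))
```

    Repeating this for each of the 12 calendar months, possibly over several weighted indices, yields the 12 TMMs that are concatenated into the TMY.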

  1. Stored object knowledge and the production of referring expressions: The case of color typicality

    Directory of Open Access Journals (Sweden)

    Hans eWesterbeek

    2015-07-01

    Full Text Available When speakers describe objects with atypical properties, do they include these properties in their referring expressions, even when that is not strictly required for unique referent identification? Based on previous work, we predict that speakers mention the color of a target object more often when the object is atypically colored, compared to when it is typical. Taking literature from object recognition and visual attention into account, we further hypothesize that this behavior is proportional to the degree to which a color is atypical, and whether color is a highly diagnostic feature in the referred-to object's identity. We investigate these expectations in two language production experiments, in which participants referred to target objects in visual contexts. In Experiment 1, we find a strong effect of color typicality: less typical colors for target objects predict higher proportions of referring expressions that include color. In Experiment 2, we manipulated objects with more complex shapes, for which color is less diagnostic, and we find that the color typicality effect is moderated by color diagnosticity: it is strongest for high-color-diagnostic objects (i.e., objects with a simple shape). These results suggest that the production of atypical color attributes results from a contrast with stored knowledge, an effect which is stronger when color is more central to object identification. Our findings offer evidence for models of reference production that incorporate general object knowledge, in order to be able to capture these effects of typicality on determining the content of referring expressions.

  2. Dynamic assessment of nonlinear typical section aeroviscoelastic systems using fractional derivative-based viscoelastic model

    Science.gov (United States)

    Sales, T. P.; Marques, Flávio D.; Pereira, Daniel A.; Rade, Domingos A.

    2018-06-01

    Nonlinear aeroelastic systems are prone to the appearance of limit cycle oscillations, bifurcations, and chaos. Such problems are of increasing concern in aircraft design since there is the need to control nonlinear instabilities and improve safety margins, at the same time as aircraft are subjected to increasingly critical operational conditions. On the other hand, in spite of the fact that viscoelastic materials have already been successfully used for the attenuation of undesired vibrations in several types of mechanical systems, a small number of research works have addressed the feasibility of exploring the viscoelastic effect to improve the behavior of nonlinear aeroelastic systems. In this context, the objective of this work is to assess the influence of viscoelastic materials on the aeroelastic features of a three-degrees-of-freedom typical section with hardening structural nonlinearities. The equations of motion are derived accounting for the presence of viscoelastic materials introduced in the resilient elements associated to each degree-of-freedom. A constitutive law based on fractional derivatives is adopted, which allows the modeling of temperature-dependent viscoelastic behavior in time and frequency domains. The unsteady aerodynamic loading is calculated based on the classical linear potential theory for arbitrary airfoil motion. The aeroelastic behavior is investigated through time domain simulations, and subsequent frequency transformations, from which bifurcations are identified from diagrams of limit cycle oscillations amplitudes versus airspeed. The influence of the viscoelastic effect on the aeroelastic behavior, for different values of temperature, is also investigated. The numerical simulations show that viscoelastic damping can increase the flutter speed and reduce the amplitudes of limit cycle oscillations. 
These results prove the potential of viscoelastic materials to increase the safety margins of aircraft components regarding aeroelastic

  3. Predictors and consequences of gender typicality: the mediating role of communality.

    Science.gov (United States)

    DiDonato, Matthew D; Berenbaum, Sheri A

    2013-04-01

    Considerable work has shown the benefits for psychological health of being gender typed (i.e., perceiving oneself in ways that are consistent with one's sex). Nevertheless, little is known about the reasons for the link. In two studies of young adults (total N = 673), we studied (1) the ways in which gender typing is predicted from gender-related interests and personal qualities, and (2) links between gender typing and adjustment (self-esteem and negative emotionality). In the first study, gender typicality was positively predicted by a variety of gender-related characteristics and by communal traits, a female-typed characteristic; gender typicality was also positively associated with adjustment. To clarify the role of communality in predicting gender typicality and its link with adjustment, we conducted a follow-up study examining both gender typicality and "university typicality." Gender typicality was again predicted by gender-related characteristics and communality, and associated with adjustment. Further, university typicality was also predicted by communality and associated with adjustment. Mediation analyses showed that feelings of communality were partly responsible for the links between gender/university typicality and adjustment. Thus, the psychological benefits suggested to accrue from gender typicality may not be specific to gender, but rather may reflect the benefits of normativity in general. These findings were discussed in relation to the broader literature on the relation between identity and adjustment.

  4. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation. A new elasto-viscoplastic constitutive model

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Ming-Song; Li, Kuo-Kuo [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China); Lin, Y.C. [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China); Central South University, Light Alloy Research Institute, Changsha (China); Chen, Jian [Changsha University of Science and Technology, School of Energy and Power Engineering, Key Laboratory of Efficient and Clean Energy Utilization, Changsha (China)

    2016-09-15

    The nonlinear unloading behavior of a typical Ni-based superalloy is investigated by hot compressive experiments with intermediate unloading-reloading cycles. The experimental results show that there are at least four types of unloading curves. However, it is found that there is no essential difference among the four types of unloading curves. The variation curves of instantaneous Young's modulus with stress for all types of unloading curves include four segments, i.e., three linear elastic segments (segments I, II, and III) and one subsequent nonlinear elastic segment (segment IV). The instantaneous Young's modulus of segments I and III is approximately equal to that of the reloading process, while smaller than that of segment II. In the nonlinear elastic segment, the instantaneous Young's modulus linearly decreases with the decrease in stress. In addition, the relationship between stress and strain rate can be accurately expressed by the hyperbolic sine function. This study comprises two parts. In the present part, the characteristics of the unloading curves are discussed in detail, and a new elasto-viscoplastic constitutive model is proposed to describe the nonlinear unloading behavior based on the experimental findings. In the latter part (Chen et al. in Appl Phys A. doi:10.1007/s00339-016-0385-0, 2016), the effects of deformation temperature, strain rate, and pre-strain on the parameters of this new constitutive model are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate, and pre-strain. (orig.)
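    The hyperbolic-sine stress/strain-rate relation mentioned above is commonly written in Garofalo form. The constants below are order-of-magnitude assumptions for illustration, not fitted values for this superalloy:

```python
import numpy as np

# Garofalo (hyperbolic sine) law:
#   eps_dot = A * sinh(alpha*sigma)**n * exp(-Q/(R*T))
# A, alpha, n and Q are assumed, illustrative values only.
A, alpha, n, Q, R = 1e10, 0.004, 4.0, 400e3, 8.314

def strain_rate(sigma, T):
    """Strain rate (1/s) at stress sigma (MPa) and temperature T (K)."""
    return A * np.sinh(alpha * sigma) ** n * np.exp(-Q / (R * T))

def flow_stress(rate, T):
    """Invert the sinh law for stress via the Zener-Hollomon parameter Z."""
    Z = rate * np.exp(Q / (R * T))
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

sigma = flow_stress(0.01, 1223.0)  # stress for 0.01 1/s at ~950 °C
print(np.isclose(strain_rate(sigma, 1223.0), 0.01))  # round-trip check → True
```

    The closed-form inversion through Z is why the sinh form is preferred over a pure power law or exponential when fitting hot-deformation data across wide stress ranges.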

  5. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation. A new elasto-viscoplastic constitutive model

    International Nuclear Information System (INIS)

    Chen, Ming-Song; Li, Kuo-Kuo; Lin, Y.C.; Chen, Jian

    2016-01-01

    The nonlinear unloading behavior of a typical Ni-based superalloy is investigated by hot compressive experiments with intermediate unloading-reloading cycles. The experimental results show that there are at least four types of unloading curves. However, it is found that there is no essential difference among the four types of unloading curves. The variation curves of instantaneous Young's modulus with stress for all types of unloading curves include four segments, i.e., three linear elastic segments (segments I, II, and III) and one subsequent nonlinear elastic segment (segment IV). The instantaneous Young's modulus of segments I and III is approximately equal to that of the reloading process, while smaller than that of segment II. In the nonlinear elastic segment, the instantaneous Young's modulus linearly decreases with the decrease in stress. In addition, the relationship between stress and strain rate can be accurately expressed by the hyperbolic sine function. This study comprises two parts. In the present part, the characteristics of the unloading curves are discussed in detail, and a new elasto-viscoplastic constitutive model is proposed to describe the nonlinear unloading behavior based on the experimental findings. In the latter part (Chen et al. in Appl Phys A. doi:10.1007/s00339-016-0385-0, 2016), the effects of deformation temperature, strain rate, and pre-strain on the parameters of this new constitutive model are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate, and pre-strain. (orig.)

  6. Examination of some assumed severe reactor accidents at the Olkiluoto nuclear power plant

    International Nuclear Information System (INIS)

    Pekkarinen, E.; Rossi, J.

    1989-02-01

    Knowledge and analysis methods of severe accidents at nuclear power plants, and of the subsequent response of the primary system and containment, have been developed in the last few years to the extent that realistic source terms of specified accident sequences can be calculated for the Finnish nuclear power plants. The objective of this investigation was to calculate the source terms and off-site consequences brought about by some selected severe accident sequences initiated by the total loss of on-site and off-site AC power at the Olkiluoto nuclear power plant. The results describing the estimated off-site health risks are expressed as conditional, assuming that the accident has taken place, because the probabilities of occurrence of the accident sequences considered have not been analysed in this study. The range and probabilities of occurrence of health detriments are considered by calculating consequences in different weather conditions and taking into account the annual frequency of each weather condition and the statistical population distribution. The calculational results indicate that the reactor building provides additional holdup and deposition of radioactive substances (except noble gases) released from the containment. Furthermore, the release fractions of the core inventory to the environment of volatile fission products such as iodine, cesium and tellurium remain under 0.03. No early health effects are predicted for the surrounding population in case the assumed short-term countermeasures are performed effectively. Acute health effects are extremely improbable even without any active countermeasure. By reducing the long-term exposure from contaminated agricultural products, the collective dose relative to natural long-term background radiation, for instance in the 30-degree sector towards southern Finland up to a distance of 300 kilometers, would be expected to increase by 2-20 percent depending on the release considered

  7. Thermal room modelling adapted to the test of HVAC control systems; Modele de zone adapte aux essais de regulateurs de systemes de chauffage et de climatisation

    Energy Technology Data Exchange (ETDEWEB)

    Riederer, P.

    2002-01-15

Room models currently used for controller tests assume the room air to be perfectly mixed. A new room model is developed, assuming non-homogeneous room conditions and distinguishing between different sensor positions. From measurements in real test rooms and detailed CFD simulations, a list of convective phenomena is obtained that has to be considered in the development of a model for a room equipped with different HVAC systems. The zonal modelling approach, which divides the room air into several sub-volumes, is chosen, since it is able to represent the important convective phenomena imposed on the HVAC system. The convective room model is divided into two parts: a zonal model representing the air in the occupant zone, and a second model providing the conditions at typical sensor positions. Using this approach, the comfort conditions in the occupant zone can be evaluated, as well as the impact of different sensor positions. The model is validated for a test room equipped with different HVAC systems. Sensitivity analysis is carried out on the main parameters of the model. Performance assessment and energy consumption are then compared for different sensor positions in a room equipped with different HVAC systems. The results are also compared with those obtained when a well-mixed model is used. A main conclusion of these tests is that the differences obtained when changing the position of the controller's sensor are a function of the HVAC system and controller type. The differences are generally small in terms of thermal comfort but significant in terms of overall energy consumption. For different HVAC systems, the cases are listed in which the use of a simplified model is not recommended. (author)

  8. The use of genetic algorithms to model protoplanetary discs

    Science.gov (United States)

    Hetem, Annibal; Gregorio-Hetem, Jane

    2007-12-01

    The protoplanetary discs of T Tauri and Herbig Ae/Be stars have previously been studied using geometric disc models to fit their spectral energy distribution (SED). The simulations provide a means to reproduce the signatures of various circumstellar structures, which are related to different levels of infrared excess. With the aim of improving our previous model, which assumed a simple flat-disc configuration, we adopt here a reprocessing flared-disc model that assumes hydrostatic, radiative equilibrium. We have developed a method to optimize the parameter estimation based on genetic algorithms (GAs). This paper describes the implementation of the new code, which has been applied to Herbig stars from the Pico dos Dias Survey catalogue, in order to illustrate the quality of the fitting for a variety of SED shapes. The star AB Aur was used as a test of the GA parameter estimation, and demonstrates that the new code reproduces successfully a canonical example of the flared-disc model. The GA method gives a good quality of fit, but the range of input parameters must be chosen with caution, as unrealistic disc parameters can be derived. It is confirmed that the flared-disc model fits the flattened SEDs typical of Herbig stars; however, embedded objects (increasing SED slope) and debris discs (steeply decreasing SED slope) are not well fitted with this configuration. Even considering the limitation of the derived parameters, the automatic process of SED fitting provides an interesting tool for the statistical analysis of the circumstellar luminosity of large samples of young stars.
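The GA optimization loop described above can be sketched in a few lines. This is a hedged illustration, not the authors' code: the parameter ranges, the fitness function, and the truncation-selection scheme are placeholder assumptions standing in for the disc parameters and chi-square SED fit of the paper.

```python
import random

# Illustrative parameter ranges for a hypothetical three-parameter disc model
PARAM_RANGES = [(0.1, 10.0), (0.0, 1.0), (1.0, 3.0)]

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in PARAM_RANGES]

def crossover(a, b):
    # per-gene uniform crossover between two parents
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.1):
    # resample each gene within its range with probability `rate`
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(ind, PARAM_RANGES)]

def evolve(fitness, pop_size=50, generations=100):
    """Minimize `fitness` (e.g. a chi-square between model and observed SED)."""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # lower fitness is better
        parents = pop[:pop_size // 2]         # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)
```

As the abstract cautions, the ranges in PARAM_RANGES must be chosen carefully, since the GA will happily converge to unrealistic disc parameters if the search box allows them.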

  9. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects in...

  10. The Modelling of Axially Translating Flexible Beams

    Science.gov (United States)

    Theodore, R. J.; Arakeri, J. H.; Ghosal, A.

    1996-04-01

The axially translating flexible beam with a prismatic joint can be modelled by using the Euler-Bernoulli beam equation together with the convective terms. In general, the method of separation of variables cannot be applied to solve this partial differential equation. In this paper, a non-dimensional form of the Euler-Bernoulli beam equation is presented, obtained by using the concept of group velocity, together with the conditions under which separation of variables and the assumed-modes method can be used. The use of clamped-mass boundary conditions leads to a time-dependent frequency equation for the translating flexible beam. A novel method is presented for solving this time-dependent frequency equation by using a differential form of the frequency equation. The assumed-modes/Lagrangian formulation of dynamics is employed to derive closed-form equations of motion. It is shown by using Lyapunov's first method that the dynamic responses of the flexural modal variables become unstable during retraction of the flexible beam, while the dynamic response during extension of the beam is stable. Numerical simulation results are presented for the transverse vibration induced by uniform axial motion of a typical flexible beam.

  11. A Hybrid Method for Generation of Typical Meteorological Years for Different Climates of China

    Directory of Open Access Journals (Sweden)

    Haixiang Zang

    2016-12-01

Full Text Available Since a representative dataset of the climatological features of a location is important for calculations in many fields, such as solar energy systems, agriculture, meteorology and architecture, there is a need to investigate the methodology for generating a typical meteorological year (TMY). In this paper, a hybrid method with mixed treatment of selected results from the Danish method, the Festa-Ratto method, and the modified typical meteorological year method is proposed to determine typical meteorological years for 35 locations in six different climatic zones of China (Tropical Zone, Subtropical Zone, Warm Temperate Zone, Mid Temperate Zone, Cold Temperate Zone and Tibetan Plateau Zone). Measured weather data (air dry-bulb temperature, air relative humidity, wind speed, pressure, sunshine duration and global solar radiation), covering the period 1994–2015, are obtained and applied in the process of forming the TMY. The TMY data and typical solar radiation data are investigated and analyzed in this study. It is found that the results of the hybrid method have better performance in terms of the long-term average measured data during the year than the other investigated methods. Moreover, the Gaussian process regression (GPR) model is recommended to forecast the monthly mean solar radiation using the last 22 years (1994–2015) of measured data.
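Methods in this TMY family typically rank candidate months with the Finkelstein-Schafer (FS) statistic, the mean absolute difference between a candidate month's empirical CDF and the long-term CDF of the same weather index. A minimal sketch of that selection step follows; the data structures and function names are ours, not the paper's implementation.

```python
def empirical_cdf(values):
    """Return the empirical CDF of `values` as a function."""
    s = sorted(values)
    n = len(s)
    return lambda x: sum(1 for v in s if v <= x) / n

def fs_statistic(candidate_days, long_term_days):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    candidate month's CDF and the long-term CDF, at the candidate's values."""
    lt_cdf = empirical_cdf(long_term_days)
    c_cdf = empirical_cdf(candidate_days)
    return sum(abs(c_cdf(x) - lt_cdf(x)) for x in candidate_days) / len(candidate_days)

def pick_typical_month(candidates_by_year, long_term_days):
    """Select the year whose month best matches the long-term distribution."""
    return min(candidates_by_year,
               key=lambda y: fs_statistic(candidates_by_year[y], long_term_days))
```

In a full implementation the FS statistics of several indices (dry-bulb temperature, humidity, wind speed, radiation) are combined with weights before the best year is chosen; the sketch shows a single index only.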

  12. Identifying Typical Movements Among Indoor Objects

    DEFF Research Database (Denmark)

    Radaelli, Laura; Sabonis, Dovydas; Lu, Hua

    2013-01-01

    With the proliferation of mobile computing, positioning systems are becoming available that enable indoor location-based services. As a result, indoor tracking data is also becoming available. This paper puts focus on one use of such data, namely the identification of typical movement patterns...

  13. Comparison of Meteoroid Flux Models for Near Earth Space

    Science.gov (United States)

    Drolshagen, G.; Liou, J.-C.; Dikarev, V.; Landgraf, M.; Krag, H.; Kuiper, W.

    2007-01-01

Over the last decade several new models for the sporadic interplanetary meteoroid flux have been developed. These include the Meteoroid Engineering Model (MEM), the Divine-Staubach model and the Interplanetary Meteoroid Engineering Model (IMEM). They typically cover mass ranges from 10⁻¹² g (or lower) to 1 g and are applicable for model-specific Sun distance ranges between 0.2 A.U. and 10 A.U. Near 1 A.U., averaged fluxes (over direction and velocities) for all these models are tuned to the well-established interplanetary model by Grün et al. However, in many respects these models differ considerably. Examples are the velocity and directional distributions and the assumed meteoroid sources. In this paper flux predictions by the various models for Earth-orbiting spacecraft are compared. The main differences are presented and analysed. The persisting differences, even for near-Earth space, can be seen as surprising in view of the numerous ground-based (optical, radar) and in-situ (captured IDPs, in-situ detectors and analysis of retrieved hardware) measurements and simulations. Remaining uncertainties and potential additional studies to overcome the existing model discrepancies are discussed.

  14. Herpes zoster - typical and atypical presentations.

    Science.gov (United States)

    Dayan, Roy Rafael; Peleg, Roni

    2017-08-01

Varicella-zoster virus infection is an intriguing medical entity that involves many medical specialties including infectious diseases, immunology, dermatology, and neurology. It can affect patients from early childhood to old age. Its treatment requires expertise in pain management and psychological support. While varicella is caused by acute viremia, herpes zoster occurs after the dormant viral infection, involving the cranial nerve or sensory root ganglia, is re-activated and spreads orthodromically from the ganglion, via the sensory nerve root, to the innervated target tissue (skin, cornea, auditory canal, etc.). Typically, a single dermatome is involved, although two or three adjacent dermatomes may be affected. The lesions usually do not cross the midline. Herpes zoster can also present with unique or atypical clinical manifestations, such as glioma, zoster sine herpete and bilateral herpes zoster, which can be a challenging diagnosis even for experienced physicians. We discuss the epidemiology, pathophysiology, diagnosis and management of herpes zoster, typical and atypical presentations.

  15. Fitting and interpreting continuous-time latent Markov models for panel data.

    Science.gov (United States)

    Lange, Jane M; Minin, Vladimir N

    2013-11-20

Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. Modeling the disease process as a standard continuous-time Markov chain (CTMC) yields tractable likelihoods, but the assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.
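To make the exponential-sojourn assumption concrete, a CTMC trajectory can be simulated by drawing an exponential holding time in each state and then jumping proportionally to the off-diagonal rates. The three-state rate matrix below is an invented example for illustration, not taken from the paper.

```python
import random

# Illustrative 3-state disease process (0 = healthy, 1 = ill, 2 = dead);
# the rate values are made up for demonstration.  Rows sum to zero.
Q = [[-0.5, 0.4, 0.1],
     [0.3, -0.6, 0.3],
     [0.0, 0.0, 0.0]]   # state 2 is absorbing

def simulate_ctmc(q, start, t_max):
    """Return one trajectory as a list of (time, state) pairs."""
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        exit_rate = -q[state][state]
        if exit_rate == 0.0:                  # absorbing state reached
            break
        t += random.expovariate(exit_rate)    # exponential sojourn time
        if t >= t_max:
            break
        # jump to a new state with probability proportional to its rate
        rates = [r if j != state else 0.0 for j, r in enumerate(q[state])]
        state = random.choices(range(len(rates)), weights=rates)[0]
        path.append((t, state))
    return path
```

The latent-CTMC idea of the paper replaces each observable disease state with several latent states of this kind, so that the observable sojourn time becomes a sum or mixture of exponentials rather than a single one.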

  16. Biosphere modelling for a deep radioactive waste repository: site-specific consideration of the groundwater-soil pathway

    International Nuclear Information System (INIS)

    Grogan, H.A.; Baeyens, B.; Mueller, H.; Dorp, F. van

    1991-07-01

Scenario evaluations indicate that groundwater is the most probable pathway for released radionuclides to reach the biosphere from a deep underground nuclear waste repository. This report considers a small valley in northern Switzerland where the transport of groundwater to surface soil might be possible. The hydrological situation has been examined to allow a system of compartments and fluxes to be produced for modelling this pathway with respect to the release of radionuclides from an underground repository. Assuming present-day conditions, the best-estimate surface soil concentrations are calculated by dividing the soil into two layers (deep soil, surface soil) and assuming an annual upward flux of 10 mm from the groundwater through the two soil layers. A constant unit activity concentration is assumed for the radionuclides in the groundwater. It is concluded that the resultant best-estimate values must still be considered to be biased on the conservative side, in view of the fact that the more typical situation is likely to be that no groundwater reaches the surface soil. Upper and lower estimates for the surface soil radionuclide concentrations are based on the parameter perturbation results, which were carried out for three key parameters, i.e. precipitation surplus, upward flux and solid-liquid distribution coefficients (Kd). It is noted that attention must be given to the functional relationships which exist between various model parameters. Upper estimates for the surface soil concentration are determined assuming a higher annual upward flux (100 mm) as well as a more conservative Kd value compared with the base case. This gives rise to surface soil concentrations more than two orders of magnitude higher than the best-estimate values. The lower estimates are more easily assigned, assuming that no activity reaches the surface soil via this pathway. (author) 18 figs., 4 tabs., refs
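The two-layer compartment idea can be sketched as a simple annual-step advection model. The transfer-rate formula and every parameter value below are illustrative placeholders, not the report's model; in particular, the sketch only shows how sorption (Kd) slows upward migration of activity in this simplified geometry.

```python
# Minimal annual-step sketch of the groundwater -> deep soil -> surface soil
# pathway.  All parameters are illustrative, not the report's values.

def simulate_soil(c_gw=1.0, flux=0.01, depth=0.25, kd=0.001,
                  theta=0.3, rho=1500.0, years=500):
    """Advect activity upward from groundwater through two soil layers.

    c_gw  : groundwater activity concentration (Bq/m3 of water)
    flux  : upward water flux (m/yr); 0.01 m/yr = 10 mm/yr as in the base case
    depth : thickness of each soil layer (m)
    kd    : solid-liquid distribution coefficient (m3/kg)
    theta : volumetric water content; rho : soil bulk density (kg/m3)
    Returns (deep, surface) layer inventories (Bq per m2 of column).
    """
    # fraction of a layer's inventory carried upward by the water each year;
    # sorption (kd) reduces the mobile fraction and slows migration
    k = flux / (depth * (theta + rho * kd))
    a_deep = a_surf = 0.0
    for _ in range(years):
        a_deep += flux * c_gw - k * a_deep
        a_surf += k * a_deep - k * a_surf
    return a_deep, a_surf
```

This toy model omits decay, leaching and root uptake; it is only meant to show the compartment-and-flux structure described in the abstract.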

  17. The influence of climatic changes on distribution pattern of six typical Kobresia species in Tibetan Plateau based on MaxEnt model and geographic information system

    Science.gov (United States)

    Hu, Zhongjun; Guo, Ke; Jin, Shulan; Pan, Huahua

    2018-01-01

The issue that climatic change has great influence on species distribution is currently of great interest in the field of biogeography. Six typical Kobresia species, which are high-quality forage for local husbandry, are selected from the alpine grassland of the Tibetan Plateau (TP) as research objects, and their distribution changes are modeled for four periods by using the MaxEnt model and GIS technology. The modeling results show that the distribution of these six typical Kobresia species in TP was strongly affected by two factors: the annual precipitation and the precipitation in the wettest and driest quarters of the year. The most suitable habitats of K. pygmeae were located in the area around Qinghai Lake, the Hengduan-Himalayan mountain area, and the hinterland of TP. The most suitable habitats of K. humilis were mainly located in the area around Qinghai Lake and the hinterland of TP during the Last Interglacial period, and gradually merged into a bigger area; K. robusta and K. tibetica were located in the area around Qinghai Lake and the hinterland of TP, but they did not integrate into one area all the time; K. capillifolia were located in the area around Qinghai Lake and extended to the southwest of the original distribution area, whereas K. macrantha were mainly distributed along the Himalayan mountain chain and had the smallest distribution area among them. All six Kobresia species can be divided into four types of "retreat/expansion" styles according to the changes of suitable habitat areas during the four periods. All these change styles are the result of long-term adaptations of the different species to local climate changes in regions of TP and show the complexity of the relationships between different species and climate. The research results have positive reference value for the protection of species diversity and the sustainable development of local husbandry in TP.

  18. A Context-Aware Model to Provide Positioning in Disaster Relief Scenarios

    Directory of Open Access Journals (Sweden)

    Daniel Moreno

    2015-09-01

    Full Text Available The effectiveness of the work performed during disaster relief efforts is highly dependent on the coordination of activities conducted by the first responders deployed in the affected area. Such coordination, in turn, depends on an appropriate management of geo-referenced information. Therefore, enabling first responders to count on positioning capabilities during these activities is vital to increase the effectiveness of the response process. The positioning methods used in this scenario must assume a lack of infrastructure-based communication and electrical energy, which usually characterizes affected areas. Although positioning systems such as the Global Positioning System (GPS have been shown to be useful, we cannot assume that all devices deployed in the area (or most of them will have positioning capabilities by themselves. Typically, many first responders carry devices that are not capable of performing positioning on their own, but that require such a service. In order to help increase the positioning capability of first responders in disaster-affected areas, this paper presents a context-aware positioning model that allows mobile devices to estimate their position based on information gathered from their surroundings. The performance of the proposed model was evaluated using simulations, and the obtained results show that mobile devices without positioning capabilities were able to use the model to estimate their position. Moreover, the accuracy of the positioning model has been shown to be suitable for conducting most first response activities.
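One simple way a device without positioning hardware can exploit context gathered from its surroundings is a weighted centroid of neighbour-reported positions. The inverse-distance weighting below is an illustrative assumption for the sketch, not the paper's algorithm.

```python
def estimate_position(neighbors):
    """Weighted-centroid position estimate from neighbour reports.

    neighbors : list of (x, y, d) tuples, where (x, y) is a neighbour's
    known position and d its estimated distance to us (e.g. from signal
    strength).  Closer neighbours are weighted more heavily.
    """
    weights = [1.0 / max(d, 1e-6) for _, _, d in neighbors]
    total = sum(weights)
    x = sum(w * n[0] for w, n in zip(weights, neighbors)) / total
    y = sum(w * n[1] for w, n in zip(weights, neighbors)) / total
    return x, y
```

The estimate degrades gracefully as neighbours disappear, which matches the infrastructure-less assumption of the disaster-relief scenario: any device that can still hear at least one positioned neighbour can produce a rough fix.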

  19. Building a better methane generation model: Validating models with methane recovery rates from 35 Canadian landfills.

    Science.gov (United States)

    Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E

    2009-07-01

The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 compared to 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models, with respective mean absolute errors relative to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f) and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates, and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated methane recovery rates; this comparison suggested that setting the waste divisor in the Scholl Canyon model between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f) the modified model had the lowest mean absolute error with a divisor of 1.5 (yielding 63 ± 45%) and 2.3 (yielding 57 ± 47%), respectively. These modified models reduced error and variability substantially, and both have a strong correlation of r = 0.92.
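The Scholl Canyon / LandGEM family is built on first-order decay of each year's waste cohort, and the divisor discussed above enters as a simple scaling of the waste term. A hedged sketch follows; the default k and L0 values are illustrative, not those used in the study.

```python
import math

def methane_generation(annual_waste, year, k=0.05, l0=100.0, divisor=1.0):
    """Methane generated in `year` (m3 CH4/yr) by first-order decay.

    annual_waste : tonnes of waste accepted in years 0, 1, 2, ...
    k            : decay rate constant (1/yr), illustrative default
    l0           : methane generation potential (m3 CH4/tonne), illustrative
    divisor      : waste divisor (1 = Scholl Canyon, 10 = LandGEM v2.01,
                   ~1.5-2.3 = the modified models discussed above)
    """
    q = 0.0
    for t, w in enumerate(annual_waste):      # each year's waste cohort
        age = year - t
        if age >= 0:                          # cohort already in place
            q += k * l0 * (w / divisor) * math.exp(-k * age)
    return q
```

Because the divisor scales output linearly, re-tuning it between 1 and 10, as the comparison suggests, shifts the whole generation curve without changing its shape.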

  20. Quantifying relative importance: Computing standardized effects in models with binary outcomes

    Science.gov (United States)

    Grace, James B.; Johnson, Darren; Lefcheck, Jonathan S.; Byrnes, Jarrett E.K.

    2018-01-01

Scientists commonly ask questions about the relative importances of processes, and then turn to statistical models for answers. Standardized coefficients are typically used in such situations, with the goal being to compare effects on a common scale. Traditional approaches to obtaining standardized coefficients were developed with idealized Gaussian variables in mind. When responses are binary, complications arise that impact standardization methods. In this paper, we review, evaluate, and propose new methods for standardizing coefficients from models that contain binary outcomes. We first consider the interpretability of unstandardized coefficients and then examine two main approaches to standardization. One approach, which we refer to as the Latent-Theoretical or LT method, assumes that underlying the binary observations there exists a latent, continuous propensity linearly related to the coefficients. A second approach, which we refer to as the Observed-Empirical or OE method, assumes responses are purely discrete and estimates error variance empirically via reference to a classical R² estimator. We also evaluate the standard formula for calculating standardized coefficients based on standard deviations. Criticisms of this practice have been persistent, leading us to propose an alternative formula that is based on user-defined “relevant ranges”. Finally, we implement all of the above in an open-source package for the statistical software R.
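For the logit link, the LT approach treats the latent error variance as π²/3, so a slope is standardized by the predictor's standard deviation over the latent-response standard deviation. A minimal sketch follows; the function and variable names are ours, not the package's API.

```python
import math

def sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def lt_standardize(coefs, X):
    """Latent-theoretical standardization of logistic-regression slopes.

    coefs : fitted slopes b_1..b_p (intercept excluded)
    X     : list of predictor columns, X[j] aligned with coefs[j]
    The latent error variance for the logit link is pi^2 / 3.
    """
    n = len(X[0])
    # linear predictor per observation (the intercept drops out of the variance)
    eta = [sum(b * X[j][i] for j, b in enumerate(coefs)) for i in range(n)]
    sd_latent = math.sqrt(sd(eta) ** 2 + math.pi ** 2 / 3)
    return [b * sd(X[j]) / sd_latent for j, b in enumerate(coefs)]
```

For a probit link the same sketch would use a latent error variance of 1 instead of π²/3.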

  1. For Your Local Eyes Only: Culture-Specific Face Typicality Influences Perceptions of Trustworthiness.

    Science.gov (United States)

    Sofer, Carmel; Dotsch, Ron; Oikawa, Masanori; Oikawa, Haruka; Wigboldus, Daniel H J; Todorov, Alexander

    2017-08-01

    Recent findings show that typical faces are judged as more trustworthy than atypical faces. However, it is not clear whether employment of typicality cues in trustworthiness judgment happens across cultures and if these cues are culture specific. In two studies, conducted in Japan and Israel, participants judged trustworthiness and attractiveness of faces. In Study 1, faces varied along a cross-cultural dimension ranging from a Japanese to an Israeli typical face. Own-culture typical faces were perceived as more trustworthy than other-culture typical faces, suggesting that people in both cultures employ typicality cues when judging trustworthiness, but that the cues, indicative of typicality, are culture dependent. Because perceivers may be less familiar with other-culture typicality cues, Study 2 tested the extent to which they rely on available facial information other than typicality, when judging other-culture faces. In Study 2, Japanese and Israeli faces varied from either Japanese or Israeli attractive to unattractive with the respective typical face at the midpoint. For own-culture faces, trustworthiness judgments peaked around own-culture typical face. However, when judging other-culture faces, both cultures also employed attractiveness cues, but this effect was more apparent for Japanese participants. Our findings highlight the importance of culture when considering the effect of typicality on trustworthiness judgments.

  2. Research on Fuel Consumption of Hybrid Bulldozer under Typical Duty Cycle

    Science.gov (United States)

    Song, Qiang; Wang, Wen-Jun; Jia, Chao; Yao, You-Liang; Wang, Sheng-Bo

The hybrid drive bulldozer adopts a dual-motor independent drive system with an engine-generator assembly as its power source. The mathematical model of the whole system is constructed on the MATLAB/Simulink software platform. Then, according to velocity data gained from a real test experiment, a typical duty cycle is built up. Finally the fuel consumption of the bulldozer is calculated over this duty cycle. Simulation results show that, compared with the traditional mechanical one, the hybrid electric drive system can save up to 16% of fuel and therefore indicates great potential for improving fuel economy.

  3. Comparison of Geometrical Layouts for a Multi-Box Aerosol Model from a Single-Chamber Dispersion Study

    Directory of Open Access Journals (Sweden)

    Alexander C. Ø. Jensen

    2018-04-01

Full Text Available Models are increasingly used to estimate and pre-emptively calculate the occupational exposure to airborne released particulate matter. Typical two-box models assume instant and fully mixed air volumes, which can potentially cause issues in cases with fast processes, slow air mixing, and/or large volumes. In this study, we present an aerosol dispersion model and validate it by comparing the modelled concentrations with concentrations measured during chamber experiments. We investigated whether a better estimation of concentrations was possible by using different geometrical layouts rather than a typical two-box layout. One-box, two-box, and two three-box layouts were used. The one-box model was found to underestimate the concentrations close to the source, while overestimating the concentrations in the far field. The two-box model layout performed well, based on comparisons from the chamber study, in systems with a steady source concentration for both slow and fast mixing. The three-box layout was found to better estimate the concentrations and the timing of the peaks for fluctuating concentrations than the one-box or two-box layouts under relatively slow mixing conditions. This finding suggests that industry-relevant scaled volumes should be tested in practice to gain more knowledge about when to use the two-box or the three-box layout schemes for multi-box models.
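A two-box (near-field/far-field) layout of the kind compared above reduces to two coupled mass balances, one per fully mixed box. The volumes, flows and emission rate below are illustrative placeholders, not the chamber study's values.

```python
def two_box(emission, v_nf, v_ff, beta, q, dt=1.0, steps=20000):
    """Euler integration of fully mixed near-field (NF) and far-field (FF)
    concentrations (mg/m3).

    emission : source mass rate released into the NF (mg/s)
    v_nf, v_ff : NF and FF volumes (m3)
    beta     : NF <-> FF air exchange flow (m3/s)
    q        : supply/exhaust ventilation flow through the FF (m3/s)
    """
    c_nf = c_ff = 0.0
    for _ in range(steps):
        dc_nf = (emission + beta * (c_ff - c_nf)) / v_nf
        dc_ff = (beta * (c_nf - c_ff) - q * c_ff) / v_ff
        c_nf += dc_nf * dt
        c_ff += dc_ff * dt
    return c_nf, c_ff
```

At steady state the far field settles at emission/q and the near field sits emission/beta above it, which is why the one-box model (a single fully mixed volume) underestimates near-source concentrations; a three-box layout adds one more balance of the same form.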

  4. Fractional Gaussian noise: Prior specification and model comparison

    KAUST Repository

    Sørbye, Sigrunn Holbek

    2017-07-07

Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.
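The Hurst characterisation mentioned above follows from the standard closed-form autocovariance of fGn, which reduces to white noise at H = 0.5 (the PC prior's base model); sigma2 here denotes the marginal variance.

```python
def fgn_autocov(k, hurst, sigma2=1.0):
    """Autocovariance of fGn at integer lag k:
    gamma(k) = sigma2/2 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))."""
    h2 = 2.0 * hurst
    k = abs(k)
    return 0.5 * sigma2 * (abs(k + 1) ** h2
                           - 2.0 * abs(k) ** h2
                           + abs(k - 1) ** h2)
```

Positive lag-1 autocovariance for H > 0.5 (persistence) and negative for H < 0.5 (antipersistence) is exactly the dependency structure the abstract refers to.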

  5. Fractional Gaussian noise: Prior specification and model comparison

    KAUST Repository

Sørbye, Sigrunn Holbek; Rue, Haavard

    2017-01-01

Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.

  6. The memory state heuristic: A formal model based on repeated recognition judgments.

    Science.gov (United States)

    Castela, Marta; Erdfelder, Edgar

    2017-02-01

    The recognition heuristic (RH) theory predicts that, in comparative judgment tasks, if one object is recognized and the other is not, the recognized one is chosen. The memory-state heuristic (MSH) extends the RH by assuming that choices are not affected by recognition judgments per se, but by the memory states underlying these judgments (i.e., recognition certainty, uncertainty, or rejection certainty). Specifically, the larger the discrepancy between memory states, the larger the probability of choosing the object in the higher state. The typical RH paradigm does not allow estimation of the underlying memory states because it is unknown whether the objects were previously experienced or not. Therefore, we extended the paradigm by repeating the recognition task twice. In line with high threshold models of recognition, we assumed that inconsistent recognition judgments result from uncertainty whereas consistent judgments most likely result from memory certainty. In Experiment 1, we fitted 2 nested multinomial models to the data: an MSH model that formalizes the relation between memory states and binary choices explicitly and an approximate model that ignores the (unlikely) possibility of consistent guesses. Both models provided converging results. As predicted, reliance on recognition increased with the discrepancy in the underlying memory states. In Experiment 2, we replicated these results and found support for choice consistency predictions of the MSH. Additionally, recognition and choice latencies were in agreement with the MSH in both experiments. Finally, we validated critical parameters of our MSH model through a cross-validation method and a third experiment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. A model of vulcanian explosions

    International Nuclear Information System (INIS)

    Woods, A.W.

    1995-01-01

We present a model of the initial stages of the explosive eruption of magma from a volcanic conduit, as occurs in Vulcanian style eruptions. We assume there is a volatile-rich (1-10 wt%) mixture of magma, vaporised groundwater and exsolved volatiles, trapped at high pressure (1-100 atm) just below a plug in a volcanic conduit. If the plug disrupts, there is an explosive eruption in which a rarefaction wave propagates into the conduit, allowing the volatile-rich mixture to expand and discharge into the atmosphere ahead of the vent. Typically, the explosions are so rapid that coarse-grained ejecta (>0.5 mm) do not remain in thermal equilibrium with the gas, and this leads to significantly lower velocities and temperatures than predicted by an equilibrium model. Material may erupt from the vent at speeds of 100-400 m s⁻¹ with an initial mass flux of order 10⁷-10⁹ kg s⁻¹, consistent with video observations of eruptions and measurements of the ballistic dispersal of large clasts. (orig.)

  8. Mechanisms of chemical vapor generation by aqueous tetrahydridoborate. Recent developments toward the definition of a more general reaction model

    Science.gov (United States)

    D'Ulivo, Alessandro

    2016-05-01

A reaction model describing the reactivity of metal and semimetal species with aqueous tetrahydridoborate (THB) has been drawn taking into account the mechanism of chemical vapor generation (CVG) of hydrides, recent evidence on the mechanism of interference and formation of byproducts in arsane generation, and other evidence in the field of the synthesis of nanoparticles and the catalytic hydrolysis of THB by metal nanoparticles. The new "non-analytical" reaction model is of more general validity than the previously described "analytical" reaction model for CVG. The non-analytical model is valid for the reaction of a single analyte with THB and for conditions approaching those typically encountered in the synthesis of nanoparticles and macroprecipitates. It reduces to the previously proposed analytical model under conditions typically employed in CVG for trace analysis (analyte below the μM level, borane/analyte ≫ 10³ mol/mol, no interference). The non-analytical reaction model is not able to explain all the interference effects observed in CVG, which can be achieved only by assuming interaction among the species of the reaction pathways of different analytical substrates. The reunification of CVG, the synthesis of nanoparticles by aqueous THB and the catalytic hydrolysis of THB inside a common frame contributes to the rationalization of the complex reactivity of aqueous THB with metal and semimetal species.

  9. Spectra of conditionalization and typicality in the multiverse

    Science.gov (United States)

    Azhar, Feraz

    2016-02-01

    An approach to testing theories describing a multiverse that has gained interest of late involves comparing theory-generated probability distributions over observables with their experimentally measured values. It is likely that such distributions, were we indeed able to calculate them unambiguously, would assign low probabilities to any such experimental measurements. An alternative to thereby rejecting these theories is to conditionalize the distributions involved by restricting attention to domains of the multiverse in which we might arise. In order to elicit a crisp prediction, however, one needs to make a further assumption about how typical we are of the chosen domains. In this paper, we investigate interactions between the spectra of available assumptions regarding both conditionalization and typicality, and draw out the effects of these interactions in a concrete setting; namely, on predictions of the total number of species that contribute significantly to dark matter. In particular, for each conditionalization scheme studied, we analyze how correlations between densities of different dark matter species affect the prediction, and explicate the effects of assumptions regarding typicality. We find that the effects of correlations can depend on the conditionalization scheme, and that in each case atypicality can significantly change the prediction. In doing so, we demonstrate the existence of overlaps in the predictions of different "frameworks" consisting of conjunctions of theory, conditionalization scheme and typicality assumption. This conclusion highlights the acute challenges involved in using such tests to identify a preferred framework that aims to describe our observational situation in a multiverse.

  10. Random incidence absorption coefficients of porous absorbers based on local and extended reaction models

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2011-01-01

    This study investigates the random incidence acoustical characteristics of typical building elements made of porous materials assuming extended and local reaction. For each surface reaction, five well-established wave propagation models, the Delany-Bazley, Miki, Beranek, Allard-Champoux, and Biot model, are employed. Effects of the flow resistivity and the absorber thickness on the difference between the two surface reaction models are examined and discussed. For a porous absorber backed by a rigid surface, the local reaction models give errors of less than 10% if the thickness exceeds 120 mm for a flow resistivity of 5000 N s m^-4. As the flow resistivity doubles, a decrease in the required thickness by 25 mm is observed to achieve the same amount of error. For an absorber backed by an air gap, the thickness ratio between the material and air cavity is important. If the absorber thickness is approximately 40% of the cavity depth, the local reaction...

  11. Food and Wine Tourism: an Analysis of Italian Typical Products

    Directory of Open Access Journals (Sweden)

    Francesco Maria Olivieri

    2015-06-01

    Full Text Available The aim of this work is to examine the specific role of local food production and its relationship with the tourism sector in the valorization and promotion of territorial cultural heritage. Modern agriculture has evolved, and in recent years several specific features have emerged in different territorial areas. Tourists would like to have a complete consumption experience of a destination, encompassing its natural and cultural heritage and genuine food. This contribution addresses topics connected to the relationship between the typical production system and the tourism sector, underlining the competitive advantages for local development. The typical productions are Designation of Protected Origin (Italian DOP; for wine, the DOCG and DOC certifications) and Typical Geographical Indication (IGP; for wine, IGT). The aim is an analysis of the specialization of these kinds of production at the Italian regional scale. The implication of the work is connected with defining the necessary and appropriate value strategies, based on marketing principles, in order to translate the benefits of typical productions into additional value for the local system. Thus, the final part of the paper describes the potential dynamics between the agriturismo accommodation typology and the typical production system of the Italian administrative regions.

  12. Suggestion of typical phases of in-vessel fuel-debris by thermodynamic calculation for decommissioning technology of Fukushima-Daiichi nuclear power station

    Energy Technology Data Exchange (ETDEWEB)

    Ikeuchi, Hirotomo; Yano, Kimihiko; Kaji, Naoya; Washiya, Tadahiro [Japan Atomic Energy Agency, 4-33 Muramatsu, Tokai-mura, Ibaraki-ken, 319-1194 (Japan); Kondo, Yoshikazu; Noguchi, Yoshikazu [PESCO Co.Ltd. (Korea, Republic of)

    2013-07-01

    For the decommissioning of the Fukushima-Daiichi Nuclear Power Station (1F), the characterization of fuel-debris in cores of Units 1-3 is necessary. In this study, typical phases of the in-vessel fuel-debris were estimated using a thermodynamic equilibrium (TDE) calculation. The FactSage program and NUCLEA database were applied to estimate the phase equilibria of debris. It was confirmed that the TDE calculation using the database can reproduce the phase separation behavior of debris observed in the Three Mile Island accident. In the TDE calculation of 1F, the oxygen potential [G(O2)] was assumed to be a variable. At low G(O2), where metallic zirconium remains, (U,Zr)O2, UO2, and ZrO2 were found as oxides, and oxygen-dispersed Zr, Fe2(Zr,U), and Fe3UZr2 were found as metals. With an increase in zirconium oxidation, the mass of those metals, especially Fe3UZr2, decreased, but the other phases of metals hardly changed qualitatively. Consequently, (U,Zr)O2 is suggested as a typical phase of oxide, and Fe2(Zr,U) is suggested as that of metal. However, a more detailed estimation is necessary to consider the distribution of Fe in the reactor pressure vessel through core-melt progression. (authors)

  13. Typical Relaxation of Isolated Many-Body Systems Which Do Not Thermalize

    Science.gov (United States)

    Balz, Ben N.; Reimann, Peter

    2017-05-01

    We consider isolated many-body quantum systems which do not thermalize; i.e., expectation values approach an (approximately) steady longtime limit which disagrees with the microcanonical prediction of equilibrium statistical mechanics. A general analytical theory is worked out for the typical temporal relaxation behavior in such cases. The main prerequisites are initial conditions which appreciably populate many energy levels and do not give rise to significant spatial inhomogeneities on macroscopic scales. The theory explains very well the experimental and numerical findings in a trapped-ion quantum simulator exhibiting many-body localization, in ultracold atomic gases, and in integrable hard-core boson and XXZ models.

  14. Mother-Child Play: Children with Down Syndrome and Typical Development

    Science.gov (United States)

    Venuti, P.; de Falco, S.; Esposito, G.; Bornstein, Marc H.

    2009-01-01

    Child solitary and collaborative mother-child play with 21 children with Down syndrome and 33 mental-age-matched typically developing children were compared. In solitary play, children with Down syndrome showed less exploratory but similar symbolic play compared to typically developing children. From solitary to collaborative play, children with…

  15. Narrative versus Style : Effect of Genre Typical Events versus Genre Typical Filmic Realizations on Film Viewers' Genre Recognition

    NARCIS (Netherlands)

    Visch, V.; Tan, E.

    2008-01-01

    This study investigated whether film viewers recognize four basic genres (comic, drama, action and nonfiction) on the basis of genre-typical event cues or of genre-typical filmic realization cues of events. Event cues are similar to the narrative content of a film sequence, while filmic realization

  16. LOCA assessment experiments in a full-elevation, CANDU-typical test facility

    International Nuclear Information System (INIS)

    Ingham, P.J.; McGee, G.R.; Krishnan, V.S.

    1990-01-01

    The RD-14 thermal-hydraulics test facility, located at the Whiteshell Nuclear Research Establishment, is a full-elevation model representative of a CANDU primary heat transport system. The facility is scaled to accommodate a single, full-scale (5.0 MW, 21 kg/s), electrically heated channel per pass. The steam generators, pumps, headers, feeders and heated channels are arranged in a typical CANDU figure-of-eight geometry. The loop has an emergency coolant injection system (ECI) that may be operated in several modes, including typical features of the various ECI systems found in CANDU reactors. A series of experiments has been performed in RD-14 to investigate the thermal-hydraulic behaviour during the blowdown and injection phases of a loss-of-coolant accident (LOCA). The tests were designed to cover a full range of break sizes, from feeder-sized breaks to guillotine breaks in either an inlet or an outlet header. Breaks resulting in channel flow stagnation were also investigated. This paper reviews the results of some of the LOCA tests carried out in RD-14, and discusses some of the behaviour observed. Plans for future experiments in the RD-14 facility, modified to contain multiple flow channels, are outlined. (orig.)

  17. Typical electric bills, January 1, 1981

    International Nuclear Information System (INIS)

    1981-01-01

    The Typical Electric Bills report is prepared by the Electric Power Division; Office of Coal, Nuclear, Electric and Alternate Fuels; Energy Information Administration; Department of Energy. The publication is geared to a variety of applications by electric utilities, industry, consumers, educational institutions, and government in recognition of the growing importance of energy planning in contemporary society. 19 figs., 18 tabs

  18. Development of cortical asymmetry in typically developing children and its disruption in attention-deficit/hyperactivity disorder.

    Science.gov (United States)

    Shaw, Philip; Lalonde, Francois; Lepage, Claude; Rabin, Cara; Eckstrand, Kristen; Sharp, Wendy; Greenstein, Deanna; Evans, Alan; Giedd, J N; Rapoport, Judith

    2009-08-01

    Just as typical development of anatomical asymmetries in the human brain has been linked with normal lateralization of motor and cognitive functions, disruption of asymmetry has been implicated in the pathogenesis of neurodevelopmental disorders such as attention-deficit/hyperactivity disorder (ADHD). No study has examined the development of cortical asymmetry using longitudinal neuroanatomical data. To delineate the development of cortical asymmetry in children with and without ADHD. Longitudinal study. Government Clinical Research Institute. A total of 218 children with ADHD and 358 typically developing children, from whom 1133 neuroanatomical magnetic resonance images were acquired prospectively. Cortical thickness was estimated at 40 962 homologous points in the left and right hemispheres, and the trajectory of change in asymmetry was defined using mixed-model regression. In right-handed typically developing individuals, there was a mean (SE) increase in the relative thickness of the right orbitofrontal and inferior frontal cortex with age of 0.011 (0.0018) mm per year (t(337) = 6.2, P < .001), with a corresponding left-hemispheric increase in the occipital cortical regions of 0.013 (0.0015) mm per year (t(337) = 8.1, P < .001). The corresponding development of asymmetry in non-right-handed typically developing individuals was less extensive and was localized to different cortical regions. In ADHD, the posterior component of this evolving asymmetry was intact, but the prefrontal component was lost. These findings explain the way that, in typical development, the increased dimensions of the right frontal and left occipital cortical regions emerge in adulthood from the reversed pattern of childhood cortical asymmetries. Loss of the prefrontal component of this evolving asymmetry in ADHD is compatible with disruption of prefrontal function in the disorder and demonstrates the way that disruption of typical processes of asymmetry can inform our understanding of neurodevelopmental disorders.

  19. A viscoplastic shear-zone model for episodic slow slip events in oceanic subduction zones

    Science.gov (United States)

    Yin, A.; Meng, L.

    2016-12-01

    Episodic slow slip events occur widely along oceanic subduction zones at the brittle-ductile transition depths (~20-50 km). Although efforts have been devoted to unraveling their mechanical origins, the physical controls on the wide range of their recurrence intervals and slip durations remain unclear. In this study we present a simple mechanical model that attempts to account for the observed temporal evolution of slow slip events. In our model we assume that slow slip events occur in a viscoplastic shear zone (i.e., a Bingham material), which has an upper static and a lower dynamic plastic yield strength. We further assume that the hanging wall deformation is approximated as an elastic spring. We envision the shear zone to be initially locked during forward/landward motion but subsequently unlocked when the elastic and gravity-induced stress exceeds the static yield strength of the shear zone. This leads to backward/trenchward motion damped by viscous shear-zone deformation. As the elastic spring progressively loosens, the hanging wall velocity evolves with time and the viscous shear stress eventually reaches the dynamic yield strength. This is followed by the termination of the trenchward motion when the elastic stress is balanced by the dynamic yield strength of the shear zone and the gravity. In order to account for the saw-tooth slip-history pattern of typical repeated slow slip events, we assume that the shear zone progressively strengthens after each slow slip cycle, possibly caused by dilatancy as commonly assumed or by progressive fault healing through solution-transport mechanisms. We quantify our conceptual model by obtaining simple analytical solutions. Our model results suggest that the duration of the landward motion increases with the down-dip length and the static yield strength of the shear zone, but decreases with the ambient loading velocity and the elastic modulus of the hanging wall.
The duration of the backward/trenchward motion depends
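    The lock-slip-relock cycle described above can be sketched as a one-dimensional spring-slider whose fault obeys a Bingham flow rule: locked below a static yield strength, sliding against a dynamic yield strength with viscous damping once unlocked. All parameter values below (stiffness, loading velocity, strengths, viscosity, relock threshold) are illustrative assumptions, not values from the paper.

```python
# Spring-slider sketch of episodic slow slip with a Bingham shear zone.
# Stress builds elastically at the plate rate until it exceeds the static
# yield strength tau_s; the zone then slips at v = (tau - tau_d)/eta until
# creep dies out, and the cycle repeats.

def simulate_slow_slip(k=1.0, v_plate=0.015, tau_s=2.0, tau_d=1.0,
                       eta=0.5, v_min=0.03, dt=0.01, t_end=400.0):
    """Return (times, stresses, event_onset_times) for a 1-D spring-slider."""
    x = 0.0            # accumulated slip on the shear zone
    slipping = False
    times, stresses, events = [], [], []
    for i in range(int(t_end / dt)):
        t = i * dt
        tau = k * (v_plate * t - x)        # elastic stress from loading spring
        if not slipping and tau > tau_s:
            slipping = True                # static strength exceeded: unlock
            events.append(t)
        if slipping:
            v_slip = (tau - tau_d) / eta   # Bingham flow rule
            if v_slip < v_min:             # creep dies out: relock
                slipping = False
            else:
                x += v_slip * dt
        times.append(t)
        stresses.append(tau)
    return times, stresses, events
```

With these parameters the model produces quasi-periodic events: stress never exceeds the static strength by more than one loading increment, and the recurrence interval is set by the stress drop divided by the loading rate, mirroring the analytic scaling described in the abstract.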

  20. A Typical Synergy

    Science.gov (United States)

    van Noort, Thomas; Achten, Peter; Plasmeijer, Rinus

    We present a typical synergy between dynamic types (dynamics) and generalised algebraic datatypes (GADTs). The former provides a clean approach to integrating dynamic typing in a statically typed language. It allows values to be wrapped together with their type in a uniform package, deferring type unification until run time using a pattern match annotated with the desired type. The latter allows for the explicit specification of constructor types, as to enforce their structural validity. In contrast to ADTs, GADTs are heterogeneous structures since each constructor type is implicitly universally quantified. Unfortunately, pattern matching only enforces structural validity and does not provide instantiation information on polymorphic types. Consequently, functions that manipulate such values, such as a type-safe update function, are cumbersome due to boilerplate type representation administration. In this paper we focus on improving such functions by providing a new GADT annotation via a natural synergy with dynamics. We formally define the semantics of the annotation and touch on novel other applications of this technique such as type dispatching and enforcing type equality invariants on GADT values.

  1. Empirical models for predicting wind potential for wind energy applications in rural locations of Nigeria

    Energy Technology Data Exchange (ETDEWEB)

    Odo, F.C. [National Centre for Energy Research and Development, University of Nigeria, Nsukka (Nigeria); Department of Physics and Astronomy, University of Nigeria, Nsukka (Nigeria); Akubue, G.U.; Offiah, S.U.; Ugwuoke, P.E. [National Centre for Energy Research and Development, University of Nigeria, Nsukka (Nigeria)

    2013-07-01

    In this paper, we use the correlation between the average wind speed and ambient temperature to develop models for predicting wind potentials for two Nigerian locations. Assuming that the troposphere is a typical heterogeneous mixture of ideal gases, we find that for the studied locations, wind speed clearly correlates with ambient temperature via a simple polynomial of 3rd degree. The coefficient of determination and root-mean-square error of the models are 0.81; 0.0024 and 0.56; 0.0041, respectively, for Enugu (6.4°N, 7.5°E) and Owerri (5.5°N, 7.0°E). These results suggest that the temperature-based model can be used, with acceptable accuracy, in predicting wind potentials needed for preliminary design assessment of wind energy conversion devices for the locations and others with similar meteorological conditions.
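    The fitting procedure described above (a 3rd-degree polynomial of wind speed on ambient temperature, scored by R^2 and RMSE) can be sketched as follows. The synthetic "measurements" are illustrative assumptions, not the Enugu/Owerri data used in the paper.

```python
# Cubic-polynomial regression of mean wind speed on ambient temperature,
# with coefficient of determination (R^2) and root-mean-square error (RMSE).
import numpy as np

def fit_cubic_wind_model(temp_c, wind_ms):
    coeffs = np.polyfit(temp_c, wind_ms, deg=3)   # [a3, a2, a1, a0]
    pred = np.polyval(coeffs, temp_c)
    resid = wind_ms - pred
    r2 = 1.0 - np.sum(resid**2) / np.sum((wind_ms - wind_ms.mean())**2)
    rmse = np.sqrt(np.mean(resid**2))
    return coeffs, r2, rmse

rng = np.random.default_rng(0)
temp = np.linspace(20.0, 32.0, 120)                   # monthly mean temps, deg C
true = 0.002*temp**3 - 0.15*temp**2 + 3.5*temp - 20   # assumed cubic relation
wind = true + rng.normal(0.0, 0.05, temp.size)        # add measurement noise
coeffs, r2, rmse = fit_cubic_wind_model(temp, wind)
```

Because the synthetic noise is small, the fit recovers the assumed relation with a high R^2; on real station data the quality of fit would of course depend on the local wind-temperature coupling.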

  2. Pressure balance inconsistency exhibited in a statistical model of magnetospheric plasma

    Science.gov (United States)

    Garner, T. W.; Wolf, R. A.; Spiro, R. W.; Thomsen, M. F.; Korth, H.

    2003-08-01

    While quantitative theories of plasma flow from the magnetotail to the inner magnetosphere typically assume adiabatic convection, it has long been understood that these convection models tend to overestimate the plasma pressure in the inner magnetosphere. This phenomenon is called the pressure crisis or the pressure balance inconsistency. In order to analyze it in a new and more detailed manner we utilize an empirical model of the proton and electron distribution functions in the near-Earth plasma sheet (beyond about -50 RE). The observed behavior of the distribution function can be attributed to gradient/curvature drift for large isotropic energy invariants but not for small invariants. The tailward gradient of the distribution function indicates a violation of the adiabatic drift condition in the plasma sheet. It also confirms the existence of a "number crisis" in addition to the pressure crisis. In addition, plasma sheet pressure gradients, when crossed with the gradient of flux tube volume computed from the [1989] magnetic field model, indicate Region 1 currents on the dawn and dusk sides of the outer plasma sheet.

  3. A multi-layer box model of carbon dynamics in soil

    International Nuclear Information System (INIS)

    Kuc, T.

    2005-01-01

    A multi-layer box model (MLB) for quantification of carbon fluxes between soil and atmosphere has been developed. In the model, the soil carbon reservoir is represented by two boxes: a fast decomposition box (FDB) and a slow decomposition box (SDB), characterised by substantially different turnover times (TT) of carbon compounds. Each box has an internal structure (sub-compartments) accounting for carbon deposited in consecutive time intervals. The rate of decomposition of carbon compounds in each sub-compartment is proportional to the carbon content. With the aid of the MLB model and the 14C signature of carbon dioxide, the fluxes entering and leaving the boxes, the turnover time of carbon in each box, and the ratio of the mass of carbon in the slow and fast boxes (Ms/Mf) were calculated. The MLB model yields a turnover time of carbon in the FDB (TTf) of ca. 14 years for typical investigated soils of temperate climate ecosystems. The calculated contribution of the CO2 flux originating from the slow box (Fs) to the total CO2 flux into the atmosphere ranges from 12% to 22%. These values are in agreement with experimental observations at different locations. Assuming that the input flux of carbon (Fin) to the soil system is doubled within a period of 100 years, the soil buffering capacity for excess carbon predicted by the MLB model for typical soil parameters may vary in the range between 26% and 52%. The highest values are obtained for soils characterised by long TTf and a well developed old carbon pool. (author)
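    The fast/slow two-box structure described above can be sketched as a pair of first-order pools: carbon enters each box at a fixed flux and decomposes in proportion to its content. The input split, turnover times and spin-up length below are illustrative assumptions, not the calibrated values from the paper.

```python
# Two-box (fast/slow) soil-carbon sketch: first-order decomposition in each
# pool, run to near steady state, then report the slow box's share of the
# total CO2 flux back to the atmosphere.

def simulate_two_box(f_in=1.0, frac_fast=0.8, tt_fast=14.0, tt_slow=200.0,
                     years=2000, dt=1.0):
    """Return (mass_fast, mass_slow, slow_flux_share) after spin-up."""
    m_fast = m_slow = 0.0
    for _ in range(int(years / dt)):
        out_fast = m_fast / tt_fast          # first-order decomposition
        out_slow = m_slow / tt_slow
        m_fast += (f_in * frac_fast - out_fast) * dt
        m_slow += (f_in * (1.0 - frac_fast) - out_slow) * dt
    total_out = m_fast / tt_fast + m_slow / tt_slow
    slow_share = (m_slow / tt_slow) / total_out
    return m_fast, m_slow, slow_share
```

At steady state each pool holds (input flux) x (turnover time) of carbon, so with an assumed 80/20 input split the slow box contributes 20% of the outgoing CO2 flux, within the 12-22% range quoted in the abstract.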

  4. Shared temporoparietal dysfunction in dyslexia and typical readers with discrepantly high IQ.

    Science.gov (United States)

    Hancock, Roeland; Gabrieli, John D E; Hoeft, Fumiko

    2016-12-01

    It is currently believed that reading disability (RD) should be defined by reading level without regard to broader aptitude (IQ). There is debate, however, about how to classify individuals who read in the typical range but less well than would be expected by their higher IQ. We used functional magnetic resonance imaging (fMRI) in 49 children to examine whether those with typical, but discrepantly low reading ability relative to IQ, show dyslexia-like activation patterns during reading. Children who were typical readers with high-IQ discrepancy showed reduced activation in left temporoparietal neocortex relative to two control groups of typical readers without IQ discrepancy. This pattern was consistent and spatially overlapping with results in children with RD compared to typically reading children. The results suggest a shared neurological atypicality in regions associated with phonological processing between children with dyslexia and children with typical reading ability that is substantially below their IQ.

  5. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    Science.gov (United States)

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased estimates of covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
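    The zero-inflated Poisson (ZIP) mixture these models build on is simple to write down: with probability pi an observation is a structural zero, otherwise it is drawn from a Poisson distribution. The sketch below shows the pmf only; the random-effects and heterogeneous-covariance machinery of the paper is omitted, and the parameter values are illustrative.

```python
# Zero-inflated Poisson probability mass function: a two-component mixture of
# a point mass at zero (probability pi) and a Poisson(lam) count process.
import math

def zip_pmf(k, pi, lam):
    """P(Y = k) under a zero-inflated Poisson model."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson   # structural + sampling zeros
    return (1.0 - pi) * poisson
```

The mixture has mean (1 - pi) * lam and a variance exceeding its mean, which is why a plain Poisson model underfits count data with excess zeros.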

  6. WHEN MODEL MEETS REALITY – A REVIEW OF SPAR LEVEL 2 MODEL AGAINST FUKUSHIMA ACCIDENT

    Energy Technology Data Exchange (ETDEWEB)

    Zhegang Ma

    2013-09-01

    The Standardized Plant Analysis Risk (SPAR) models are a set of probabilistic risk assessment (PRA) models used by the Nuclear Regulatory Commission (NRC) to evaluate the risk of operations at U.S. nuclear power plants and provide inputs to the risk-informed regulatory process. A small number of SPAR Level 2 models have been developed, mostly for feasibility study purposes. They extend the Level 1 models to include containment systems, group plant damage states, and model containment phenomenology and accident progression in containment event trees. A severe earthquake and tsunami hit the eastern coast of Japan in March 2011 and caused significant damage to the reactors at the Fukushima Daiichi site. Station blackout (SBO), core damage, containment damage, hydrogen explosion, and intensive radioactivity release, which had previously been analyzed and assumed as postulated accident progressions in PRA models, now occurred to varying degrees at the multi-unit Fukushima Daiichi site. This paper reviews and compares a typical BWR SPAR Level 2 model with the "real" accident progressions and sequences that occurred in Fukushima Daiichi Units 1, 2, and 3. It shows that the SPAR Level 2 model is a robust PRA model that could very reasonably describe the accident progression for a real and complicated nuclear accident in the world. On the other hand, the comparison shows that the SPAR model could be enhanced by incorporating some accident characteristics for better representation of severe accident progression.

  7. Detection of the Typical Pulse Condition on Cun-Guan-Chi Based on Image Sensor

    Directory of Open Access Journals (Sweden)

    Aihua ZHANG

    2014-02-01

    Full Text Available In order to simulate diagnosis by pulse palpation in Traditional Chinese Medicine, a device based on a CCD image sensor was designed to detect the pulse image at the Cun-Guan-Chi positions. Using the MM-3 pulse model as the experimental subject, synchronous pulse image data for several typical pulse conditions were collected by this device at Cun-Guan-Chi. The typical pulses include the normal pulse, the slippery pulse, the slow pulse and the soft pulse. According to the lens imaging principle, the pulse waves were extracted using the area method; the 3D pulse condition image was then reconstructed and features were extracted, including the period, the frequency, the width, and the length. Slippery pulse data from pregnant women were also collected by this device, and the pulse images were analyzed. Comparing the features of the slippery pulse model with those of the slippery pulse of pregnant women shows consistent results. This study overcomes shortcomings of existing detection devices, such as the small number of detection sites and the limited information obtained, so that more comprehensive 3D pulse condition information can be acquired. This work lays a foundation for realizing objective diagnosis and revealing comprehensive pulse information.

  8. Geolocating fish using Hidden Markov Models and Data Storage Tags

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Pedersen, Martin Wæver; Madsen, Henrik

    2009-01-01

    Geolocation of fish based on data from archival tags typically requires a statistical analysis to reduce the effect of measurement errors. In this paper we present a novel technique for this analysis, one based on Hidden Markov Models (HMMs). We assume that the actual path of the fish is generated by a biased random walk. The HMM methodology produces, for each time step, the probability that the fish resides in each grid cell. Because there is no Monte Carlo step in our technique, we are able to estimate parameters within the likelihood framework. The method does not require the distribution ... of inference in state-space models of animals. The technique can be applied to geolocation based on light, on tidal patterns, or measurement of other variables that vary with space. We illustrate the method through application to a simulated data set where geolocation relies on depth data exclusively.
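    The filter described above can be sketched on a toy grid: a biased random-walk transition spreads the probability mass, and each archived measurement reweights the grid through a likelihood. The 1-D grid, the bias probabilities and the Gaussian measurement model below are illustrative assumptions, not the paper's implementation.

```python
# Grid-based HMM filter: predict with a biased random walk, update with a
# Gaussian likelihood of each measurement, renormalize, and record the
# per-time-step posterior over grid cells.
import math

def hmm_filter(observations, n_cells=50, p_stay=0.4, p_up=0.2, p_down=0.4,
               obs_sigma=2.0):
    """Return the posterior probability over grid cells at each time step."""
    prob = [1.0 / n_cells] * n_cells                 # uniform prior
    posteriors = []
    for obs in observations:
        # Transition: biased random walk on the grid (reflecting boundaries).
        new = [0.0] * n_cells
        for i, p in enumerate(prob):
            new[i] += p_stay * p
            new[max(i - 1, 0)] += p_up * p
            new[min(i + 1, n_cells - 1)] += p_down * p
        # Update: Gaussian likelihood of the measurement given each cell.
        like = [math.exp(-0.5 * ((obs - i) / obs_sigma) ** 2)
                for i in range(n_cells)]
        prob = [n_i * l_i for n_i, l_i in zip(new, like)]
        z = sum(prob)                                # normalizing constant
        prob = [p / z for p in prob]
        posteriors.append(prob)
    return posteriors
```

Because the normalizing constant at each step is the likelihood of that observation, the product of these constants gives the data likelihood, which is what allows parameter estimation without a Monte Carlo step.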

  9. Gender Gap in the National College Entrance Exam Performance in China: A Case Study of a Typical Chinese Municipality

    Science.gov (United States)

    Zhang, Yu; Tsang, Mun

    2015-01-01

    This is one of the first studies to investigate the gender achievement gap in the National College Entrance Exam in a typical municipality in China, the crucial examination for the transition from high school to higher education in that country. Using an ordinary least squares model and a quantile regression model, the study consistently finds that…

  10. Advanced software tool for the creation of a typical meteorological year

    International Nuclear Information System (INIS)

    Skeiker, Kamal; Ghani, Bashar Abdul

    2008-01-01

    The generation of a typical meteorological year is of great importance for calculations concerning many applications in the field of thermal engineering. In this context, the method proposed by Hall et al. is selected for generating typical data, and an improved criterion for the final selection of the typical meteorological month (TMM) is demonstrated. The final selection of the most representative year was done by examining a composite score S, calculated as the weighted sum of the scores of the four meteorological parameters used. These parameters are air dry bulb temperature, relative humidity, wind velocity and global solar radiation intensity. Moreover, a new software tool has been developed using Delphi 6.0, utilizing the Finkelstein-Schafer statistical method together with the improved selection criterion for the creation of a typical meteorological year for any site of concern. The tool allows the user to perform this task without an intimate knowledge of all of the computational details. The final alphanumerical and graphical results are presented on screen, and can be saved to a file or printed as a hard copy. Using this software tool, a typical meteorological year was generated for Damascus, the capital of Syria, as a test run example. The data processed were obtained from the Department of Meteorology and cover a period of 10 years (1991-2000).
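    At the core of the Hall et al. method is the Finkelstein-Schafer (FS) statistic: for a candidate month, the empirical cumulative distribution function (CDF) of a daily parameter is compared with the long-term CDF for that calendar month, and the FS statistic is the mean absolute difference between the two. The sketch below assumes the comparison is evaluated at the candidate month's own daily values; the toy data are illustrative.

```python
# Finkelstein-Schafer statistic for typical-meteorological-month selection:
# mean absolute difference between a candidate month's empirical CDF and the
# long-term empirical CDF of the same daily parameter.

def empirical_cdf(sample, x):
    """Fraction of the sample less than or equal to x."""
    return sum(1 for v in sample if v <= x) / len(sample)

def fs_statistic(candidate_month, long_term):
    """Mean |CDF difference|, evaluated at the candidate month's values."""
    return sum(abs(empirical_cdf(candidate_month, x) -
                   empirical_cdf(long_term, x))
               for x in candidate_month) / len(candidate_month)
```

The month minimizing a weighted sum of FS statistics over the four parameters (dry bulb temperature, relative humidity, wind velocity, global solar radiation) would then be chosen as the TMM, which is exactly the composite-score step described in the abstract.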

  11. Wildfire Risk Assessment in a Typical Mediterranean Wildland-Urban Interface of Greece

    Science.gov (United States)

    Mitsopoulos, Ioannis; Mallinis, Giorgos; Arianoutsou, Margarita

    2015-04-01

    The purpose of this study was to assess spatial wildfire risk in a typical Mediterranean wildland-urban interface (WUI) in Greece and the potential effect of three different burning condition scenarios on the following four major wildfire risk components: burn probability, conditional flame length, fire size, and source-sink ratio. We applied the Minimum Travel Time fire simulation algorithm using the FlamMap and ArcFuels tools to characterize the potential response of the wildfire risk to a range of different burning scenarios. We created site-specific fuel models of the study area by measuring the field fuel parameters in representative natural fuel complexes, and we determined the spatial extent of the different fuel types and residential structures in the study area using photointerpretation procedures of large scale natural color orthophotographs. The results included simulated spatially explicit fire risk components along with wildfire risk exposure analysis and the expected net value change. Statistical significance differences in simulation outputs between the scenarios were obtained using Tukey's significance test. The results of this study provide valuable information for decision support systems for short-term predictions of wildfire risk potential and inform wildland fire management of typical WUI areas in Greece.

  13. Apples are not the only fruit: The effects of concept typicality on semantic representation in the anterior temporal lobe

    Directory of Open Access Journals (Sweden)

    Anna M. Woollams

    2012-04-01

    Full Text Available Intuitively, an apple seems a fairly good example of a fruit, whereas an avocado seems less so. The extent to which an exemplar is representative of its category, a variable known as concept typicality, has long been thought to be a key dimension determining semantic representation. Concept typicality is, however, correlated with a number of other variables, in particular age of acquisition and name frequency. Consideration of picture naming accuracy from a large case-series of semantic dementia patients demonstrated strong effects of concept typicality that were maximal in the moderately impaired patients, over and above the impact of age of acquisition and name frequency. Induction of a temporary virtual lesion to the left anterior temporal lobe, the region most commonly affected in semantic dementia, via repetitive Transcranial Magnetic Stimulation produced an enhanced effect of concept typicality in the picture naming of normal participants, but did not affect the magnitude of the age of acquisition or name frequency effects. These results indicate that concept typicality exerts its influence on semantic representations themselves, as opposed to the strength of connections outside the semantic system. To date, there has been little direct exploration of the dimension of concept typicality within connectionist models of intact and impaired conceptual representation, and these findings provide a target for future computational simulation.

  14. The contribution of diffusion-weighted MR imaging to distinguishing typical from atypical meningiomas

    Energy Technology Data Exchange (ETDEWEB)

    Hakyemez, Bahattin [Uludag University School of Medicine, Department of Radiology, Gorukle, Bursa (Turkey); Bursa State Hospital, Department of Radiology, Bursa (Turkey); Yildirim, Nalan; Gokalp, Gokhan; Erdogan, Cuneyt; Parlak, Mufit [Uludag University School of Medicine, Department of Radiology, Gorukle, Bursa (Turkey)

    2006-08-15

    Atypical/malignant meningiomas recur more frequently than typical meningiomas. In this study, the contribution of diffusion-weighted MR imaging to the differentiation of atypical/malignant and typical meningiomas and to the determination of histological subtypes of typical meningiomas was investigated. The study was performed prospectively on 39 patients. The signal intensity of the lesions was evaluated on trace and apparent diffusion coefficient (ADC) images. ADC values were measured in the lesions and peritumoral edema. Student's t-test was used for statistical analysis. P<0.05 was considered statistically significant. Mean ADC values in atypical/malignant and typical meningiomas were 0.75±0.21 and 1.17±0.21, respectively. Mean ADC values for subtypes of typical meningiomas were as follows: meningothelial, 1.09±0.20; transitional, 1.19±0.07; fibroblastic, 1.29±0.28; and angiomatous, 1.48±0.10. Normal white matter was 0.91±0.10. ADC values of typical meningiomas and atypical/malignant meningiomas significantly differed (P<0.001). However, the difference between peritumoral edema ADC values was not significant (P>0.05). Furthermore, the difference between the subtypes of typical meningiomas and atypical/malignant meningiomas was significant (P<0.001). Diffusion-weighted MR imaging findings of atypical/malignant meningiomas and typical meningiomas differ. Atypical/malignant meningiomas have lower intratumoral ADC values than typical meningiomas. Mean ADC values for peritumoral edema do not differ between typical and atypical meningiomas. (orig.)
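The pooled two-sample Student's t statistic used for comparisons of this kind can be sketched as follows (a generic textbook implementation; the ADC-like values in the usage are invented, not the study's patient data):

```python
import math

# Pooled two-sample Student's t statistic: compares the means of two
# independent samples assuming equal variances.
def students_t(sample1, sample2):
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2  # statistic and degrees of freedom
```

A large |t| relative to the t distribution with the returned degrees of freedom corresponds to a small P value, as in the P<0.001 result reported above.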

  15. Theory of Mind experience sampling in typical adults.

    Science.gov (United States)

    Bryant, Lauren; Coffey, Anna; Povinelli, Daniel J; Pruett, John R

    2013-09-01

    We explored the frequency with which typical adults make Theory of Mind (ToM) attributions, and under what circumstances these attributions occur. We used an experience sampling method to query 30 typical adults about their everyday thoughts. Participants carried a Personal Data Assistant (PDA) that prompted them to categorize their thoughts as Action, Mental State, or Miscellaneous at approximately 30 pseudo-random times during a continuous 10-h period. Additionally, participants noted the direction of their thought (self versus other) and degree of socializing (with people versus alone) at the time of inquiry. We were interested in the relative frequency of ToM (mental state attributions) and how prominent they were in immediate social exchanges. Analyses of multiple choice answers suggest that typical adults: (1) spend more time thinking about actions than mental states and miscellaneous things, (2) exhibit a higher degree of own- versus other-directed thought when alone, and (3) make mental state attributions more frequently when not interacting (offline) than while interacting with others (online). A significant 3-way interaction between thought type, direction of thought, and socializing emerged because action but not mental state thoughts about others occurred more frequently when participants were interacting with people versus when alone; whereas there was an increase in the frequency of both action and mental state attributions about the self when participants were alone as opposed to socializing. A secondary analysis of coded free text responses supports findings 1-3. The results of this study help to create a more naturalistic picture of ToM use in everyday life and the method shows promise for future study of typical and atypical thought processes. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. SARDAN- A program for the transients simulation in a typical PWR plant

    International Nuclear Information System (INIS)

    Mattos Santos, R.L.P. de.

    1979-10-01

    A program in FORTRAN-IV language was developed to simulate the behaviour of the primary circuit of a typical PWR plant during condition II transients, in particular uncontrolled withdrawal of a control rod set, control rod set drops and uncontrolled boron dilution. In the mathematical model adopted, the reactor core, the hot piping to which a pressurizer is coupled, the steam generator and the cold piping are considered. The results obtained in the analysis of the mentioned accidents are compared with those presented in the Final Safety Analysis Report (FSAR) of the Angra-1 reactor and are considered satisfactory. (F.E.) [pt

  17. Typical intellectual engagement and cognition in old age.

    Science.gov (United States)

    Dellenbach, Myriam; Zimprich, Daniel

    2008-03-01

    Typical Intellectual Engagement (TIE) comprises the preference to engage in cognitively demanding activities and has been proposed as a potential explanatory variable of individual differences in cognitive abilities. Little is known, however, about the factorial structure of TIE, its relations to socio-demographic variables, and its influence on intellectual functioning in old age. In the present study, data of 364 adults (65-81 years) from the Zurich Longitudinal Study on Cognitive Aging (ZULU) were used to investigate the factorial structure of TIE and to examine the hypothesis that TIE is associated more strongly with crystallized intelligence than with fluid intelligence in old age. A measurement model of a second order factor based on a structure of four correlated first order factors (Reading, Problem Solving, Abstract Thinking, and Intellectual Curiosity) evinced an excellent fit. After controlling for age, sex, and formal education, TIE was more strongly associated with crystallized intelligence than with fluid intelligence, comparable to results in younger persons. More detailed analyses showed that this association is mostly defined via Reading and Intellectual Curiosity.

  18. Contamination profile on typical printed circuit board assemblies vs soldering process

    DEFF Research Database (Denmark)

    Conseil, Helene; Jellesen, Morten Stendahl; Ambat, Rajan

    2014-01-01

    Purpose – The purpose of this paper was to analyse typical printed circuit board assemblies (PCBAs) processed by reflow, wave or selective wave soldering for typical levels of process-related residues, resulting from a specific soldering process or a combination of soldering processes. Typical solder flux residue...... structure was identified by Fourier transform infrared spectroscopy, while the concentration was measured using ion chromatography, and the electrical properties of the extracts were determined by measuring the leak current using a twin platinum electrode set-up. Localized extraction of residue was carried...

  19. Standard model baryogenesis through four-fermion operators in braneworlds

    International Nuclear Information System (INIS)

    Chung, Daniel J.H.; Dent, Thomas

    2002-01-01

    We study a new baryogenesis scenario in a class of braneworld models with low fundamental scale, which typically have difficulty with baryogenesis. The scenario is characterized by its minimal nature: the field content is that of the standard model and all interactions consistent with the gauge symmetry are admitted. Baryon number is violated via a dimension-6 proton decay operator, suppressed today by the mechanism of quark-lepton separation in extra dimensions; we assume that this operator was unsuppressed in the early Universe due to a time-dependent quark-lepton separation. The source of CP violation is the CKM matrix, in combination with the dimension-6 operators. We find that almost independently of cosmology, sufficient baryogenesis is nearly impossible in such a scenario if the fundamental scale is above 100 TeV, as required by an unsuppressed neutron-antineutron oscillation operator. The only exception producing sufficient baryon asymmetry is a scenario involving out-of-equilibrium c quarks interacting with equilibrium b quarks

  20. Security of statistical data bases: invasion of privacy through attribute correlational modeling

    Energy Technology Data Exchange (ETDEWEB)

    Palley, M.A.

    1985-01-01

    This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual database.

  1. Children's Everyday Learning by Assuming Responsibility for Others: Indigenous Practices as a Cultural Heritage Across Generations.

    Science.gov (United States)

    Fernández, David Lorente

    2015-01-01

    This chapter uses a comparative approach to examine the maintenance of Indigenous practices related with Learning by Observing and Pitching In in two generations--parent generation and current child generation--in a Central Mexican Nahua community. In spite of cultural changes and the increase of Western schooling experience, these practices persist, to different degrees, as a Nahua cultural heritage with close historical relations to the key value of cuidado (stewardship). The chapter explores how children learn the value of cuidado in a variety of everyday activities, which include assuming responsibility in many social situations, primarily in cultivating corn, raising and protecting domestic animals, health practices, and participating in family ceremonial life. The chapter focuses on three main points: (1) Cuidado (assuming responsibility for), in the Nahua socio-cultural context, refers to the concepts of protection and "raising" as well as fostering other beings, whether humans, plants, or animals, to reach their potential and fulfill their development. (2) Children learn cuidado by contributing to family endeavors: They develop attention and self-motivation; they are capable of responsible actions; and they are able to transform participation to achieve the status of a competent member of local society. (3) This collaborative participation allows children to continue the cultural tradition and to preserve a Nahua heritage at a deeper level in a community in which Nahuatl language and dress have disappeared, and people do not identify themselves as Indigenous. © 2015 Elsevier Inc. All rights reserved.

  2. Modeling of the evolution of bubble size distribution of gas-liquid flow inside a large vertical pipe. Influence of bubble coalescence and breakup models

    International Nuclear Information System (INIS)

    Liao, Yixiang; Lucas, Dirk

    2011-01-01

    The range of gas-liquid flow applications in today's technology is immensely wide. Important examples can be found in chemical reactors, boiling and condensation equipment as well as nuclear reactors. In gas-liquid flows, the bubble size distribution plays an important role in the phase structure and interfacial exchange behaviors. It is therefore necessary to take into account the dynamic change of the bubble size distribution to get good predictions in CFD. An efficient 1D Multi-Bubble-Size-Class Test Solver was introduced in Lucas et al. (2001) for the simulation of the development of the flow structure along a vertical pipe. The model considers a large number of bubble classes. It solves the radial profiles of liquid and gas velocities, bubble-size-class resolved gas fraction profiles as well as turbulence parameters on the basis of the bubble size distribution present at the given axial position. The evolution of the flow along the height is assumed to be solely caused by the progress of bubble coalescence and breakup, resulting in a bubble size distribution that changes in the axial direction. In this model, the bubble coalescence and breakup models are very important for reasonable predictions of the bubble size distribution. Many bubble coalescence and breakup models have been proposed in the literature. However, some obvious discrepancies exist among the models; for example, the daughter bubble size distributions predicted by different bubble breakup models differ greatly, as reviewed in our previous publications (Liao and Lucas, 2009a; 2010). Therefore, it is necessary to compare and evaluate typical bubble coalescence and breakup models that have been commonly used in the literature. Thus, this work aims to compare several typical bubble coalescence and breakup models and to discuss in detail the ability of the Test Solver to predict the evolution of the bubble size distribution. (orig.)

  3. The accident consequence model of the German safety study

    International Nuclear Information System (INIS)

    Huebschmann, W.

    1977-01-01

    The accident consequence model essentially describes: (a) the diffusion in the atmosphere, and deposition on the soil, of radioactive material released from the reactor into the atmosphere; and (b) the radiation exposure and health consequences of the persons affected. It is used to calculate (c) the number of persons suffering acute or late damage, taking into account possible countermeasures such as relocation or evacuation, and (d) the total risk to the population from the various types of accident. The model and its underlying parameters and assumptions are described. The bone marrow dose distribution is shown for the case of late overpressure containment failure, which is discussed in the paper of Heuser/Kotthoff, combined with four typical weather conditions. The probability distribution functions for acute mortality, late incidence of cancer and genetic damage are evaluated, assuming a characteristic population distribution. The aim of these calculations is, first, to present some results of the consequence model as an example and, second, to identify problems which may need to be evaluated in more detail in a second phase of the study. (orig.) [de
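The way a consequence model of this kind combines accident branches into risk curves can be sketched generically: each combination of release category and weather condition contributes a (frequency, consequence) pair, and the risk curve gives the frequency of exceeding each consequence level. The frequencies and consequence values below are invented placeholders, not the study's results.

```python
# Complementary cumulative frequency curve from accident/weather branches.
# branches: list of (frequency_per_year, consequence) pairs; the curve maps
# each consequence level to the total frequency of exceeding it.
def exceedance_curve(branches, levels):
    return {lvl: sum(f for f, c in branches if c > lvl) for lvl in levels}
```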

  4. Three Dimensional Characterization of Typical Urban and Desert Particles: Implications to Particle Optics

    Science.gov (United States)

    Goel, V.; Mishra, S.; Ahlawat, A. S.; Sharma, C.; Kotnala, R. K.

    2017-12-01

    Aerosol particles are generally considered as chemically homogeneous spheres in the retrieval techniques of ground and space borne observations which is not accurate approach and can lead to erroneous observations. For better simulation of optical and radiative properties of aerosols, a good knowledge of aerosol's morphology, chemical composition and internal structure is essential. Till date, many studies have reported the morphology and chemical composition of particles but very few of them provide internal structure and spatial distribution of different chemical species within the particle. The research on the effect of particle internal structure and its contribution to particle optics is extremely limited. In present work, we characterize the PM10 particles collected form typical arid (the Thar Desert, Rajasthan, India) and typical urban (New Delhi, India) environment using microscopic techniques. The particles were milled several times to investigate their internal structure. The EDS (Energy Dispersive X-ray Spectroscopy) spectra were recorded after each milling to check the variation in the chemical composition. In arid environment, Fe, Ca, C, Al, and Mg rich shell was observed over a Si rich particle whereas in urban environment, shell of Hg, Ag, C and N was observed over a Cu rich particle. Based on the observations, different model shapes [homogenous sphere and spheroid; heterogeneous sphere and spheroid; core shell] have been considered for assessing the associated uncertainties with the routine modeling of optical properties where volume equivalent homogeneous sphere approximation is considered. The details will be discussed during presentation.

  5. Explaining the Cosmic-Ray e⁺/(e⁻ + e⁺) and p̄/p Ratios Using a Steady-State Injection Model

    International Nuclear Information System (INIS)

    Lee, S.H.; Kamae, T.; Baldini, L.; Giordano, F.; Grondin, M.H.; Latronico, L.; Lemoine-Goumard, M.; Sgro, C.; Tanaka, T.; Uchiyama, Y.

    2011-01-01

    We present a model of cosmic ray (CR) injection into Galactic space based on recent γ-ray observations of supernova remnants (SNRs) and pulsar wind nebulae (PWNe) by the Fermi Large Area Telescope (Fermi) and imaging atmospheric Cherenkov telescopes (IACTs). Steady-state injection of nuclear particles and electrons (e⁻) from the Galactic ensemble of SNRs, and of electrons and positrons (e⁺) from the Galactic ensemble of PWNe, is assumed, with their injection spectra inferred under the guidance of γ-ray observations and recent developments in evolution and emission models. The ensembles of SNRs and PWNe are assumed to share the same spatial distributions. An assessment of the possible secondary CR contribution from dense molecular clouds interacting with SNRs is also given. CR propagation in interstellar space is handled by GALPROP. Different underlying source distribution models and Galaxy halo sizes are employed to estimate the systematic uncertainty of the model. We show that this observation-based model reproduces the positron fraction e⁺/(e⁻ + e⁺) and the antiproton-to-proton ratio (p̄/p) reported by PAMELA and other previous missions reasonably well, without calling for any speculative sources. A discrepancy remains, however, between the total e⁻ + e⁺ spectrum measured by Fermi and our model below ∼20 GeV, for which the potential causes are discussed.
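The bookkeeping behind the positron fraction in a model of this kind can be sketched in a few lines. Following the abstract, SNRs inject electrons only, while PWNe (and secondary production) inject electrons and positrons in equal numbers; the per-sign flux values in the usage are invented for illustration.

```python
# Toy combination of injected flux components into the positron fraction.
# snr_e_minus: e- flux from SNRs; pwn_per_sign and secondary_per_sign: the
# per-sign (equal e+ and e-) fluxes from PWNe and secondary production.
def positron_fraction(snr_e_minus, pwn_per_sign, secondary_per_sign):
    e_plus = pwn_per_sign + secondary_per_sign
    e_minus = snr_e_minus + pwn_per_sign + secondary_per_sign
    return e_plus / (e_minus + e_plus)
```

In this toy form, raising the PWN contribution raises the fraction, which is why PWN-like pair sources can reproduce a rising positron fraction without speculative new sources.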

  6. Bayesian Analysis of Multilevel Probit Models for Data with Friendship Dependencies

    Science.gov (United States)

    Koskinen, Johan; Stenberg, Sten-Ake

    2012-01-01

    When studying educational aspirations of adolescents, it is unrealistic to assume that the aspirations of pupils are independent of those of their friends. Considerable attention has also been given to the study of peer influence in the educational and behavioral literature. Typically, in empirical studies, the friendship networks have either been…

  7. Sources of Sodium in the Lunar Exosphere: Modeling Using Ground-Based Observations of Sodium Emission and Spacecraft Data of the Plasma

    Science.gov (United States)

    Sarantos, Menelaos; Killen, Rosemary M.; Sharma, A. Surjalal; Slavin, James A.

    2009-01-01

    Observations of the equatorial lunar sodium emission are examined to quantify the effect of precipitating ions on source rates for the Moon's exospheric volatile species. Using a model of exospheric sodium transport under lunar gravity forces, the measured emission intensity is normalized to a constant lunar phase angle to minimize the effect of different viewing geometries. Daily averages of the solar Lyman alpha flux and ion flux are used as the input variables for photon-stimulated desorption (PSD) and ion sputtering, respectively, while impact vaporization due to the micrometeoritic influx is assumed constant. Additionally, a proxy term proportional to both the Lyman alpha and to the ion flux is introduced to assess the importance of ion-enhanced diffusion and/or chemical sputtering. The combination of particle transport and constrained regression models demonstrates that, assuming sputtering yields that are typical of protons incident on lunar soils, the primary effect of ion impact on the surface of the Moon is not direct sputtering but rather an enhancement of the PSD efficiency. It is inferred that the ion-induced effects must double the PSD efficiency for flux typical of the solar wind at 1 AU. The enhancement in relative efficiency of PSD due to the bombardment of the lunar surface by the plasma sheet ions during passages through the Earth's magnetotail is shown to be approximately two times higher than when it is due to solar wind ions. This leads to the conclusion that the priming of the surface is more efficiently carried out by the energetic plasma sheet ions.
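The constrained source-rate model described above, PSD driven by Lyman alpha, direct sputtering driven by ion flux, a mixed proxy term for ion-enhanced PSD, and a constant micrometeoritic term, can be written schematically as follows (the coefficient names and test values are hypothetical, not the paper's fitted parameters):

```python
# Schematic total sodium source rate as a constrained linear combination of
# the drivers discussed above. Coefficients are hypothetical fit parameters.
def sodium_source_rate(lyman_alpha, ion_flux, c_psd, c_sput, c_mix, c_impact):
    return (c_psd * lyman_alpha               # photon-stimulated desorption
            + c_sput * ion_flux               # direct ion sputtering
            + c_mix * lyman_alpha * ion_flux  # ion-enhanced PSD proxy term
            + c_impact)                       # constant impact vaporization
```

A regression of observed emission against daily Lyman alpha and ion fluxes then constrains the relative sizes of the coefficients; the paper's conclusion corresponds to the mixed term dominating over direct sputtering.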

  8. Numerical Simulations for a Typical Train Fire in China

    Directory of Open Access Journals (Sweden)

    W. K. Chow

    2011-01-01

    Full Text Available Railway is the key transport means in China including the Mainland, Taiwan, and Hong Kong. Consequent to so many big arson and accidental fires in the public transport systems including trains and buses, fire safety in passenger trains is a concern. Numerical simulations with Computational Fluid Dynamics on identified fire scenarios with typical train compartments in China will be reported in this paper. The heat release rate of the first ignited item was taken as the input parameter. The mass lost rate of fuel vapor of other combustibles was estimated to predict the resultant heat release rates by the combustion models in the software. Results on air flow, velocity vectors, temperature distribution, smoke layer height, and smoke spread patterns inside the train compartment were analyzed. The results are useful for working out appropriate fire safety measures for train vehicles and determining the design fire for subway stations and railway tunnels.

  9. Thermohydraulic model for a typical steam generator of PWR Nuclear Power Plants

    International Nuclear Information System (INIS)

    Braga, C.V.M.

    1980-06-01

    A model for thermohydraulic simulation at steady state is developed, considering the secondary flow divided into two individually homogeneous parts, with heat and mass transfer between them. The quality of the two-phase mixture fed to the turbine is fixed and, based on this value, the feedwater pressure is determined. The recirculation ratio is intrinsically determined. Based on this model, the GEVAP code was developed in Fortran-IV language. The model is applied to the steam generator of the Angra II nuclear power plant and the results are compared with KWU's design parameters, being considered satisfactory. (Author) [pt
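The remark that the recirculation ratio is intrinsically determined follows from a standard mass balance once the exit quality is fixed (this is textbook recirculating steam-generator analysis, not a quote from the report): of the riser mass flow at quality x, only the fraction x leaves as steam, so the ratio of circulated flow to delivered steam is 1/x.

```python
# Mass balance behind a fixed exit quality: if the mixture leaves the riser
# with quality x, steam flow = x * riser flow, so riser_flow/steam_flow = 1/x.
def recirculation_ratio(exit_quality):
    if not 0.0 < exit_quality <= 1.0:
        raise ValueError("quality must be in (0, 1]")
    return 1.0 / exit_quality
```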

  10. A real case simulation of the air-borne effluent dispersion on a typical summer day under CDA scenario for PFBR using an advanced meteorological and dispersion model

    International Nuclear Information System (INIS)

    Srinivas, C.V; Venkatesan, R.; Bagavath Singh, A.; Somayaji, K.M.

    2003-11-01

    Environmental concentrations and radioactive doses within and beyond the site boundary for the CDA situation of the PFBR have been estimated using an Advanced Radiological Impact Prediction system for a real atmospheric situation on a typical summer day in May 2003. The system consists of a meso-scale atmospheric prognostic model, MM5, coupled with a random-walk Lagrangian particle dispersion model, FLEXPART, for the simulation of transport, diffusion and deposition of radionuclides. The details of the modeling system, its capabilities and its various features are presented. The model has been validated against site and regional meteorological observations from IMD for the simulated coastal atmospheric features of the land-sea breeze, the development of the TIBL, etc. Analysis of the dose distribution in a situation corresponding to the atmospheric conditions on the chosen day shows that the doses for the CDA through different pathways are 8 times lower than the earlier estimates made according to regulatory requirements using the Gaussian Plume Model (GPM) approach. However, for stack releases, a higher dose than was reported earlier occurred beyond the site boundary in the 2-4 km range under stable and fumigation conditions. The doses due to stack releases under these conditions maintained almost the same value in the 3 to 10 km range and decreased thereafter. Deposition velocities computed from the radionuclide species, wind speed and surface properties were two orders of magnitude lower than the values used earlier and hence gave more realistic estimates of the ground-deposited activity. The study has made it possible to simulate the more complex meteorological situation actually present at the site of interest and the associated spatial distribution of radiological impact around Kalpakkam. In order to draw meaningful conclusions that can be compared with regulatory estimates, a future study will be undertaken to simulate the dispersion under extreme meteorological situations, which could possibly be worse than

  11. Typically Female Features in Hungarian Shopping Tourism

    Directory of Open Access Journals (Sweden)

    Gábor Michalkó

    2006-06-01

    Full Text Available Although shopping has been long acknowledged as a major tourist activity, the extent and characteristics of shopping tourism have only recently become the subject of academic research and discussion. As a contribution to this field of knowledge, the paper presents the characteristics of shopping tourism in Hungary, and discusses the typically female features of outbound Hungarian shopping tourism. The research is based on a survey of 2473 Hungarian tourists carried out in 2005. As the findings of the study indicate, while female respondents were altogether more likely to be involved in tourist shopping than male travellers, no significant difference was experienced between the genders concerning the share of shopping expenses compared to their total travel budget. In their shopping behaviour, women were typically affected by price levels, and they proved to be both more selfish and more altruistic than men by purchasing more products for themselves and for their family members. The most significant differences between men and women were found in their product preferences as female tourists were more likely to purchase typically feminine goods such as clothes, shoes, bags and accessories, in the timing of shopping activities while abroad, and in the information sources used by tourists, since interpersonal influences such as friends’, guides’ and fellow travellers’ recommendations played a higher role in female travellers’ decisions.

  12. Lotka-Volterra competition models for sessile organisms.

    Science.gov (United States)

    Spencer, Matthew; Tanner, Jason E

    2008-04-01

    Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
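The continuous-time Lotka-Volterra competition dynamics underlying the model can be sketched with a forward-Euler integrator (a generic implementation; the growth rates and competition coefficients in the usage are illustrative, not the fitted coral-reef values):

```python
# Lotka-Volterra competition: dx_i/dt = r_i * x_i * (1 - sum_j a[i][j] * x_j).
# x: abundances; r: intrinsic growth rates; a: competition coefficient matrix.
def lv_step(x, r, a, dt):
    n = len(x)
    return [
        x[i] + dt * r[i] * x[i] * (1.0 - sum(a[i][j] * x[j] for j in range(n)))
        for i in range(n)
    ]

def simulate(x, r, a, dt, steps):
    for _ in range(steps):
        x = lv_step(x, r, a, dt)
    return x
```

With a single state this reduces to logistic growth toward carrying capacity; the state-dependence of the per-capita rates is exactly what distinguishes this model from Markov models with abundance-independent transition probabilities.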

  13. An enhanced tokamak startup model

    Science.gov (United States)

    Goswami, Rajiv; Artaud, Jean-François

    2017-01-01

    The startup of tokamaks has been examined in the past in varying degrees of detail. This phase typically involves the burnthrough of impurities and the subsequent rampup of plasma current. A zero-dimensional (0D) model is most widely used, in which the time evolution of volume-averaged quantities determines the detailed balance between the input and loss of particles and power. But, being 0D, such studies do not take into consideration the co-evolution of plasma size and shape, and instead assume an unchanging minor and major radius. However, it is known that the plasma position and its minor radius can change appreciably as the plasma evolves in time to fill the entire available volume. In this paper, an enhanced model for tokamak startup is introduced, which for the first time takes into account the evolution of plasma geometry during this brief but highly dynamic period by including realistic one-dimensional (1D) effects within the broad 0D framework. In addition, the effect of runaway electrons (REs) has been incorporated. The paper demonstrates that the inclusion of plasma cross-section evolution in conjunction with REs plays an important role in the formation and development of tokamak startup. The model is benchmarked against experimental results from the ADITYA tokamak.
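A 0D power balance of the kind used in burnthrough modelling can be caricatured as follows. This is only a toy illustration of the volume-averaged balance between heating input and radiative loss; the radiation scaling and all coefficients are invented placeholders, not the enhanced model's equations.

```python
# Toy 0D burnthrough balance: the volume-averaged electron temperature rises
# while heating input exceeds the (impurity) radiation loss. The sqrt(Te)
# loss scaling and all coefficients are illustrative stand-ins.
def evolve_te(te, p_heat, rad_coeff, heat_capacity, dt, steps):
    for _ in range(steps):
        p_rad = rad_coeff * te ** 0.5        # stand-in radiation loss term
        te += dt * (p_heat - p_rad) / heat_capacity
    return te
```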

  14. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake

    Directory of Open Access Journals (Sweden)

    Chuan Luo

    2015-09-01

    Full Text Available The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both calibration and validation. Additionally, the sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. For TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Calibration was then performed on these sensitive parameters. TN loading produced satisfactory results for both calibration and validation, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.

  15. Computational fluid dynamics modeling of rope-guided conveyances in two typical kinds of shaft layouts.

    Directory of Open Access Journals (Sweden)

    Renyuan Wu

    Full Text Available The behavior of rope-guided conveyances is so complicated that rope-guided hoisting systems have not yet been thoroughly understood. In this paper, with user-defined functions loaded, ANSYS FLUENT 14.5 was employed to simulate the lateral motion of rope-guided conveyances in two typical kinds of shaft layouts. With a rope-guided mine elevator and mine cages taken into account, the results show that the lateral aerodynamic buffeting force is much larger than the Coriolis force, and that the side aerodynamic force has the same order of magnitude as the Coriolis force. The lateral aerodynamic buffeting forces should also be considered, especially when the conveyance moves along the ventilation air direction. The simulation shows that conveyances of more similar size can weaken the transverse aerodynamic buffeting effect.

  16. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X_{1},...,X_{T} given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...

  17. The experimental electron mean-free-path in Si under typical (S)TEM conditions

    International Nuclear Information System (INIS)

    Potapov, P.L.

    2014-01-01

    The electron mean-free-path in Si was measured by EELS using a test structure with certified dimensions as a calibration standard. In good agreement with previous CBED measurements, the mean-free-path is 150 nm for 200 keV and 179 nm for 300 keV primary electrons at large collection angles. These values are accurately predicted by the model of Iakoubovskii et al., while the model of Malis et al. incorporated in common microscopy software underestimates the mean-free-path by at least 15%. Correspondingly, the thickness of TEM samples reported in many studies of Si-based materials over recent decades might be noticeably underestimated. - Highlights: • The electron inelastic mean-free-path in Si is measured for typical (S)TEM conditions. • These reference values allow for accurate determination of the lamella thickness by EELS. • The theoretical model by Malis et al. underestimates the mean-free-path values

  18. CFD model of diabatic annular two-phase flow using the Eulerian–Lagrangian approach

    International Nuclear Information System (INIS)

    Li, Haipeng; Anglart, Henryk

    2015-01-01

    Highlights: • A CFD model of annular two-phase flow with an evaporating liquid film has been developed. • A two-dimensional liquid film model is developed, assuming that the liquid film is sufficiently thin. • The liquid film model is coupled to the gas core flow, which is represented using the Eulerian–Lagrangian approach. - Abstract: A computational fluid dynamics (CFD) model of annular two-phase flow with an evaporating liquid film has been developed based on the Eulerian–Lagrangian approach, with the objective of predicting the dryout occurrence. Because the liquid film is sufficiently thin in diabatic annular flow at pre-dryout conditions, it is assumed that the flow in the wall-normal direction can be neglected and that the spatial gradients of the dependent variables tangential to the wall are negligible compared to those in the wall-normal direction. Subsequently, the transport equations of mass, momentum and energy for the liquid film are integrated in the wall-normal direction to obtain two-dimensional equations, with all the liquid film properties depth-averaged. The liquid film model is coupled to the gas core flow, which is currently represented using the Eulerian–Lagrangian technique. The mass, momentum and energy transfers between the liquid film, gas, and entrained droplets have been taken into account. The resulting unified model for annular flow has been applied to steam–water flow with conditions typical of a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show favorable agreement with the experimental data, with the potential to predict the dryout occurrence based on criteria of critical film thickness or critical film flow rate.
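
    The depth-averaged film description lends itself to a compact mass-balance sketch. The one-dimensional version below marches the film flow rate along the channel and flags dryout where it vanishes; all numbers are invented, and the paper's model is two-dimensional and coupled to the simulated gas/droplet core.

```python
# One-dimensional mass balance sketch for a thin evaporating liquid
# film: d(Gamma)/dz = deposition - entrainment - q''/h_fg.

def film_flow_rate(gamma_in=0.05, q_wall=5e5, h_fg=1.5e6,
                   dep=0.002, ent=0.004, dz=0.01, length=3.0):
    """March the film flow rate along the channel; dryout is flagged
    where the film flow rate reaches zero."""
    gamma = gamma_in  # film mass flow rate per unit perimeter, kg/(m s)
    z = 0.0
    while z < length:
        gamma += dz * (dep - ent - q_wall / h_fg)
        z += dz
        if gamma <= 0.0:
            return 0.0, z  # dryout position
    return gamma, None

gamma_out, z_dry = film_flow_rate()
```

    With evaporation and net entrainment exceeding deposition, the film thins steadily and the critical-film-flow-rate criterion is met partway along the channel.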

  19. A mathematical model of the growth of uterine myomas.

    Science.gov (United States)

    Chen, C Y; Ward, J P

    2014-12-01

    Uterine myomas or fibroids are common, benign smooth muscle tumours that can grow to 10 cm or more in diameter and are routinely removed surgically. They are typically slow-growing, well-vascularised, spherical tumours that, on a macro-scale, are a structurally uniform, hard elastic material. We present a multi-phase mathematical model of a fully vascularised myoma growing within a surrounding elastic tissue. Adopting a continuum approach, the model assumes the conservation of mass and momentum of four phases, namely cells/collagen, extracellular fluid, arterial and venous phases. The cell/collagen phase is treated as a poro-elastic material, based on a linear stress-strain relationship, and Darcy's law is applied to describe flow in the extracellular fluid and the two vascular phases. The supply of extracellular fluid is dependent on the capillary flow rate and mean capillary pressure expressed in terms of the arterial and venous pressures. Cell growth and division is limited to the myoma domain and dependent on the local stress in the material. The resulting model consists of a system of nonlinear partial differential equations with two moving boundaries. Numerical solutions of the model successfully reproduce qualitatively the clinically observed three-phase "fast-slow-fast" growth profile that is typical for myomas. The results suggest that this growth profile requires stress-induced resistance to growth by the surrounding tissue and a switch-like cell growth response to stress. Analysis of large-time solutions reveals that exponential growth results while there is a functioning vasculature throughout the myoma; otherwise, power-law growth is predicted. An extensive survey of the effect of parameters on model solutions is also presented, and in particular, the enhanced growth caused by factors such as oestrogen is predicted by the model.

  20. Typical load shapes for six categories of Swedish commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Noren, C.

    1997-01-01

    In co-operation with several Swedish electricity suppliers, typical load shapes have been developed for six categories of commercial buildings located in the south of Sweden. The categories included in the study are: hotels, warehouses/grocery stores, schools with no kitchen, schools with kitchen, office buildings, and health buildings. Load shapes are developed for different mean daily outdoor temperatures and for different day types, normally standard weekdays and standard weekends. The load shapes are presented as non-dimensional, normalized 1-hour loads. All measured loads for an object are divided by the object's mean load during the measuring period, and typical load shapes are developed for each category of buildings. This kept errors lower compared to expressing loads in W/m² terms. Typical daytime (9 a.m. - 5 p.m.) standard deviations are 7-10% of the mean values for standard weekdays, but during very cold or warm weather conditions single objects can deviate from the typical load shape. On weekends errors are higher and, owing to very different activity levels in the buildings, it is difficult to develop weekend load shapes with good accuracy. The method presented is very easy to use for similar studies, and no building simulation programs are needed. If more load data is available, a good way to lower the errors is to make sure that every category consists only of objects with the same activity level, both on weekdays and weekends. To make it easier to use the load shapes, Excel load shape workbooks have been developed, where it is even possible to compare typical load shapes with measured data. 23 refs, 53 figs, 20 tabs
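
    The normalization described above (each object's hourly load divided by its mean load over the measuring period) can be sketched as follows, with an invented 24-hour office-type profile:

```python
# Normalizing hourly loads by the period mean, as done for the typical
# load shapes (data values are invented for illustration).

hourly_load_kw = [55, 52, 50, 49, 50, 58, 72, 90, 105, 110, 112, 111,
                  108, 110, 109, 107, 100, 88, 75, 68, 63, 60, 58, 56]

mean_load = sum(hourly_load_kw) / len(hourly_load_kw)
shape = [round(load / mean_load, 3) for load in hourly_load_kw]

# The non-dimensional shape averages to 1 by construction, so objects
# of different size can be pooled within one building category.
print(round(sum(shape) / len(shape), 2))  # -> 1.0
```

    Because the shape is dimensionless, a category's typical shape can be rescaled by any individual building's mean load to estimate its hourly profile.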

  1. Incoherent SSI Analysis of Reactor Building using 2007 Hard-Rock Coherency Model

    International Nuclear Information System (INIS)

    Kang, Joo-Hyung; Lee, Sang-Hoon

    2008-01-01

    Many strong earthquake recordings show the response motions at building foundations to be less intense than the corresponding free-field motions. To account for these phenomena, the concept of spatial variation, or wave incoherence, was introduced. Several approaches for its application to practical analysis and design as part of the soil-structure interaction (SSI) effect have been developed. However, conventional wave incoherency models did not reflect the characteristics of earthquake data from hard-rock sites, and their application to practical nuclear structures on hard-rock sites was not sufficiently justified. This paper focuses on the response impact of the hard-rock coherency model proposed in 2007 on the incoherent SSI analysis results for a nuclear power plant (NPP) structure. A typical reactor building of a pressurized water reactor (PWR) type NPP is modeled with both surface and embedded foundations. The model is also assumed to be located on medium-hard rock and hard-rock sites. The SSI analysis results are obtained and compared for coherent and incoherent input motions. The structural responses considering rocking and torsion effects are also investigated

  2. Typical and Atypical Development of Basic Numerical Skills in Elementary School

    Science.gov (United States)

    Landerl, Karin; Kolle, Christina

    2009-01-01

    Deficits in basic numerical processing have been identified as a central and potentially causal problem in developmental dyscalculia; however, so far not much is known about the typical and atypical development of such skills. This study assessed basic number skills cross-sectionally in 262 typically developing and 51 dyscalculic children in…

  3. TYPICAL FORMS OF LIVER PATHOLOGY IN CHILDREN

    Directory of Open Access Journals (Sweden)

    Peter F. Litvitskiy

    2018-01-01

    Full Text Available This lecture for the system of postgraduate medical education analyzes the causes, types, key links of pathogenesis, and manifestations of the main typical forms of liver pathology — liver failure, hepatic coma, jaundice, cholemia, acholia, cholelithiasis, and their complications in children. To test retention of the lecture material, case problems and multiple-choice tests are provided.

  4. Reasons People Surrender Unowned and Owned Cats to Australian Animal Shelters and Barriers to Assuming Ownership of Unowned Cats.

    Science.gov (United States)

    Zito, Sarah; Morton, John; Vankan, Dianne; Paterson, Mandy; Bennett, Pauleen C; Rand, Jacquie; Phillips, Clive J C

    2016-01-01

    Most cats surrendered to nonhuman animal shelters are identified as unowned, and the surrender reason for these cats is usually simply recorded as "stray." A cross-sectional study was conducted with people surrendering cats to 4 Australian animal shelters. Surrenderers of unowned cats commonly gave surrender reasons relating to concern for the cat and his/her welfare. Seventeen percent of noncaregivers had considered adopting the cat. Barriers to assuming ownership most commonly related to responsible ownership concerns. Unwanted kittens commonly contributed to the decision to surrender for both caregivers and noncaregivers. Nonowners gave more surrender reasons than owners, although many owners also gave multiple surrender reasons. These findings highlight the multifactorial nature of the decision-making process leading to surrender and demonstrate that recording only one reason for surrender does not capture the complexity of the surrender decision. Collecting information about multiple reasons for surrender, particularly reasons for surrender of unowned cats and barriers to assuming ownership, could help to develop strategies to reduce the number of cats surrendered.

  5. A tissue adaptation model based on strain-dependent collagen degradation and contact-guided cell traction.

    Science.gov (United States)

    Heck, T A M; Wilson, W; Foolen, J; Cilingir, A C; Ito, K; van Donkelaar, C C

    2015-03-18

    Soft biological tissues adapt their collagen network to the mechanical environment. Collagen remodeling and cell traction are both involved in this process. The present study presents a collagen adaptation model which includes strain-dependent collagen degradation and contact-guided cell traction. Cell traction is determined by the prevailing collagen structure and is assumed to strive for tensional homeostasis. In addition, collagen is assumed to mechanically fail if it is over-strained. Care is taken to use principally measurable and physiologically meaningful relationships. This model is implemented in a fibril-reinforced biphasic finite element model for soft hydrated tissues. The versatility and limitations of the model are demonstrated by corroborating the predicted transient and equilibrium collagen adaptation under distinct mechanical constraints against experimental observations from the literature. These experiments include overloading of pericardium explants until failure, static uniaxial and biaxial loading of cell-seeded gels in vitro and shortening of periosteum explants. In addition, remodeling under hypothetical conditions is explored to demonstrate how collagen might adapt to small differences in constraints. Typical aspects of all essentially different experimental conditions are captured quantitatively or qualitatively. Differences between predictions and experiments as well as new insights that emerge from the present simulations are discussed. This model is anticipated to evolve into a mechanistic description of collagen adaptation, which may assist in developing load-regimes for functional tissue engineered constructs, or may be employed to improve our understanding of the mechanisms behind physiological and pathological collagen remodeling. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Lipoma arborescens: Comparison of typical and atypical disease presentations

    International Nuclear Information System (INIS)

    Howe, B.M.; Wenger, D.E.

    2013-01-01

    Aim: To determine whether the aetiology differed between typical cases of lipoma arborescens with unilateral knee involvement and atypical cases involving joints other than the knee, polyarticular disease, and disease outside of the knee joint. Materials and methods: Cases of lipoma arborescens involving the knee joint were evaluated for the distribution of the disease and severity of degenerative arthritis. Joints other than the knee were evaluated for the presence and severity of degenerative arthritis, and the distribution was classified as either intra-articular, extra-articular, or both. Clinical history was reviewed for patient age at presentation, a history of inflammatory arthritis, diabetes mellitus, and known steroid use. Fisher's exact test was used to determine whether there was a statistically significant difference between typical and atypical presentations of the disease. Results: Lipoma arborescens was identified in 45 joints in 39 patients. Twenty-eight patients were classified as “typical” and 11 patients had “atypical” disease. There was no significant difference in age at presentation, presence of degenerative arthritis, or known inflammatory arthritis when comparing typical and atypical presentations of the disease. Conclusion: Twenty-eight percent of patients in the present study had atypical presentation of lipoma arborescens with multifocal lipoma arborescens or disease in joints other than the knee. There was no significant difference in age at presentation, presence of degenerative arthritis, or known inflammatory arthritis when comparing typical and atypical presentations of the disease. Of the 39 patients, only three had no evidence of degenerative arthritis, which suggests that many cases of lipoma arborescens are secondary to chronic reactive change in association with degenerative arthritis

  7. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    Science.gov (United States)

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
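
    A minimal sketch of evaluating the MV criterion, assuming a balanced two-point design and an illustrative Gaussian-type weight function (the paper's applet handles general designs and standard weight families):

```python
# Evaluating the MV criterion (maximum of the estimate variances) for
# a two-point design in a weighted simple linear regression on [a, b].
# The weight function and design grid below are illustrative only.
import math

def mv_criterion(x1, x2, w=lambda x: math.exp(-x * x)):
    # Information matrix of a weighted linear model at a balanced
    # two-point design {x1, x2} with weights 1/2, 1/2.
    m11 = 0.5 * (w(x1) + w(x2))
    m12 = 0.5 * (w(x1) * x1 + w(x2) * x2)
    m22 = 0.5 * (w(x1) * x1 ** 2 + w(x2) * x2 ** 2)
    det = m11 * m22 - m12 * m12
    var_intercept = m22 / det  # variance of the intercept estimate
    var_slope = m11 / det      # variance of the slope estimate
    return max(var_intercept, var_slope)

# Grid search for the best balanced two-point design on [0, 2].
best = min((mv_criterion(i / 50, j / 50), i / 50, j / 50)
           for i in range(101) for j in range(i + 1, 101))
```

    Scanning candidate support points and keeping the design with the smallest maximum variance mimics what such a tool reports: the support points and (here, fixed) design weights.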

  8. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    Efficient and cost effective transportation and logistics plays a vital role in the supply chains of the modern world’s manufacturers. Global distribution of goods is a very complicated matter as it involves many different distinct planning problems. The focus of this presentation is to demonstrate...... a number of important issues which have been identified when addressing the Distribution Network Design problem from a modelling angle. More specifically, we present an analysis of the research which has been performed in utilizing operational research in developing and optimising distribution systems....

  9. SSI response of a typical shear wall structure

    International Nuclear Information System (INIS)

    Johnson, J.J.; Maslenikov, O.R.; Schewe, E.C.

    1985-01-01

    The seismic response of a typical shear structure in a commercial nuclear power plant was investigated for a series of site and foundation conditions using best estimate and design procedures. The structure selected is a part of the Zion AFT complex which is a connected group of reinforced concrete shear wall buildings, typical of nuclear power plant structures. Comparisons between best estimate responses quantified the effects of placing the structure on different sites and founding it in different manners. Calibration factors were developed by comparing simplified SSI design procedure responses to responses calculated by best estimate procedures. Nineteen basic cases were analyzed - each case was analyzed for ten earthquakes targeted to the NRC R.G. 1.60 design response spectra. The structure is a part of the Zion auxiliary-fuel handling turbine building (AFT) complex to the Zion nuclear power plants. (orig./HP)

  10. Parameter estimation for mathematical models of a nongastric H+(Na+)-K+(NH4+)-ATPase

    Science.gov (United States)

    Nadal-Quirós, Mónica; Moore, Leon C.

    2015-01-01

    The role of nongastric H+-K+-ATPase (HKA) in ion homeostasis of macula densa (MD) cells is an open question. To begin to explore this issue, we developed two mathematical models that describe ion fluxes through a nongastric HKA. One model assumes a 1H+:1K+-per-ATP stoichiometry; the other assumes a 2H+:2K+-per-ATP stoichiometry. Both models include Na+ and NH4+ competitive binding with H+ and K+, respectively, a characteristic observed in vitro and in situ. Model rate constants were obtained by minimizing the distance between model and experimental outcomes. Both 1H+(1Na+):1K+(1NH4+)-per-ATP and 2H+(2Na+):2K+(2NH4+)-per-ATP models fit the experimental data well. Using both models, we simulated ion net fluxes as a function of cytosolic or luminal ion concentrations typical for the cortical thick ascending limb and MD region. We observed that 1) K+ and NH4+ flowed in the lumen-to-cytosol direction, 2) there was competitive behavior between luminal K+ and NH4+ and between cytosolic Na+ and H+, 3) ion fluxes were highly sensitive to changes in cytosolic Na+ or H+ concentrations, and 4) the transporter does mostly Na+/K+ exchange under physiological conditions. These results support the concept that nongastric HKA may contribute to Na+ and pH homeostasis in MD cells. Furthermore, in both models, H+ flux reversed at a luminal pH that was <5.6. Such reversal led to Na+/H+ exchange for a luminal pH of <2 and 4 in the 1:1-per-ATP and 2:2-per-ATP models, respectively. This suggests a novel role of nongastric HKA in cell Na+ homeostasis in the more acidic regions of the renal tubules. PMID:26109090
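
    The competitive binding of NH4+ with K+ described above can be illustrated with a standard competitive-inhibition flux expression; the constants below are invented, not the fitted rate constants of either stoichiometry model.

```python
# Competitive-binding flux sketch: luminal NH4+ competes with K+ for
# the extracellular binding site, as in both stoichiometry models.
# Constants below are invented for illustration.

def k_flux(k_mM, nh4_mM, j_max=1.0, km_k=1.0, ki_nh4=2.0):
    """Michaelis-Menten uptake of K+ with NH4+ as a competitive
    inhibitor: raising NH4+ increases the apparent Km for K+."""
    return j_max * k_mM / (k_mM + km_k * (1.0 + nh4_mM / ki_nh4))

no_competition = k_flux(1.0, 0.0)     # 1/(1+1) = 0.5
with_competition = k_flux(1.0, 4.0)   # 1/(1+3) = 0.25
```

    The same functional form, with Na+ competing against cytosolic H+, captures why the simulated fluxes are so sensitive to cytosolic Na+ and H+ concentrations.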

  11. Parameter estimation for mathematical models of a nongastric H+(Na+)-K(+)(NH4+)-ATPase.

    Science.gov (United States)

    Nadal-Quirós, Mónica; Moore, Leon C; Marcano, Mariano

    2015-09-01

    The role of nongastric H(+)-K(+)-ATPase (HKA) in ion homeostasis of macula densa (MD) cells is an open question. To begin to explore this issue, we developed two mathematical models that describe ion fluxes through a nongastric HKA. One model assumes a 1H(+):1K(+)-per-ATP stoichiometry; the other assumes a 2H(+):2K(+)-per-ATP stoichiometry. Both models include Na+ and NH4+ competitive binding with H+ and K+, respectively, a characteristic observed in vitro and in situ. Model rate constants were obtained by minimizing the distance between model and experimental outcomes. Both 1H(+)(1Na(+)):1K(+)(1NH4(+))-per-ATP and 2H(+)(2Na(+)):2K(+)(2NH4(+))-per-ATP models fit the experimental data well. Using both models, we simulated ion net fluxes as a function of cytosolic or luminal ion concentrations typical for the cortical thick ascending limb and MD region. We observed that (1) K+ and NH4+ flowed in the lumen-to-cytosol direction, (2) there was competitive behavior between luminal K+ and NH4+ and between cytosolic Na+ and H+, (3) ion fluxes were highly sensitive to changes in cytosolic Na+ or H+ concentrations, and (4) the transporter does mostly Na+/K+ exchange under physiological conditions. These results support the concept that nongastric HKA may contribute to Na+ and pH homeostasis in MD cells. Furthermore, in both models, H+ flux reversed at a luminal pH that was <5.6. Such reversal led to Na+/H+ exchange for a luminal pH of <2 and 4 in the 1:1-per-ATP and 2:2-per-ATP models, respectively. This suggests a novel role of nongastric HKA in cell Na+ homeostasis in the more acidic regions of the renal tubules. Copyright © 2015 the American Physiological Society.

  12. Spatial Resolution of the ECE for JET Typical Parameters

    International Nuclear Information System (INIS)

    Tribaldos, V.

    2000-01-01

    The purpose of this report is to obtain estimates of the spatial resolution of the electron cyclotron emission (ECE) phenomenon for the typical plasmas found in the JET tokamak. The analysis of the spatial resolution of the ECE is based on the underlying physical process of emission, and a working definition is presented and discussed. In making these estimates, a typical JET pulse is analysed taking into account the magnetic configuration and the density and temperature profiles, obtained with the EFIT code and from the LIDAR diagnostic. Ray-tracing simulations are performed for a Maxwellian plasma, taking the antenna pattern into account. (Author) 5 refs

  13. WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules

    International Nuclear Information System (INIS)

    Jeong, J; Deasy, J O

    2014-01-01

    Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option, with better tumor control rates for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on a two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell-kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regime, several different starting times and intervals were simulated with a conventional RT regime (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control, to find the optimal chemotherapy schedule. Results: Assuming a typical slope of the dose-response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases, with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation
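
    The two-compartment pharmacokinetic step can be sketched as a pair of linear ODEs integrated with a simple Euler scheme; the rate constants below are illustrative, not those used for the chemotherapy agent in this work.

```python
# Two-compartment pharmacokinetic sketch for an agent concentration:
# dC1/dt = -(k10 + k12) C1 + k21 C2 ;  dC2/dt = k12 C1 - k21 C2
# (central compartment C1, peripheral C2; bolus dose at t = 0).

def pk_profile(c1=1.0, c2=0.0, k10=0.2, k12=0.1, k21=0.05,
               dt=0.01, t_end=24.0):
    history = []
    t = 0.0
    while t < t_end:
        dc1 = -(k10 + k12) * c1 + k21 * c2
        dc2 = k12 * c1 - k21 * c2
        c1 += dt * dc1
        c2 += dt * dc2
        t += dt
        history.append(c1)
    return history

central = pk_profile()
# Central-compartment concentration decays from the bolus value.
print(central[-1] < central[0])  # -> True
```

    The resulting concentration-time curve is what a log cell-kill term would be driven by when the chemotherapy effect is added to the tumor response model.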

  14. The Brain’s sense of walking: a study on the intertwine between locomotor imagery and internal locomotor models in healthy adults, typically developing children and children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Marco eIosa

    2014-10-01

    Full Text Available Motor imagery and internal motor models have been investigated in depth in the literature. It is well known that the development of motor imagery occurs during adolescence and that it is limited in people affected by cerebral palsy. However, the roles of motor imagery and internal models in locomotion, as well as their intertwining, have received little attention. In this study we compared the performances of healthy adults (n=8, 28.1±5.1 years old), children with typical development (n=8, 8.1±3.8 years old) and children with cerebral palsy (n=12, 7.5±2.9 years old), measured by an optoelectronic system and a trunk-mounted wireless inertial magnetic unit, during three different tasks. Subjects were asked to reach a target located 2 or 3 m in front of them by simulating walking while stepping in place, by actually walking blindfolded, or by walking normally with open eyes. Adults performed a number of steps that did not differ significantly between tasks (p=0.761), spending similar amounts of time (p=0.156). Children with typical development showed task-dependent differences both in number of steps (p=0.046) and movement time (p=0.002). However, their performances in simulated and blindfolded walking were strongly correlated (R=0.871 for steps, R=0.673 for time). Further, their error in blindfolded walking was on average only -2.2% of the distance. Children with cerebral palsy also showed significant differences in number of steps (p=0.022) and time (p<0.001), but neither their number of steps nor their movement time recorded during simulated walking was correlated with those of blindfolded and normal walking. Adults used a single strategy across the different tasks. Children with typical development seemed to rely less on their motor predictions, using a task-dependent strategy probably more reliant on sensory feedback. Children with cerebral palsy showed less efficient performances, especially in simulated walking, suggesting altered locomotor imagery.

  15. Nonlinear transfer of elements from soil to plants: impact on radioecological modeling

    Energy Technology Data Exchange (ETDEWEB)

    Tuovinen, Tiina S.; Kolehmainen, Mikko; Roivainen, Paeivi; Kumlin, Timo; Makkonen, Sari; Holopainen, Toini; Juutilainen, Jukka [University of Eastern Finland, Department of Environmental and Biological Sciences, P.O. Box 1627, Kuopio (Finland)

    2016-08-15

    In radioecology, transfer of radionuclides from soil to plants is typically described by a concentration ratio (CR), which assumes that transfer is linear with soil concentration. Nonlinear uptake is evidenced in many studies, but it is unclear how it should be taken into account in radioecological modeling. In this study, a conventional CR-based linear model, a nonlinear model derived from observed uptake into plants, and a new simple model based on the observation that nonlinear uptake leads to a practically constant concentration in plant tissues are compared. The three models were used to predict transfer of ²³⁴U, ⁵⁹Ni and ²¹⁰Pb into spruce needles. The predictions of the nonlinear and the new model were essentially similar. In contrast, plant radionuclide concentration was underestimated by the linear model when the total element concentration in soil was relatively low, but within the range commonly observed in nature. It is concluded that linear modeling could easily be replaced by a new approach that more realistically reflects the true processes involved in the uptake of elements into plants. The new modeling approach does not increase the complexity of modeling in comparison with CR-based linear models, and the data needed for model parameters (element concentrations) are widely available. (orig.)
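
    The contrast among the models can be sketched directly: a linear CR model versus a saturating uptake that approaches a practically constant tissue concentration. Parameter values are invented for illustration, not the paper's fitted ones.

```python
# Linear CR transfer vs. saturating uptake toward a constant tissue
# concentration (all parameter values illustrative).

def plant_conc_linear(soil_conc, cr=0.1):
    # Conventional model: plant concentration scales with soil.
    return cr * soil_conc

def plant_conc_saturating(soil_conc, c_plant=5.0, k=2.0):
    # Nonlinear uptake that saturates toward a fixed tissue level.
    return c_plant * soil_conc / (k + soil_conc)

for soil in (0.5, 5.0, 50.0):
    print(soil, plant_conc_linear(soil),
          round(plant_conc_saturating(soil), 2))
```

    At low soil concentrations the linear model falls below the saturating one (the underestimation noted in the abstract), while at high concentrations the ordering reverses.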

  16. Nonlinear analyses and failure patterns of typical masonry school buildings in the epicentral zone of the 2016 Italian earthquakes

    Science.gov (United States)

    Clementi, Cristhian; Clementi, Francesco; Lenci, Stefano

    2017-11-01

    The paper discusses the behavior of typical masonry school buildings in central Italy, built at the end of the 1950s without any seismic guidelines. These structures faced the recent 2016 Italian earthquakes without widespread damage. Global numerical models of the buildings have been built, with the masonry material simulated as nonlinear. Sensitivity analyses are performed to evaluate the reliability of the structural models.

  17. Detection of GNSS Signal Propagation in Urban Canyons Using 3D City Models

    Directory of Open Access Journals (Sweden)

    Petra Pisova

    2015-01-01

    Full Text Available This paper presents one solution to the problem of multipath propagation and its effects on Global Navigation Satellite Systems (GNSS) signals in urban canyons. GNSS signals may reach a receiver not only through Line-of-Sight (LOS) paths; they are often blocked, reflected or diffracted by tall buildings, leading to unmodelled GNSS errors in position estimation. Therefore, in order to detect and mitigate the impact of multipath, a new ray-tracing model for simulating GNSS signal reception in urban canyons is proposed, based on digital 3D map information, the known positions of GNSS satellites and an assumed position of the receiver. The model is established and validated using experimental as well as real data. It is specially designed for complex environments and situations where positioning with the highest accuracy is required; a typical example is navigation for blind people.
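As a rough illustration of the geometric test at the core of such a ray-tracing model, the sketch below classifies a signal as blocked when a building along the satellite azimuth rises above the receiver-to-satellite ray. This is purely illustrative 2D geometry with invented values; the actual model traces rays through full 3D city maps and also handles reflection and diffraction.

```python
import math

def los_blocked(receiver_h, sat_elev_deg, building_dist, building_h):
    """Return True if a building `building_dist` metres away along the
    satellite azimuth, of height `building_h` metres, blocks the line
    of sight from a receiver at height `receiver_h`."""
    ray_h = receiver_h + building_dist * math.tan(math.radians(sat_elev_deg))
    return building_h > ray_h

# A 30 m facade 20 m away blocks a satellite at 15 deg elevation,
# but not one at 70 deg elevation.
print(los_blocked(1.5, 15.0, 20.0, 30.0), los_blocked(1.5, 70.0, 20.0, 30.0))
```

Repeating this test against every satellite in view lets a receiver exclude (or down-weight) the blocked signals before position estimation.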

  18. Spatial Variation of Soil Type and Soil Moisture in the Regional Atmospheric Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.

    2001-06-27

    Soil characteristics (texture and moisture) are typically assumed to be initially constant when performing simulations with the Regional Atmospheric Modeling System (RAMS). Soil texture is spatially homogeneous and time-independent, while soil moisture is often spatially homogeneous initially, but time-dependent. This report discusses the conversion of a global data set of Food and Agriculture Organization (FAO) soil types to RAMS soil texture and the subsequent modifications required in RAMS to ingest this information. Spatial variations in initial soil moisture obtained from the National Centers for Environmental Prediction (NCEP) large-scale models are also introduced. Comparisons involving simulations over the southeastern United States for two different time periods, one during warmer, more humid summer conditions, and one during cooler, drier winter conditions, reveal differences in surface conditions related to increases or decreases in near-surface atmospheric moisture content as a result of different soil properties. Three separate simulation types were considered. The base case assumed spatially homogeneous soil texture and initial soil moisture. The second case assumed variable soil texture and constant initial soil moisture, while the third case allowed for both variable soil texture and initial soil moisture. The simulation domain was further divided into four geographically distinct regions. It is concluded that there is a more dramatic impact on thermodynamic variables (surface temperature and dewpoint) than on surface winds, and a more pronounced variability in results during the summer period. While no obvious trends in surface winds or dewpoint temperature were found relative to observations covering all regions and times, improvement in surface temperatures in most regions and time periods was generally seen with the incorporation of variable soil texture and initial soil moisture.

  19. 41 CFR 302-10.206 - May my agency assume direct responsibility for the costs of preparing and transporting my mobile...

    Science.gov (United States)

    2010-07-01

    ... direct responsibility for the costs of preparing and transporting my mobile home? 302-10.206 Section 302... ALLOWANCES TRANSPORTATION AND STORAGE OF PROPERTY 10-ALLOWANCES FOR TRANSPORTATION OF MOBILE HOMES AND BOATS... responsibility for the costs of preparing and transporting my mobile home? Yes, your agency may assume direct...

  20. Impact of typical rather than nutrient-dense food choices in the US Department of Agriculture Food Patterns.

    Science.gov (United States)

    Britten, Patricia; Cleveland, Linda E; Koegel, Kristin L; Kuczynski, Kevin J; Nickols-Richardson, Sharon M

    2012-10-01

    The US Department of Agriculture (USDA) Food Patterns, released as part of the 2010 Dietary Guidelines for Americans, are designed to meet nutrient needs without exceeding energy requirements. They identify amounts to consume from each food group and recommend that nutrient-dense forms (lean or low-fat, without added sugars or salt) be consumed. Americans fall short of most food group intake targets and do not consume foods in nutrient-dense forms. Intake of calories from solid fats and added sugars exceeds maximum limits by large margins. Our aim was to determine the potential effect on meeting USDA Food Pattern nutrient adequacy and moderation goals if Americans consumed the recommended quantities from each food group, but did not implement the advice to select nutrient-dense forms of food and instead made more typical food choices. Food-pattern modeling analysis using the USDA Food Patterns, which are structured to allow modifications in one or more aspects of the patterns, was used. Nutrient profiles for each food group were modified by replacing each nutrient-dense representative food with a similar but typical choice. Typical nutrient profiles were used to determine the energy and nutrient content of the food patterns. Moderation goals are not met when amounts of food in the USDA Food Patterns are followed and typical rather than nutrient-dense food choices are made. Energy, total fat, saturated fat, and sodium exceed limits in all patterns, often by substantial margins. With typical choices, calories were 15% to 30% (i.e., 350 to 450 kcal) above the target calorie level for each pattern. Adequacy goals were not substantially affected by the use of typical food choices. If consumers consume the recommended quantities from each food group and subgroup, but fail to choose foods in low-fat, no-added-sugars, and low-sodium forms, they will not meet the USDA Food Patterns moderation goals or the 2010 Dietary Guidelines for Americans. Copyright © 2012 Academy of
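The modeling logic can be sketched as simple arithmetic over per-group nutrient profiles. The kcal numbers below are invented for illustration and are not the USDA Food Patterns profiles; only the overall pattern (typical choices pushing energy roughly 15% to 30% above target) mirrors the abstract.

```python
# Hypothetical kcal contributed by the recommended amount of each food group
# when nutrient-dense forms are chosen.
nutrient_dense = {"fruits": 240, "vegetables": 300, "grains": 700,
                  "protein_foods": 400, "dairy": 360}
# Typical (not nutrient-dense) choices carry extra solid fats and added sugars.
typical = {"fruits": 290, "vegetables": 380, "grains": 830,
           "protein_foods": 450, "dairy": 450}

target_kcal = 2000  # the pattern's target calorie level
dense_total = sum(nutrient_dense.values())
typical_total = sum(typical.values())
excess_pct = 100 * (typical_total - target_kcal) / target_kcal
print(dense_total, typical_total, excess_pct)
```

With these illustrative profiles the nutrient-dense pattern exactly meets the 2,000 kcal target, while the typical-choice pattern lands 400 kcal (20%) above it, inside the 15% to 30% range the study reports.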

  1. Object selection costs in visual working memory: A diffusion model analysis of the focus of attention.

    Science.gov (United States)

    Sewell, David K; Lilburn, Simon D; Smith, Philip L

    2016-11-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. 
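The diffusion-model machinery can be illustrated with a minimal first-passage simulation. The parameter values (boundary, noise, non-decision time, drift rates) are invented, not the fitted values from the study.

```python
import random

def ddm_trial(drift, boundary=1.0, sigma=1.0, dt=0.001, t_er=0.3):
    """One trial of a simple diffusion decision model: evidence starts at 0
    and accumulates with mean rate `drift` (representation quality) until it
    crosses +boundary (correct) or -boundary (error). `t_er` is non-decision
    time covering encoding, selection, and motor processes."""
    x, t = 0.0, 0.0
    step_sd = sigma * dt ** 0.5
    while abs(x) < boundary:
        x += drift * dt + random.gauss(0.0, step_sd)
        t += dt
    return x >= boundary, t + t_er

random.seed(1)
# Higher memory load -> weaker evidence (lower drift): slower and less accurate.
for drift in (2.0, 0.5):
    trials = [ddm_trial(drift) for _ in range(200)]
    acc = sum(c for c, _ in trials) / len(trials)
    mean_rt = sum(t for c, t in trials) / len(trials)
    print(drift, round(acc, 2), round(mean_rt, 2))
```

The point of fitting such a model is that load effects on drift (representation quality) and on non-decision time (selection and retrieval) leave different joint signatures in accuracy and the response-time distribution, which is how the study separates them.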

  2. Multi-state Markov models for disease progression in the presence of informative examination times: an application to hepatitis C.

    Science.gov (United States)

    Sweeting, M J; Farewell, V T; De Angelis, D

    2010-05-20

    In many chronic diseases it is important to understand the rate at which patients progress from infection through a series of defined disease states to a clinical outcome, e.g. cirrhosis in hepatitis C virus (HCV)-infected individuals or AIDS in HIV-infected individuals. Typically data are obtained from longitudinal studies, which often are observational in nature, and where disease state is observed only at selected examinations throughout follow-up. Transition times between disease states are therefore interval censored. Multi-state Markov models are commonly used to analyze such data, but rely on the assumption that the examination times are non-informative, and hence that the examination process is ignorable in a likelihood-based analysis. In this paper we develop a Markov model that relaxes this assumption through the premise that the examination process is ignorable only after conditioning on a more regularly observed auxiliary variable. This situation arises in a study of HCV disease progression, where liver biopsies (the examinations) are sparse, irregular, and potentially informative with respect to the transition times. We use additional information on liver function tests (LFTs), commonly collected throughout follow-up, to inform the current disease state and to justify assuming an ignorable examination process. The model developed has a similar structure to a hidden Markov model and accommodates both the series of LFT measurements and the partially latent series of disease states. We show through simulation how this model compares with the commonly used ignorable Markov model, and with a Markov model that assumes the examination process is non-ignorable. Copyright 2010 John Wiley & Sons, Ltd.

  3. Typicality effects in artificial categories: is there a hemisphere difference?

    Science.gov (United States)

    Richards, L G; Chiarello, C

    1990-07-01

    In category classification tasks, typicality effects are usually found: accuracy and reaction time depend upon distance from a prototype. In this study, subjects learned either verbal or nonverbal dot pattern categories, followed by a lateralized classification task. Comparable typicality effects were found in both reaction time and accuracy across visual fields for both verbal and nonverbal categories. Both hemispheres appeared to use a similarity-to-prototype matching strategy in classification. This indicates that merely having a verbal label does not differentiate classification in the two hemispheres.

  4. Typical Periods for Two-Stage Synthesis by Time-Series Aggregation with Bounded Error in Objective Function

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Björn; Söhler, Theo; Hennen, Maike; Bardow, André, E-mail: andre.bardow@ltt.rwth-aachen.de [Institute of Technical Thermodynamics, RWTH Aachen University, Aachen (Germany)

    2018-01-08

    Two-stage synthesis problems simultaneously consider here-and-now decisions (e.g., optimal investment) and wait-and-see decisions (e.g., optimal operation). The optimal synthesis of energy systems reveals such a two-stage character. The synthesis of energy systems involves multiple large time series such as energy demands and energy prices. Since problem size increases with the size of the time series, synthesis of energy systems leads to complex optimization problems. To reduce the problem size without losing solution quality, we propose a method for time-series aggregation to identify typical periods. Typical periods retain the chronology of time steps, which enables modeling of energy systems, e.g., with storage units or start-up cost. The aim of the proposed method is to obtain few typical periods with few time steps per period, while accurately representing the objective function of the full time series, e.g., cost. Thus, we determine the error of time-series aggregation as the cost difference between operating the optimal design for the aggregated time series and for the full time series. Thereby, we rigorously bound the maximum performance loss of the optimal energy system design. In an initial step, the proposed method identifies the best length of typical periods by autocorrelation analysis. Subsequently, an adaptive procedure determines aggregated typical periods employing the clustering algorithm k-medoids, which groups similar periods into clusters and selects one representative period per cluster. Moreover, the number of time steps per period is aggregated by a novel clustering algorithm maintaining the chronology of the time steps in the periods. The method is iteratively repeated until the error falls below a threshold value. A case study based on a real-world synthesis problem of an energy system shows that time-series aggregation from 8,760 time steps to 2 typical periods with 2 time steps each results in an error smaller than the optimality gap of
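The clustering step can be illustrated with a toy k-medoids selection of typical periods. The paper's adaptive procedure, rigorous error bound and chronology-preserving intra-period aggregation are not reproduced here; the data and the exhaustive-search shortcut below are invented for illustration.

```python
from itertools import combinations

def dist(p, q):
    """Euclidean distance between two demand periods."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def k_medoids_exact(periods, k):
    """Exhaustive k-medoids for tiny inputs: choose the k periods that
    minimise the summed distance of every period to its nearest chosen
    representative (real k-medoids iterates instead of enumerating)."""
    def cost(medoids):
        return sum(min(dist(p, periods[j]) for j in medoids) for p in periods)
    best = min(combinations(range(len(periods)), k), key=cost)
    return [periods[j] for j in best]

# Eight invented two-step demand periods: high-, medium- and low-demand days.
periods = [(90, 80), (92, 78), (60, 55), (62, 57), (61, 54),
           (30, 25), (31, 27), (29, 26)]
typical = k_medoids_exact(periods, 3)
print(typical)
```

Each cluster of similar days is then represented in the optimization by its medoid, weighted by cluster size, which is how 8,760 hourly steps can shrink to a handful of typical periods.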

  5. Computational Models of Rock Failure

    Science.gov (United States)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers / risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of
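The pressure-dependent flow law at issue can be stated in a standard Drucker-Prager form (a generic textbook statement; the presentation's exact notation and parameter choices are not given in the abstract):

```latex
% Drucker-Prager yield criterion: the yield stress grows with pressure p
F(\boldsymbol{\sigma}) = \sqrt{J_2} - \left( C\cos\phi + p\sin\phi \right) \le 0
```

Here $J_2$ is the second invariant of the deviatoric stress, $C$ the cohesion and $\phi$ the friction angle. Under the incompressibility constraint, $p$ is an independent dynamic variable rather than a function of density, and it is this combination that the authors show can fail to admit a solution for rock-like values of $C$ and $\phi$.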

  6. Rural Tourism and Local Development: Typical Productions of Lazio

    Directory of Open Access Journals (Sweden)

    Francesco Maria Olivieri

    2014-12-01

    Full Text Available Local development is based on the integration of the tourism sector with the whole economy. Rural tourism offers a good lens for analysing local development: the consumption of "tourist products" located in specific local contexts. Starting from the food and wine supply chain and the localization of typical productions, the aim of the present work is to analyse their relationship with local development, rural tourism sustainability and the accommodation system, with reference to Lazio. What findings support the creation of a local tourism system built on the relationship between tourism and the food and wine supply chain? Italian tourism rests on its accommodation system, and more broadly on Italian cultural tourism: tourism "made in Italy". The tourist value added in a specific local context benefits from synergy with the food and wine supply chain: the "made in Italy" of typical productions. Agritourism may be the accommodation typology best suited to rural tourism and to the exclusive consumption of typical productions. The reciprocity between the food and wine supply chain and tourism provides new insights into key topics related to tourism development and the organization of geographical space, as well as into its important contribution to economic competitiveness today.

  7. Satellite Magnetic Residuals Investigated With Geostatistical Methods

    DEFF Research Database (Denmark)

    Fox Maule, Chaterine; Mosegaard, Klaus; Olsen, Nils

    2005-01-01

    (which consists of measurement errors and unmodeled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyze the residuals of the Oersted (09d/04) field model (www.dsri.dk/Oersted/Field models/IGRF 2005 candidates/), which is based...

  8. Breast Metastases from Extramammary Malignancies: Typical and Atypical Ultrasound Features

    Energy Technology Data Exchange (ETDEWEB)

    Mun, Sung Hee [Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 135-710 (Korea, Republic of); Department of Radiology, Catholic University of Daegu College of Medicine, Daegu 712-702 (Korea, Republic of); Ko, Eun Young; Han, Boo-Kyung; Shin, Jung Hee [Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 135-710 (Korea, Republic of); Kim, Suk Jung [Department of Radiology, Inje University College of Medicine, Busan Paik Hospital, Busan 614-735 (Korea, Republic of); Cho, Eun Yoon [Department of Pathology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 135-710 (Korea, Republic of)

    2014-07-01

    Breast metastases from extramammary malignancies are uncommon. The most common sources are lymphomas/leukemias and melanomas. Some of the less common sources include carcinomas of the lung, ovary, and stomach, and infrequently, carcinoid tumors, hypernephromas, carcinomas of the liver, tonsil, pleura, pancreas, cervix, perineum, endometrium and bladder. Breast metastases from extramammary malignancies have both hematogenous and lymphatic routes. According to their routes, there are common radiological features of metastatic diseases of the breast, but the features are not specific for metastases. Typical ultrasound (US) features of hematogenous metastases include single or multiple, round to oval shaped, well-circumscribed hypoechoic masses without spiculations, calcifications, or architectural distortion; these masses are commonly located superficially in subcutaneous tissue or immediately adjacent to the breast parenchyma that is relatively rich in blood supply. Typical US features of lymphatic breast metastases include diffusely and heterogeneously increased echogenicities in subcutaneous fat and glandular tissue and a thick trabecular pattern with secondary skin thickening, lymphedema, and lymph node enlargement. However, lesions show variable US features in some cases, and differentiation of these lesions from primary breast cancer or from benign lesions is difficult. In this review, we demonstrate various US appearances of breast metastases from extramammary malignancies as typical and atypical features, based on the results of US and other imaging studies performed at our institution. Awareness of the typical and atypical imaging features of these lesions may be helpful to diagnose metastatic lesions of the breast.

  9. Breast Metastases from Extramammary Malignancies: Typical and Atypical Ultrasound Features

    International Nuclear Information System (INIS)

    Mun, Sung Hee; Ko, Eun Young; Han, Boo-Kyung; Shin, Jung Hee; Kim, Suk Jung; Cho, Eun Yoon

    2014-01-01

    Breast metastases from extramammary malignancies are uncommon. The most common sources are lymphomas/leukemias and melanomas. Some of the less common sources include carcinomas of the lung, ovary, and stomach, and infrequently, carcinoid tumors, hypernephromas, carcinomas of the liver, tonsil, pleura, pancreas, cervix, perineum, endometrium and bladder. Breast metastases from extramammary malignancies have both hematogenous and lymphatic routes. According to their routes, there are common radiological features of metastatic diseases of the breast, but the features are not specific for metastases. Typical ultrasound (US) features of hematogenous metastases include single or multiple, round to oval shaped, well-circumscribed hypoechoic masses without spiculations, calcifications, or architectural distortion; these masses are commonly located superficially in subcutaneous tissue or immediately adjacent to the breast parenchyma that is relatively rich in blood supply. Typical US features of lymphatic breast metastases include diffusely and heterogeneously increased echogenicities in subcutaneous fat and glandular tissue and a thick trabecular pattern with secondary skin thickening, lymphedema, and lymph node enlargement. However, lesions show variable US features in some cases, and differentiation of these lesions from primary breast cancer or from benign lesions is difficult. In this review, we demonstrate various US appearances of breast metastases from extramammary malignancies as typical and atypical features, based on the results of US and other imaging studies performed at our institution. Awareness of the typical and atypical imaging features of these lesions may be helpful to diagnose metastatic lesions of the breast

  10. A Typical Model Audit Approach: Spreadsheet Audit Methodologies in the City of London

    OpenAIRE

    Croll, Grenville J.

    2007-01-01

    Spreadsheet audit and review procedures are an essential part of almost all City of London financial transactions. Structured processes are used to discover errors in large financial spreadsheets underpinning major transactions of all types. Serious errors are routinely found and are fed back to model development teams generally under conditions of extreme time urgency. Corrected models form the essence of the completed transaction and firms undertaking model audit and review expose themselve...

  11. Identifying the Dimensionality of Oral Language Skills of Children With Typical Development in Preschool Through Fifth Grade.

    Science.gov (United States)

    Lonigan, Christopher J; Milburn, Trelani F

    2017-08-16

    Language is a multidimensional construct from prior to the beginning of formal schooling to near the end of elementary school. The primary goals of this study were to identify the dimensionality of language and to determine whether this dimensionality was consistent in children with typical language development from preschool through 5th grade. In a large sample of 1,895 children, confirmatory factor analysis was conducted with 19-20 measures of language intended to represent 6 factors, including domains of vocabulary and syntax/grammar across modalities of expressive and receptive language, listening comprehension, and vocabulary depth. A 2-factor model with separate, highly correlated vocabulary and syntax factors provided the best fit to the data, and this model of language dimensionality was consistent from preschool through 5th grade. This study found that there are fewer dimensions than are often suggested or represented by the myriad subtests in commonly used standardized tests of language. The identified 2-dimensional (vocabulary and syntax) model of language has significant implications for the conceptualization and measurement of the language skills of children in the age range from preschool to 5th grade, including the study of typical and atypical language development, the study of the developmental and educational influences of language, and classification and intervention in clinical practice. https://doi.org/10.23641/asha.5154220.

  12. The Development and Application of an Integrated VAR Process Model

    Science.gov (United States)

    Ballantyne, A. Stewart

    2016-07-01

    The VAR ingot has been the focus of several modelling efforts over the years with the result that the thermal regime in the ingot can be simulated quite realistically. Such models provide important insight into solidification of the ingot but present some significant challenges to the casual user such as a process engineer. To provide the process engineer with a tool to assist in the development of a melt practice, a comprehensive model of the complete VAR process has been developed. A radiation heat transfer simulation of the arc has been combined with electrode and ingot models to develop a platform which accepts typical operating variables (voltage, current, and gap) together with process parameters (electrode size, crucible size, orientation, water flow, etc.) as input data. The output consists of heat flow distributions and solidification parameters in the form of text, comma-separated value, and visual toolkit files. The resulting model has been used to examine the relationship between the assumed energy distribution in the arc and the actual energy flux which arrives at the ingot top surface. Utilizing heat balance information generated by the model, the effects of electrode-crucible orientation and arc gap have been explored with regard to the formation of ingot segregation defects.

  13. Integrated SNG Production in a Typical Nordic Sawmill

    Directory of Open Access Journals (Sweden)

    Sennai Mesfun

    2016-04-01

    Full Text Available Advanced biomass-based motor fuels and chemicals are becoming increasingly important to replace fossil energy sources within the coming decades. It is likely that the new biorefineries will evolve mainly from existing forest industry sites, as they already have the required biomass handling infrastructure in place. The main objective of this work is to assess the potential for increasing the profit margin from sawmill byproducts by integrating innovative downstream processes. The focus is on the techno-economic evaluation of an integrated site for biomass-based synthetic natural gas (bio-SNG production. The option of using the syngas in a biomass-integrated gasification combined cycle (b-IGCC for the production of electricity (instead of SNG is also considered for comparison. The process flowsheets that are used to analyze the energy and material balances are modelled in MATLAB and Simulink. A mathematical process integration model of a typical Nordic sawmill is used to analyze the effects on the energy flows in the overall site, as well as to evaluate the site economics. Different plant sizes have been considered in order to assess the economy-of-scale effect. The technical data required as input are collected from the literature and, in some cases, from experiments. The investment cost is evaluated on the basis of conducted studies, third party supplier budget quotations and in-house database information. This paper presents complete material and energy balances of the considered processes and the resulting process economics. Results show that in order for the integrated SNG production to be favored, depending on the sawmill size, a biofuel subsidy in the order of 28–52 €/MWh SNG is required.

  14. Reconstructing ATLAS SU3 in the CMSSM and relaxed phenomenological supersymmetry models

    CERN Document Server

    Fowlie, Andrew

    2011-01-01

    Assuming that the LHC makes a positive end-point measurement indicative of low-energy supersymmetry, we examine the prospects of reconstructing the parameter values of a typical low-mass point in the framework of the Constrained MSSM and in several other supersymmetry models that have more free parameters and fewer assumptions than the CMSSM. As a case study, we consider the ATLAS SU3 benchmark point with a Bayesian approach and with a Gaussian approximation to the likelihood for the measured masses and mass differences. First we investigate the impact of the hypothetical ATLAS measurement alone and show that it significantly narrows the confidence intervals of relevant, otherwise fairly unrestricted, model parameters. Next we add information about the relic density of neutralino dark matter to the likelihood and show that this further narrows the confidence intervals. We confirm that the CMSSM has the best prospects for parameter reconstruction; its results had little dependence on our choice of prior, in co...

  15. Simulation of maximum light use efficiency for some typical vegetation types in China

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Maximum light use efficiency (εmax) is a key parameter for the estimation of net primary productivity (NPP) derived from remote sensing data. There are still many divergences about its value for each vegetation type. The εmax for some typical vegetation types in China is simulated using a modified least squares function based on NOAA/AVHRR remote sensing data and field-observed NPP data. Vegetation classification accuracy is incorporated into the process, and a sensitivity analysis of εmax to classification accuracy is also conducted. The results show that the simulated values of εmax are greater than the value used in the CASA model, and less than the values simulated with the BIOME-BGC model. This is consistent with some other studies. The relative error of εmax resulting from classification accuracy is -5.5% to 8.0%. This indicates that the simulated values of εmax are reliable and stable.
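The core of such a fit can be sketched as a least-squares slope through the origin for the production-efficiency relation NPP = εmax × APAR. The data pairs below are invented for illustration; the study's modified least-squares function additionally weights for classification accuracy, which is not reproduced here.

```python
def fit_eps_max(apar, npp):
    """Least-squares slope through the origin for NPP = eps_max * APAR."""
    return sum(a * n for a, n in zip(apar, npp)) / sum(a * a for a in apar)

# Invented paired observations for one vegetation type:
apar = [100.0, 220.0, 310.0, 450.0]   # absorbed PAR per pixel (illustrative units)
npp = [55.0, 120.0, 170.0, 250.0]     # field-observed NPP for the same pixels
eps_max = fit_eps_max(apar, npp)
print(round(eps_max, 3))
```

Repeating the fit per vegetation class, with pixels weighted by how reliably they are classified, yields one εmax estimate per vegetation type.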

  16. Analysis and Comparison on the Flood Simulation in Typical Hilly & Semi-mountainous Region

    Science.gov (United States)

    Luan, Qinghua; Wang, Dong; Zhang, Xiang; Liu, Jiahong; Fu, Xiaoran; Zhang, Kun; Ma, Jun

    2017-12-01

    Waterlogging and flooding are both serious problems in the hilly and semi-mountainous cities of China, but related research is rare. In this study, Lincheng Economic Development Zone (EDZ) in Hebei Province was selected as a typical city, and the Storm Water Management Model (SWMM) was applied for flood simulation. The regional model was constructed by calibrating and verifying the runoff coefficient against different flood processes. Designed runoff processes for five-year, ten-year and twenty-year return periods were simulated and compared under a base scenario and a low impact development (LID) scenario. The results show that LID measures reduce the flood peak in the study area, but the effect is not significant, and their effectiveness in delaying the peak time is poor. These simulation results provide decision support for the rational construction of LID in the study area, and serve as a reference for regional stormwater and flood management.

  17. Vertical structure of currents in Algeciras Bay (Strait of Gibraltar): implications on oil spill modeling under different typical scenarios

    Science.gov (United States)

    Megías Trujillo, Bárbara; Caballero de Frutos, Isabel; López Comi, Laura; Tejedor Alvarez, Begoña.; Izquierdo González, Alfredo; Gonzales Mejías, Carlos Jose; Alvarez Esteban, Óscar; Mañanes Salinas, Rafael; Comerma, Eric

    2010-05-01

    Algeciras Bay constitutes a physical environment of special characteristics, owing to its bathymetric configuration and its geographical location at the eastern boundary of the Strait of Gibraltar. The Bay is therefore subject to the complex hydrodynamics of the Strait, characterized by a mesotidal, semidiurnal regime and strong density stratification of the water column due to the presence of the upper Atlantic and lower Mediterranean (saltier and colder) water layers. In addition, this environment is affected by episodes of powerful Easterly and Westerly winds. The intense traffic of oil tankers sailing across the Strait and inside the Bay, together with the presence of an oil refinery on its northern coast, implies a high risk of oil spills in these waters, and spills have indeed occurred repeatedly over recent decades. This situation clearly calls for detailed knowledge of the Bay's hydrodynamics and the associated current system, for correct management and contingency planning in case of an oil spill in this environment. To evaluate the extent to which oil spills affect the Bay's waters and coasts, the OILMAP oil spill model was used, with current fields provided by the three-dimensional, nonlinear, finite-difference, sigma-coordinate UCA 3D hydrodynamic model. Numerical simulations were carried out on a grid domain extending from the western boundary of the Strait to the Alboran Sea, with a horizontal resolution of 500 m and 50 sigma levels in the vertical. The system was forced by the tidal constituents M2 (main semidiurnal) and Z0 (constant, or zero-frequency), considering three typical wind conditions: Easterlies, Westerlies, and calm (no wind). The most remarkable results of the numerical 3D simulations of Algeciras Bay's hydrodynamics were: a) the occurrence of opposite tidal currents between the upper Atlantic and lower Mediterranean

  18. Typical event horizons in AdS/CFT

    Energy Technology Data Exchange (ETDEWEB)

    Avery, Steven G.; Lowe, David A. [Department of Physics, Brown University,Providence, RI 02912 (United States)

    2016-01-14

    We consider the construction of local bulk operators in a black hole background dual to a pure state in conformal field theory. The properties of these operators in a microcanonical ensemble are studied. It has been argued in the literature that typical states in such an ensemble contain firewalls, or otherwise singular horizons. We argue this conclusion can be avoided with a proper definition of the interior operators.

  19. Typical event horizons in AdS/CFT

    Science.gov (United States)

    Avery, Steven G.; Lowe, David A.

    2016-01-01

    We consider the construction of local bulk operators in a black hole background dual to a pure state in conformal field theory. The properties of these operators in a microcanonical ensemble are studied. It has been argued in the literature that typical states in such an ensemble contain firewalls, or otherwise singular horizons. We argue this conclusion can be avoided with a proper definition of the interior operators.

  20. Type of milk typically consumed, and stated preference, but not health consciousness affect revealed preferences for fat in milk.

    Science.gov (United States)

    Bakke, Alyssa J; Shehan, Catherine V; Hayes, John E

    2016-04-01

    Fat is an important source of both pleasure and calories in the diet. Dairy products are a major dietary source of fat, and understanding preferences for fat in fluid milk can inform efforts to change fat consumption patterns or optimize consumer products. Here, patterns of preference for fat in milk were determined in the laboratory among 100 free-living adults using rejection thresholds. Participants also answered questions about their health concerns, the type of fluid milk they typically consume, and their declared preference for type of milk (in terms of fat level). When revealed preferences in blind tasting were stratified by these measures, we observed striking differences in the preferred level of fat in milk. These data indicate a non-trivial number of consumers prefer low-fat milk to full-fat milk, a pattern that would have been overshadowed by the use of a group mean. While it is widely assumed and claimed that increasing fat content in fluid milk universally increases palatability, the present data demonstrate this is not true for a segment of the population. These results underscore the need to look beyond group means to understand individual differences in food preference.

  1. Typical performance of regular low-density parity-check codes over general symmetric channels

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Toshiyuki [Department of Electronics and Information Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397 (Japan); Saad, David [Neural Computing Research Group, Aston University, Aston Triangle, Birmingham B4 7ET (United Kingdom)

    2003-10-31

    Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive white Gaussian noise channel and the binary-input Laplace channel are considered as specific channel models.
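
    The mutual information central to the result above can be illustrated for the simplest binary-input output-symmetric channel, the binary symmetric channel, whose capacity is C = 1 - H2(p). The code below is a generic sketch of that textbook formula, not the statistical-mechanics calculation of the paper.

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity (maximum mutual information) of a binary symmetric
    channel with crossover probability p: C = 1 - H2(p)."""
    return 1.0 - h2(p)

print(bsc_capacity(0.0))   # noiseless channel: 1 bit per use
print(bsc_capacity(0.5))   # useless channel: 0 bits per use
print(round(bsc_capacity(0.11), 3))
```

    A code "saturates Shannon's bound" when its achievable rate approaches this mutual information for the channel at hand.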

  2. Typical performance of regular low-density parity-check codes over general symmetric channels

    International Nuclear Information System (INIS)

    Tanaka, Toshiyuki; Saad, David

    2003-01-01

    Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive white Gaussian noise channel and the binary-input Laplace channel are considered as specific channel models.

  3. Measurements of VOC adsorption/desorption characteristics of typical interior building materials

    Energy Technology Data Exchange (ETDEWEB)

    An, Y.; Zhang, J.S.; Shaw, C.Y.

    2000-07-01

    The adsorption/desorption of volatile organic compounds (VOCs) on interior building material surfaces (i.e., the sink effect) can affect VOC concentrations in a building and thus needs to be accounted for in an indoor air quality (IAQ) prediction model. In this study, the VOC adsorption/desorption characteristics (sink effect) were measured for four typical interior building materials: carpet, vinyl floor tile, painted drywall, and ceiling tile. The VOCs tested were ethylbenzene, cyclohexanone, 1,4-dichlorobenzene, benzaldehyde, and dodecane, selected because they are representative of hydrocarbons, aromatics, ketones, aldehydes, and chlorine-substituted compounds. A first-order reversible adsorption/desorption model based on the Langmuir isotherm was used to analyze the data and to determine the equilibrium constant of each VOC-material combination. It was found that the adsorption/desorption equilibrium constant, a measure of sink capacity, increased linearly with the inverse of the VOC vapor pressure. For each compound, the adsorption/desorption equilibrium constant and the adsorption rate constant differed significantly among the four materials tested. A detailed characterization of the material structure at the micro-scale would improve the understanding and modeling of the sink effect in the future. The results of this study can be used to estimate the impact of the sink effect on VOC concentrations in buildings.
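
    The first-order reversible sink model described above can be sketched as a two-equation mass balance for a single well-mixed chamber. All rate constants and chamber parameters below are illustrative, not the study's measured values.

```python
# Illustrative chamber parameters (not the measured values from the study)
Q, V, A = 0.5, 1.0, 2.0        # airflow (m3/h), volume (m3), sink area (m2)
ka, kd  = 0.4, 0.1             # adsorption (m/h) and desorption (1/h) rates
C_in    = 100.0                # supply VOC concentration (ug/m3)

dt, T = 0.001, 400.0           # Euler step (h) and time horizon (h)
C, M = 0.0, 0.0                # air concentration (ug/m3), sink mass (ug/m2)
for _ in range(int(T / dt)):
    dC = (Q / V) * (C_in - C) - (A / V) * (ka * C - kd * M)
    dM = ka * C - kd * M
    C += dC * dt
    M += dM * dt

# At equilibrium the sink term vanishes, so C -> C_in and M/C -> ka/kd,
# the equilibrium constant Ke that the study relates to 1/vapor pressure.
print(round(C, 1), round(M / C, 2))
```

    With these toy numbers the equilibrium constant Ke = ka/kd = 4; a less volatile VOC (lower vapor pressure) would be represented by a larger Ke, i.e. a stronger sink.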

  4. 29 CFR 780.210 - The typical hatchery operations constitute “agriculture.”

    Science.gov (United States)

    2010-07-01

    ... EXEMPTIONS APPLICABLE TO AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Agriculture as It Relates to Specific Situations Hatchery Operations § 780.210 The typical hatchery operations constitute “agriculture.” As stated in § 780.127, the typical hatchery...

  5. Using Typical Infant Development to Inform Music Therapy with Children with Disabilities

    Science.gov (United States)

    Wheeler, Barbara L.; Stultz, Sylvia

    2008-01-01

    This article illustrates some ways in which observations of typically-developing infants can inform music therapy and other work with children with disabilities. The research project that is described examines typical infant development with special attention to musical relatedness and communication. Videotapes of sessions centering on musical…

  6. Vocabulary of preschool children with typical language development and socioeducational variables.

    Science.gov (United States)

    Moretti, Thaís Cristina da Freiria; Kuroishi, Rita Cristina Sadako; Mandrá, Patrícia Pupin

    2017-03-09

    To investigate the correlation between age, socioeconomic status (SES), and performance on emissive and receptive vocabulary tests in children with typical language development. The study sample was composed of 60 preschool children of both genders, aged 3 years to 5 years 11 months, with typical language development, divided into three groups: G I (mean age = 3 years 6 months), G II (mean age = 4 years 4 months), and G III (mean age = 5 years 9 months). The ABFW Child Language Test - Vocabulary and the Peabody Picture Vocabulary Test (PPVT) for emissive and receptive language were applied to the preschoolers. The socioeconomic classification questionnaire of the Brazilian Association of Survey Companies (ABEP) was applied to the preschoolers' parents/legal guardians. Data were analyzed according to the criteria of the aforementioned instruments and arranged in an Excel spreadsheet (Windows XP®). A multiple linear regression model was used, adopting a statistical significance level of 5%, to analyze the correlation between age, SES, and performance on the receptive and emissive vocabulary tests. In the ABEP questionnaire, participants were classified mostly into social level C (63.3%), followed by levels B (26.6%) and D (10%). The preschoolers investigated presented emissive and receptive vocabulary adequate for their age groups. No statistically significant difference was found for the variables age and SES regarding emissive and receptive vocabulary. Higher test scores were observed with increased age and SES, for social level "B" compared with "D" and for "C" compared with "D". The variables age and socioeconomic status influenced performance on the emissive and receptive vocabulary tests in the study group.

  7. Sample diversity and premise typicality in inductive reasoning: evidence for developmental change.

    Science.gov (United States)

    Rhodes, Marjorie; Brickman, Daniel; Gelman, Susan A

    2008-08-01

    Evaluating whether a limited sample of evidence provides a good basis for induction is a critical cognitive task. We hypothesized that whereas adults evaluate the inductive strength of samples containing multiple pieces of evidence by attending to the relations among the exemplars (e.g., sample diversity), six-year-olds would attend to the degree to which each individual exemplar in a sample independently appears informative (e.g., premise typicality). To test these hypotheses, participants were asked to select between diverse and non-diverse samples to help them learn about basic-level animal categories. Across between-subject conditions (N = 133), we varied the typicality present in the diverse and non-diverse samples. We found that adults reliably chose to examine diverse over non-diverse samples regardless of exemplar typicality, six-year-olds preferred to examine samples containing typical exemplars regardless of sample diversity, and nine-year-olds were midway through this developmental transition.

  8. The quantization of the attention function under a Bayes information theoretic model

    International Nuclear Information System (INIS)

    Wynn, H.P.; Sebastiani, P.

    2001-01-01

    Bayes experimental design using entropy (equivalently, negative information) as a criterion is fairly well developed. The present work applies this model at a primitive level of statistical sampling. It is assumed that the observer/experimenter is allowed to place a window over the support of a sampling distribution and only 'pays for' observations that fall in this window. The window can be modeled with an 'attention function', simply the indicator function of the window. The understanding is that the cost of the experiment is only the number of paid-for observations, n. For fixed n under the information model, it turns out that for standard problems the optimal structure for the window, in the limit over all types of window including disjoint regions, is discrete. That is to say, it is optimal to observe the world (in this sense) through discrete slits. It also follows that Bayesians with different priors will receive different samples, because their optimal attention windows will typically be disjoint. This property we refer to as the quantization of the attention function.

  9. Prospective memory deficits in illicit polydrug users are associated with the average long-term typical dose of ecstasy typically consumed in a single session.

    Science.gov (United States)

    Gallagher, Denis T; Hadjiefthyvoulou, Florentia; Fisk, John E; Montgomery, Catharine; Robinson, Sarita J; Judge, Jeannie

    2014-01-01

    Neuroimaging evidence suggests that ecstasy-related reductions in SERT densities relate more closely to the number of tablets typically consumed per session rather than estimated total lifetime use. To better understand the basis of drug related deficits in prospective memory (p.m.) we explored the association between p.m. and average long-term typical dose and long-term frequency of use. Study 1: Sixty-five ecstasy/polydrug users and 85 nonecstasy users completed an event-based, a short-term and a long-term time-based p.m. task. Study 2: Study 1 data were merged with outcomes on the same p.m. measures from a previous study creating a combined sample of 103 ecstasy/polydrug users, 38 cannabis-only users, and 65 nonusers of illicit drugs. Study 1: Ecstasy/polydrug users had significant impairments on all p.m. outcomes compared with nonecstasy users. Study 2: Ecstasy/polydrug users were impaired in event-based p.m. compared with both other groups and in long-term time-based p.m. compared with nonillicit drug users. Both drug using groups did worse on the short-term time-based p.m. task compared with nonusers. Higher long-term average typical dose of ecstasy was associated with poorer performance on the event and short-term time-based p.m. tasks and accounted for unique variance in the two p.m. measures over and above the variance associated with cannabis and cocaine use. The typical ecstasy dose consumed in a single session is an important predictor of p.m. impairments with higher doses reflecting increasing tolerance giving rise to greater p.m. impairment.

  10. Dysphonia Severity Index in Typically Developing Indian Children.

    Science.gov (United States)

    Pebbili, Gopi Kishore; Kidwai, Juhi; Shabnam, Srushti

    2017-01-01

    Dysphonia is a variation in an individual's quality, pitch, or loudness from the voice characteristics typical of a speaker of similar age, gender, cultural background, and geographic location. The Dysphonia Severity Index (DSI) is a recognized assessment tool based on a weighted combination of an individual's maximum phonation time, highest frequency, lowest intensity, and jitter (%). Although dysphonia in adults is accurately evaluated using the DSI, standard reference values for school-age children have not been studied. This study aims to document DSI scores in typically developing children (8-12 years). A total of 42 typically developing children (8-12 years) without complaint of a voice problem on the day of testing participated in the study. The DSI was computed by substituting the raw scores of its constituent parameters: maximum phonation time, highest frequency, lowest intensity, and jitter%, using various modules of the CSL 4500 software. The average DSI values obtained were 2.9 (1.23) for males and 3.8 (1.29) for females. DSI values were found to be significantly higher (P = 0.027) for females than for males in Indian children. This could be attributed to anatomical and behavioral differences between females and males. Further, pubertal changes set in earlier in females, approximating adult-like physiology and thereby leading to higher DSI values. The mean DSI values obtained for male and female Indian children can be used as preliminary reference data against which the DSI values of school-age children with dysphonia can be compared. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
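
    The weighted combination mentioned above is commonly computed with the weighting published by Wuyts et al. (2000); the raw scores plugged in below are illustrative, not data from this study.

```python
def dysphonia_severity_index(mpt_s, f0_high_hz, i_low_db, jitter_pct):
    """DSI per the commonly cited weighting (Wuyts et al., 2000):
    DSI = 0.13*MPT + 0.0053*F0-High - 0.26*I-Low - 1.18*Jitter(%) + 12.4
    Lower (more negative) scores indicate more severe dysphonia."""
    return (0.13 * mpt_s + 0.0053 * f0_high_hz
            - 0.26 * i_low_db - 1.18 * jitter_pct + 12.4)

# Illustrative child-like raw scores (not data from the study):
# MPT 12 s, highest F0 900 Hz, lowest intensity 52 dB, jitter 0.8%
print(round(dysphonia_severity_index(12.0, 900.0, 52.0, 0.8), 2))
```

    The four raw measures are obtained with separate tasks (sustained phonation, pitch glides, soft phonation, and acoustic analysis), which is why the study computed them with different modules of the CSL software.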

  11. "I assumed that one was a placebo": exploring the consent process in a sham controlled acupressure trial.

    Science.gov (United States)

    Hughes, John Gareth; Russell, Wanda; Breckons, Matthew; Richardson, Janet; Lloyd-Williams, Mari; Molassiotis, Alex

    2014-10-01

    In clinical trials where participants are likely to be able to distinguish between true and sham interventions, informing participants that they may receive a sham intervention increases the likelihood of participants 'breaking the blind' and invalidating trial findings. The present study explored participants' perceptions of the consent process in a sham controlled acupressure trial that did not explicitly indicate participants might receive a sham intervention. Nested qualitative study within a randomised sham controlled trial of acupressure wristbands for chemotherapy-related nausea. A convenience sample of 26 patients participated in semi-structured interviews. Interviews were audio-recorded and transcribed verbatim. Transcripts were analysed thematically using framework analysis. The study was conducted at three geographical sites in the UK: Manchester, Liverpool, and Plymouth. All participants indicated that they believed they were fully informed when providing written consent to participate in the trial. Participants perceived it as acceptable to employ a sham intervention within the trial of acupressure wristbands without informing potential participants that they might receive a sham treatment. Although participants were not informed that one of the treatment arms was a sham intervention, the majority indicated they assumed one of the treatment arms would be a placebo. Many trials of acupuncture and acupressure do not inform participants that they may receive a sham intervention. The current study indicates patients perceive this approach to the consent process as acceptable. However, the fact that participants assume one treatment may be a placebo threatens the methodological basis for utilising this approach to the consent process. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Typicality Mediates Performance during Category Verification in Both Ad-Hoc and Well-Defined Categories

    Science.gov (United States)

    Sandberg, Chaleece; Sebastian, Rajani; Kiran, Swathi

    2012-01-01

    Background: The typicality effect is present in neurologically intact populations for natural, ad-hoc, and well-defined categories. Although sparse, there is evidence of typicality effects in persons with chronic stroke aphasia for natural and ad-hoc categories. However, it is unknown exactly what influences the typicality effect in this…

  13. Exposure-response relationship of typical and atypical antipsychotics assessed by the positive and negative syndrome scale (PANSS) and its subscales

    NARCIS (Netherlands)

    Pilla Reddy, Venkatesh; Suleiman, Ahmed; Kozielska, Magdalena; Johnson, Martin; Vermeulen, An; Liu, Jing; de Greef, Rik; Groothuis, Genoveva; Danhof, Meindert; Proost, Johannes

    2011-01-01

    Objectives: It has been suggested that atypical antipsychotics (ATAPs), are more effective towards negative symptoms than typical antipsychotics (TAPs) in schizophrenic patients.[1,2] To quantify the above statement, we aimed i) to develop a PK-PD model that characterizes the time course of PANSS

  14. Using a latent variable model with non-constant factor loadings to examine PM2.5 constituents related to secondary inorganic aerosols.

    Science.gov (United States)

    Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N

    2016-04-01

    Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, in our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model with non-constant factor loadings that change over time and space, represented by P-splines with smoothing penalized via the generalized cross-validation (GCV) criterion. The model is implemented using the Expectation-Maximization (EM) algorithm, and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.
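
    The GCV-based choice of smoothing parameter used above can be illustrated with the basis-free special case of a P-spline, the Whittaker smoother (identity basis plus a second-difference penalty), on synthetic data. This is a sketch of the criterion only, not the paper's EM algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)   # synthetic signal + noise

D = np.diff(np.eye(n), 2, axis=0)   # second-difference penalty matrix
I = np.eye(n)

def gcv(lam):
    # Penalized fit z = (I + lam*D'D)^-1 y; H is the hat matrix
    H = np.linalg.inv(I + lam * D.T @ D)
    z = H @ y
    rss = np.sum((y - z) ** 2)
    edf = np.trace(H)               # effective degrees of freedom
    return n * rss / (n - edf) ** 2

# Grid search over smoothing parameters; pick the GCV minimizer
lams = 10.0 ** np.arange(-2, 6)
best = min(lams, key=gcv)
z = np.linalg.solve(I + best * D.T @ D, y)
mse = float(np.mean((z - np.sin(2 * np.pi * x)) ** 2))
print(best, round(mse, 3))
```

    The paper minimizes the same style of criterion with Newton's method inside each EM iteration rather than by grid search.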

  15. Narrative versus Style: Effect of Genre Typical Events versus Genre Typical Filmic Realizations on Film Viewers' Genre Recognition

    OpenAIRE

    Visch, V.; Tan, E.

    2008-01-01

    This study investigated whether film viewers recognize four basic genres (comic, drama, action and nonfiction) on the basis of genre-typical event cues or genre-typical filmic realization cues of events. Event cues correspond to the narrative content of a film sequence, while filmic realization cues correspond to stylistic surface cues of a film sequence. It was predicted that genre recognition of short film fragments is cued more by filmic realization cues than by event cues. The results...

  16. Conformally flat tilted Bianchi Type-V cosmological models in ...

    Indian Academy of Sciences (India)

    the complete determination of these quantities, we assume two extra conditions. First we assume that the space-time is conformally flat, which leads to ... Discussions: The model starts expanding with a big-bang at t = 0 and the expansion in the model stops at t = ∞. The model in general represents ...

  17. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
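
    The bagging construction mentioned above can be sketched in a generic regression setting: bootstrap prediction averages the plug-in prediction over bootstrap resamples of the data. The data and model below are synthetic, a sketch rather than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 60)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, 60)   # true line y = 1 + 2x, noisy

def plugin_fit(xs, ys):
    """Plug-in prediction: fit by least squares, predict with the estimate."""
    return np.polyfit(xs, ys, 1)              # returns (slope, intercept)

def bagged_predict(x0, B=500):
    """Bootstrap (bagged) prediction: average the plug-in prediction
    over B bootstrap resamples (Breiman's bagging)."""
    preds = []
    for _ in range(B):
        idx = rng.integers(0, len(x), len(x))
        m, b = plugin_fit(x[idx], y[idx])
        preds.append(m * x0 + b)
    return float(np.mean(preds))

m, b = plugin_fit(x, y)
plug = m * 1.0 + b
bag = bagged_predict(1.0)
print(round(plug, 2), round(bag, 2))
```

    For a well-specified linear model the two predictions nearly coincide; the interest of the paper is in how they diverge when the assumed model is wrong.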

  18. The allele-frequency spectrum in a decoupled Moran model with mutation, drift, and directional selection, assuming small mutation rates.

    Science.gov (United States)

    Vogl, Claus; Clemente, Florian

    2012-05-01

    We analyze a decoupled Moran model with haploid population size N, a biallelic locus under mutation and drift with scaled forward and backward mutation rates θ(1)=μ(1)N and θ(0)=μ(0)N, and directional selection with scaled strength γ=sN. With small scaled mutation rates θ(0) and θ(1), which is appropriate for single nucleotide polymorphism data in highly recombining regions, we derive a simple approximate equilibrium distribution for polymorphic alleles with a constant of proportionality. We also put forth an even simpler model, where all mutations originate from monomorphic states. Using this model we derive the sojourn times, conditional on the ancestral and fixed allele, and under equilibrium the distributions of fixed and polymorphic alleles and fixation rates. Furthermore, we also derive the distribution of small samples in the diffusion limit and provide convenient recurrence relations for calculating this distribution. This enables us to give formulas analogous to the Ewens-Watterson estimator of θ for biased mutation rates and selection. We apply this theory to a polymorphism dataset of fourfold degenerate sites in Drosophila melanogaster. Copyright © 2012 Elsevier Inc. All rights reserved.
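
    A minimal forward simulation of the model class discussed above is sketched below. For simplicity, mutation is coupled to the birth step (a standard Moran scheme rather than the paper's decoupled variant), and all parameter values are illustrative.

```python
import random

random.seed(7)
N = 50                      # haploid population size
mu1 = mu0 = 0.01            # forward/backward mutation rates (theta = mu*N = 0.5)
s = 0.1                     # selective advantage of allele 1 (gamma = s*N = 5)

i = 0                       # current copies of allele 1
counts = [0] * (N + 1)      # time spent at each allele count
for _ in range(400_000):
    x = i / N
    # Birth: choose the parent allele with selection, then mutate the copy
    p1 = (1 + s) * x / ((1 + s) * x + (1 - x))
    child = 1 if random.random() < p1 else 0
    if child == 1 and random.random() < mu0:
        child = 0
    elif child == 0 and random.random() < mu1:
        child = 1
    # Death: remove a uniformly chosen individual
    dead = 1 if random.random() < x else 0
    i += child - dead
    counts[i] += 1

total = sum(counts)
spectrum = [c / total for c in counts]
# Positive gamma biases the chain toward fixation of the favoured allele
print(round(spectrum[N], 3), round(spectrum[0], 3))
```

    With small scaled mutation rates the chain spends most of its time at or near the monomorphic boundaries, which is the regime in which the paper's approximation (all mutations originating from monomorphic states) applies.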

  19. Effects of Biofeedback on Control and Generalization of Nasalization in Typical Speakers

    Science.gov (United States)

    Murray, Elizabeth S. Heller; Mendoza, Joseph O.; Gill, Simone V.; Perkell, Joseph S.; Stepp, Cara E.

    2016-01-01

    Purpose: The purpose of this study was to determine the effects of biofeedback on control of nasalization in individuals with typical speech. Method: Forty-eight individuals with typical speech attempted to increase and decrease vowel nasalization. During training, stimuli consisted of consonant-vowel-consonant (CVC) tokens with the center vowels…

  20. Ecosystem responses to warming and watering in typical and desert steppes

    OpenAIRE

    Zhenzhu Xu; Yanhui Hou; Lihua Zhang; Tao Liu; Guangsheng Zhou

    2016-01-01

    Global warming is projected to continue, leading to intense fluctuations in precipitation and heat waves and thereby affecting the productivity and the relevant biological processes of grassland ecosystems. Here, we determined the functional responses to warming and altered precipitation in both typical and desert steppes. The results showed that watering markedly increased the aboveground net primary productivity (ANPP) in a typical steppe during a drier year and in a desert steppe over two ...

  1. Call Admission Scheme for Multidimensional Traffic Assuming Finite Handoff User

    Directory of Open Access Journals (Sweden)

    Md. Baitul Al Sadi

    2017-01-01

    Usually, the number of users within a cell of a mobile cellular network is considered infinite, so the M/M/n/k model is appropriate for newly originated traffic; the number of ongoing calls around a cell, however, is always finite, so handoff traffic follows an M/M/n/k/N model. In this paper, a K-dimensional traffic model of a mobile cellular network is proposed, combining the limited- and unlimited-user cases. A new call admission scheme (CAS) is proposed based on both a thinning scheme and the fading condition: under fading, handoff calls are given priority over newly originated calls in accessing the wireless channel.
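
    The M/M/n/k model named above has a standard truncated steady-state distribution from which the blocking probability follows directly; the finite-source M/M/n/k/N handoff variant is handled by the same truncation with state-dependent arrival rates. A generic sketch of the infinite-user case:

```python
from math import factorial

def mmnk_blocking(a: float, n: int, k: int) -> float:
    """Blocking probability of an M/M/n/k queue with offered load
    a = lambda/mu, n servers, and total capacity k >= n. Steady-state
    probabilities: p_j ~ a^j/j! for j <= n, and a^j/(n! n^(j-n)) for
    j > n; an arrival is blocked when all k positions are occupied."""
    probs = []
    for j in range(k + 1):
        if j <= n:
            probs.append(a ** j / factorial(j))
        else:
            probs.append(a ** j / (factorial(n) * n ** (j - n)))
    return probs[k] / sum(probs)

print(round(mmnk_blocking(1.0, 1, 1), 3))   # Erlang-B special case (k = n)
print(round(mmnk_blocking(5.0, 5, 10), 3))  # 5 servers, 5 waiting positions
```

    Setting k = n recovers the Erlang-B loss formula; increasing k (adding waiting room) lowers the blocking probability at the cost of queueing delay, which is the trade-off a CAS with thinning manipulates per traffic class.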

  2. South American Youth and Integration : Typical Situations and Youth ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    South American Youth and Integration : Typical Situations and Youth ... IDRC partner the World Economic Forum is building a hub for inclusive growth ... Brazil, Paraguay and Uruguay) and their perception of rights, democracy and regional.

  3. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NARCIS (Netherlands)

    Zhu, W.; Timmermans, H.J.P.

    2011-01-01

    Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may

  4. Optimality models in the age of experimental evolution and genomics.

    Science.gov (United States)

    Bull, J J; Wang, I-N

    2010-09-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimental context with a well-researched organism allows dissection of the evolutionary process to identify causes of model failure--whether the model is wrong about genetics or selection. Second, optimality models provide a meaningful context for the process and mechanics of evolution, and thus may be used to elicit realistic genetic bases of adaptation--an especially useful augmentation to well-researched genetic systems. A few studies of microbes have begun to pioneer this new direction. Incompatibility between the assumed and actual genetics has been demonstrated to be the cause of model failure in some cases. More interestingly, evolution at the phenotypic level has sometimes matched prediction even though the adaptive mutations defy mechanisms established by decades of classic genetic studies. Integration of experimental evolutionary tests with genetics heralds a new wave for optimality models and their extensions that does not merely emphasize the forces driving evolution.

  5. A time-dependent anisotropic plasma chemistry model of the Io plasma torus

    Science.gov (United States)

    Arridge, C. S.

    2016-12-01

    The physics of the Io plasma torus is typically modelled using one-box neutral-plasma chemistry models, often referred to as neutral cloud theory models (e.g., Barbosa 1994; Delamere and Bagenal 2003). These models incorporate electron impact and photoionisation, charge exchange, molecular dissociation/recombination reactions, atomic radiative losses, and Coulomb collisional heating. Isotropic Maxwellian distributions are usually assumed in the implementation of these models. Observationally, a population of suprathermal electrons has been identified in the plasma torus, and theoretically these electrons have been shown to be important in reproducing the observed ionisation balance in the torus (e.g., Barbosa 1994). In this paper we describe an anisotropic plasma chemistry model for the Io torus that is inspired by ion cyclotron wave observations (Huddleston et al. 1994; Leisner et al. 2011), ion anisotropies due to pickup (Wilson et al. 2008), and theoretical ideas on the maintenance of the suprathermal electron population (Barbosa 1994). We present both steady-state calculations and time-varying solutions (e.g., Delamere et al. 2004) in which increases in the neutral source rate in the torus generate perturbations in ion anisotropies that subsequently decay over a timescale much longer than the duration of the initial perturbation. We also present a method for incorporating uncertainties in reaction rates into the model.
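
    As a loose illustration of the one-box chemistry model class described above (not the paper's anisotropic model), the sketch below integrates a toy neutral-ion balance with invented rates: neutrals supplied at a constant rate, ionized by electron impact under quasi-neutrality, and ions lost by radial transport.

```python
# Illustrative one-box chemistry sketch (toy rates, not Io torus values):
# neutrals supplied at rate S, ionized by electron impact (k * n_e * n_n),
# ions lost by radial transport with timescale tau. Quasi-neutrality: n_e = n_i.
S   = 1.0e-3      # neutral source rate (cm^-3 s^-1)
k   = 1.0e-8      # ionization rate coefficient (cm^3 s^-1)
tau = 5.0e6       # transport loss timescale (s)

n_n, n_i = 1.0, 1.0          # initial densities (cm^-3)
dt = 1.0e3                   # Euler step (s)
for _ in range(200_000):
    ionization = k * n_i * n_n       # electron impact, with n_e = n_i
    n_n += (S - ionization) * dt
    n_i += (ionization - n_i / tau) * dt

# Steady state: ionization balances both supply and transport,
# so n_i -> S*tau and n_n -> 1/(k*tau)
print(round(n_i), round(n_n))
```

    The time-varying solutions in the paper correspond to perturbing S and following the relaxation back toward such an equilibrium, with the extra anisotropy variables evolving on their own timescales.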

  6. Bias correction in species distribution models: pooling survey and collection data for multiple species.

    Science.gov (United States)

    Fithian, William; Elith, Jane; Hastie, Trevor; Keith, David A

    2015-04-01

    Presence-only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence-absence or count data collected in systematic, planned surveys are more reliable but typically less abundant. We propose a probabilistic model that allows joint analysis of presence-only and survey data to exploit their complementary strengths. Our method pools presence-only and presence-absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence-only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence-only data. We evaluate our model's performance on data for 36 eucalypt species in south-eastern Australia. We find that presence-only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data-pooling technique substantially improves the out-of-sample predictive performance of our model when the amount of available presence-absence data for a given species is scarce. If we have only presence-only data and no presence-absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range.
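The shared-bias idea can be illustrated with a small numerical sketch: two species, each with its own intercept and environmental response, plus one bias coefficient `c` estimated jointly from both. This is a simplified Poisson-regression stand-in for the authors' joint likelihood (which additionally combines presence-absence data); the covariates and parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, S = 200, 2
x = rng.uniform(-1, 1, n)   # environmental covariate
d = rng.uniform(-1, 1, n)   # sampling-bias covariate (e.g. distance to a city)

# true parameters: species-specific intercepts/slopes, one SHARED bias coefficient
a_true = np.array([0.2, -0.3]); b_true = np.array([0.8, -0.5]); c_true = -0.7
eta = a_true[:, None] + b_true[:, None] * x + c_true * d
y = np.exp(eta)             # expected presence-only counts (keeps the demo deterministic)

def fit_shared_bias(y, x, d, iters=20000, lr=0.01):
    """Maximise the joint Poisson log-likelihood by gradient ascent,
    sharing the bias coefficient c across all species."""
    a = np.zeros(S); b = np.zeros(S); c = 0.0
    for _ in range(iters):
        r = y - np.exp(a[:, None] + b[:, None] * x + c * d)  # Poisson score residuals
        a += lr * r.mean(axis=1)
        b += lr * (r * x).mean(axis=1)
        c += lr * (r * d).mean()   # pooled update: borrows strength across species
    return a, b, c

a_hat, b_hat, c_hat = fit_shared_bias(y, x, d)
```

Because `c` is updated from both species' residuals at once, data-rich species help pin down the bias that corrupts the data-poor ones, which is the mechanism the abstract describes.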

  7. Generalized indices of a typical individual water-heating solar plant in the climatic conditions of Russia different regions

    International Nuclear Information System (INIS)

    Popel', O.S.; Frid, S.E.; Shpil'rajn, Eh.Eh.

    2003-01-01

    Taking as an example a typical solar water-heating plant (SWP) designed for a daily consumption of 100 l of heated water, the number of days in the year is calculated during which such a plant can heat the water to no less than the assigned control levels of 37, 45 and 55 deg C, for various ratios between the solar collector area and the storage-tank volume. Generalized dependences are obtained by processing the results of dynamic SWP modeling using typical meteorological years generated for the climatic conditions of more than 40 populated localities in both the European and Asian parts of Russia. These dependences can be used to determine the efficiency of SWP operation in different regions of the country.

  8. Ballistic Characterization Of A Typical Military Steel Helmet

    Directory of Open Access Journals (Sweden)

    Mohamed Ali Maher

    2017-08-01

    Full Text Available In this study the ballistic limit of a steel helmet against a FMJ 9×19 mm bullet is estimated. The helmet model is the typical Polish helmet wz.31. The helmet material is a high-strength low-alloy steel with 0.28% carbon content and an areal density of 9.125 kg/m2. A tensile test according to ASTM E8 showed a tensile strength of 1236.4 MPa, and the average hardness value was about HV550. The first shooting experiment was executed using a 9 mm pistol with a muzzle velocity of 350 m/s, fired at 5 m against the simply supported helmet; complete penetrations in this test took the form of cracks on the helmet surface, and partial penetrations took the form of craters on the surface whose largest diameter and depth were 43 mm and 20.2 mm, respectively. In the second experiment, on a rifled-gun arrangement, 13 bullets of 9×19 mm caliber were shot at the examined simply supported steel helmet at zero obliquity at different velocities to determine the ballistic limit velocity V50 according to MIL-STD-662F. Three major outcomes were revealed: (1) the value of V50, found to be about 390 m/s, is higher than the one found in the literature for the German steel helmet model 1A1 (360 m/s); (2) the smaller the standard deviation of the mixed-results-zone data, the more accurate the ballistic limit; (3) similar to the performance of blunt-ended projectiles impacting overmatching targets (t/D near 1 or larger), the dominating failure mode of the steel helmet struck by a hemispherical-nose projectile was found to be plugging, despite a t/D ratio of about 1/9 (undermatching).
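A V50 estimate in the MIL-STD-662F style is the arithmetic mean of an equal number of the highest partial-penetration and lowest complete-penetration velocities (within a specified velocity spread). The helper below and its example velocities are hypothetical, not the test data from this study:

```python
def v50(partials, completes, k=3):
    """V50 sketch in the MIL-STD-662F style: mean of the k highest
    partial-penetration and the k lowest complete-penetration velocities (m/s).
    The standard also constrains the velocity spread of the shots used."""
    highest_partials = sorted(partials)[-k:]
    lowest_completes = sorted(completes)[:k]
    return sum(highest_partials + lowest_completes) / (2 * k)
```

For instance, `v50([355, 362, 370, 381, 385, 388], [383, 389, 393, 398, 401])` averages 381, 385, 388 with 383, 389, 393 and returns 386.5 m/s; note that here the mixed-results zone overlaps, which is exactly where the standard deviation mentioned in outcome (2) matters.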

  9. Indoor PM2.5 exposure in London's domestic stock: Modelling current and future exposures following energy efficient refurbishment

    Science.gov (United States)

    Shrubsole, C.; Ridley, I.; Biddulph, P.; Milner, J.; Vardoulakis, S.; Ucci, M.; Wilkinson, P.; Chalabi, Z.; Davies, M.

    2012-12-01

    Simulations using CONTAM (a validated multi-zone indoor air quality (IAQ) model) are employed to predict indoor exposure to PM2.5 in London dwellings in both the present day housing stock and the same stock following energy efficient refurbishments to meet greenhouse gas emissions reduction targets for 2050. We modelled interventions that would contribute to the achievement of these targets by reducing the permeability of the dwellings to 3 m3 m-2 h-1 at 50 Pa, combined with the introduction of mechanical ventilation and heat recovery (MVHR) systems. It is assumed that the current mean outdoor PM2.5 concentration of 13 μg m-3 decreases to 9 μg m-3 by 2050 due to emission control policies. Our primary finding was that installation of (assumed perfectly functioning) MVHR systems with permeability reduction is associated with appreciable reductions in PM2.5 exposure in both smoking and non-smoking dwellings. Modelling of the future scenario for non-smoking dwellings shows a reduction in annual average indoor exposure to PM2.5 of 18.8 μg m-3 (from 28.4 to 9.6 μg m-3) for a typical household member. Also of interest is that a larger reduction of 42.6 μg m-3 (from 60.5 to 17.9 μg m-3) was shown for members exposed primarily to cooking-related particle emissions in the kitchen (cooks). Reductions in envelope permeability without mechanical ventilation produced increases in indoor PM2.5 concentrations; 5.4 μg m-3 for typical household members and 9.8 μg m-3 for cooks. These estimates of changes in PM2.5 exposure are sensitive to assumptions about occupant behaviour, ventilation system usage and the distributions of input variables (±72% for non-smoking and ±107% in smoking residences). However, if realised, they would result in significant health benefits.
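The direction of these results can be reproduced with a single-zone steady-state mass balance, which is far simpler than the multi-zone CONTAM model used in the study; the penetration factor, deposition rate, source strength and room volume below are illustrative assumptions, not the study's inputs:

```python
def steady_indoor_pm25(c_out, ach, penetration=0.8, k_dep=0.2,
                       source_ug_per_h=0.0, volume_m3=250.0):
    """Steady-state single-zone mass balance (illustrative parameters):
    C_in = (P * ACH * C_out + S/V) / (ACH + k_dep),
    with concentrations in ug/m3 and ACH / deposition rates in 1/h."""
    return (penetration * ach * c_out + source_ug_per_h / volume_m3) / (ach + k_dep)
```

With an outdoor level of 13 μg m-3 and a cooking source present, lowering the air-change rate (a tighter envelope with no mechanical ventilation) raises the indoor steady-state concentration because the indoor source is diluted less, mirroring the increases reported above; without indoor sources, lower effective penetration of outdoor particles reduces exposure, as with the assumed MVHR systems.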

  10. Analytical Model for Hook Anchor Pull-Out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, Jens Peder; Adamsen, Peter

    1995-01-01

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...... allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results, it predicts size effects and the model parameters found by calibration of the model on experimental data are in good agreement with what should...

  11. Analytical Model for Hook Anchor Pull-out

    DEFF Research Database (Denmark)

    Brincker, Rune; Ulfkjær, J. P.; Adamsen, P.

    A simple analytical model for the pull-out of a hook anchor is presented. The model is based on a simplified version of the fictitious crack model. It is assumed that the fracture process is the pull-off of a cone shaped concrete part, simplifying the problem by assuming pure rigid body motions...... allowing elastic deformations only in a layer between the pull-out cone and the concrete base. The derived model is in good agreement with experimental results, it predicts size effects and the model parameters found by calibration of the model on experimental data are in good agreement with what should...

  12. Metabolic disorders with typical alterations in MRI

    International Nuclear Information System (INIS)

    Warmuth-Metz, M.

    2010-01-01

    The classification of metabolic disorders according to the etiology is not practical for neuroradiological purposes because the underlying defect does not uniformly transform into morphological characteristics. Therefore typical MR and clinical features of some easily identifiable metabolic disorders are presented. Canavan disease, Pelizaeus-Merzbacher disease, Alexander disease, X-chromosomal adrenoleukodystrophy and adrenomyeloneuropathy, mitochondrial disorders, such as MELAS (mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes) and Leigh syndrome as well as L-2-hydroxyglutaric aciduria are presented. (orig.) [de

  13. Preparing for creative responses to “beyond assumed level” disasters: lessons from the ICT management in the 2011 Great East Japan earthquake crisis

    Directory of Open Access Journals (Sweden)

    Mihoko Sakurai

    2012-12-01

    Full Text Available A survey of municipal government ICT divisions during and after the 2011 Great East Japan Earthquake and Tsunami crisis reveals the need for creative responses to “beyond assumed level” disasters. The complexity and diversity of the damage were simply too great for any plan to have assumed. Residents' needs toward the municipal governments were also diverse and changed quickly as time went by. The research also indicates that there are ways to strengthen the capabilities to execute such spontaneous responses. Creative solutions executed during the 3.11 crisis were supported by the existence of open-source software available on the net and by skilled engineers capable of exploiting it. Frugal information systems will be useful to improve preparedness for creative responses.

  14. Comparative analysis of steady state heat transfer in a TBC and ...

    Indian Academy of Sciences (India)

    C, corrosion problems assume significance and protective coatings ... heat transfer studies for FGMs have been conducted by various researchers .... Figure 2. FGM model of turbine blade with profile of NACA0012 airfoil. .... A series of simulation runs for the FEM model for typical gas turbine ..... Intl. J. Heat and Fluid Flow.

  15. Distinct element modelling of joint behavior in nearfield rock

    International Nuclear Information System (INIS)

    Hoekmark, H.; Israelsson, J.

    1991-09-01

    The investigation reported here concerns numerical simulations of the behaviour of the jointed rock mass in the nearest surroundings of a portion of a KBS3-type tunnel, including one deposition hole. Results from three-dimensional models are presented and compared to results obtained from previous investigations of two-dimensional models. The three-dimensional models and the previous two-dimensional models relate to conditions prevailing in and around the BMT drift in the Stripa mine. In particular, the importance of conditions implicitly assumed in two-dimensional models, regarding joint orientation and joint persistence, is investigated. The evaluation of the results is focused on effects on joint apertures. The implications regarding rock permeability are discussed for a couple of cases. It is found that the real three-dimensional geometry is of great importance, and that the two-dimensional models in some cases tend to considerably overestimate the magnitudes of inelastic joint displacements and the associated aperture changes, i.e. the real three-dimensional situation implies locking effects that generally stabilize the block assembly. It is recommended that further three-dimensional simulations be performed to determine relevant ranges of alteration of fracture apertures caused by excavation and thermal processes, and that fracture geometries typical of virgin granitic rock be defined and used as input for these simulations. (au)

  16. The non-typical MRI findings of the branchial cleft cysts

    International Nuclear Information System (INIS)

    Hu Chunhong; Wu Qingde; Yao Xuanjun; Chen Jie; Zhu Wei; Chen Jianhua; Xing Jianming; Ding Yi; Ge Zili

    2006-01-01

    Objective: To investigate the non-typical MRI findings of branchial cleft cysts in order to improve their diagnosis. Methods: 10 cases of branchial cleft cysts proven by surgery and pathology were collected and their MRI features were analyzed. There were 6 males and 4 females, aged 15 to 70, with an average age of 37. All patients underwent plain MR scanning, 6 patients underwent enhanced scanning, and 4 patients underwent magnetic resonance angiography. Results: All 10 cases were second branchial cleft cysts, including 4 of Bailey type I and 6 of type II. The non-typical MRI findings comprised haematocele (2 cases), extraordinarily thick cyst wall (4 cases), solidified cystic fluid (2 cases), and concomitant canceration (2 cases), which made the diagnoses more difficult. Conclusion: The diagnosis of branchial cleft cysts with non-typical MRI features should be combined with their characteristic location at the lateral portion of the neck, adjacent to the anterior border of the sternocleidomastoid muscle at the mandibular angle. Findings such as a thickened wall, an ill-defined margin, and vascular involvement or jugular lymphadenectasis strongly suggest a cancerous tendency. (authors)

  17. A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA

    NARCIS (Netherlands)

    WEDEL, M; DESARBO, WS; BULT, [No Value; RAMASWAMY, [No Value

    1993-01-01

    In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing
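The latent-class idea can be sketched with an intercept-only two-class Poisson mixture fitted by EM; the full model in the abstract would replace each class rate with a Poisson regression on covariates. The counts and the class number below are made up for illustration:

```python
import math

def em_poisson_mixture(counts, iters=200):
    """EM for a two-class Poisson mixture (intercept-only sketch of the
    latent-class approach; each class would be a full Poisson regression
    in the model described above)."""
    lam = [min(counts) + 0.5, max(counts) + 0.5]  # initial class rates
    pi = [0.5, 0.5]                               # initial mixing weights
    for _ in range(iters):
        # E-step: posterior probability of each class for each observation
        resp = []
        for y in counts:
            w = [pi[k] * math.exp(-lam[k]) * lam[k] ** y / math.factorial(y)
                 for k in range(2)]
            z = sum(w)
            resp.append([wk / z for wk in w])
        # M-step: update mixing weights and class rates from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(counts)
            lam[k] = sum(r[k] * y for r, y in zip(resp, counts)) / nk
    return pi, lam
```

On a sample mixing low counts (mostly 0-2) with high counts (8-11), the algorithm recovers two well-separated class rates and their mixing proportions, which is the heterogeneity structure the model is designed to capture.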

  18. Physical characteristics and resistance parameters of typical urban cyclists.

    Science.gov (United States)

    Tengattini, Simone; Bigazzi, Alexander York

    2018-03-30

    This study investigates the rolling and drag resistance parameters and bicycle and cargo masses of typical urban cyclists. These factors are important for modelling of cyclist speed, power and energy expenditure, with applications including exercise performance, health and safety assessments and transportation network analysis. However, representative values for diverse urban travellers have not been established. Resistance parameters were measured utilizing a field coast-down test for 557 intercepted cyclists in Vancouver, Canada. Masses were also measured, along with other bicycle attributes such as tire pressure and size. The average (standard deviation) of coefficient of rolling resistance, effective frontal area, bicycle plus cargo mass, and bicycle-only mass were 0.0077 (0.0036), 0.559 (0.170) m2, 18.3 (4.1) kg, and 13.7 (3.3) kg, respectively. The range of measured values is wider and higher than suggested in existing literature, which focusses on sport cyclists. Significant correlations are identified between resistance parameters and rider and bicycle attributes, indicating higher resistance parameters for less sport-oriented cyclists. The findings of this study are important for appropriately characterising the full range of urban cyclists, including commuters and casual riders.
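The coast-down analysis can be sketched as follows: with no pedalling, deceleration is regressed on speed squared, and the intercept gives the rolling term (Crr) while the slope gives the aerodynamic term (CdA). The masses, parameter values and noiseless synthetic record below are assumptions for illustration; the field procedure also has to handle slope, wind and measurement noise:

```python
import numpy as np

def fit_resistance(t, v, mass, rho=1.2, g=9.81):
    """Estimate Crr and CdA from a coast-down record by regressing
    deceleration on v^2:  -dv/dt = g*Crr + (rho*CdA / (2*mass)) * v^2.
    A sketch only; real tests correct for grade and wind."""
    a = -np.gradient(v, t)            # deceleration, m/s^2
    c1, c0 = np.polyfit(v ** 2, a, 1) # fit a = c0 + c1*v^2
    return c0 / g, 2 * mass * c1 / rho  # (Crr, CdA)

# synthetic coast-down with Crr = 0.008, CdA = 0.6 m^2, total mass 90 kg
mass, crr_true, cda_true = 90.0, 0.008, 0.6
t = np.linspace(0.0, 30.0, 301)
v = np.empty_like(t); v[0] = 8.0      # initial speed, m/s
for i in range(1, len(t)):
    dec = 9.81 * crr_true + 1.2 * cda_true * v[i - 1] ** 2 / (2 * mass)
    v[i] = v[i - 1] - dec * (t[i] - t[i - 1])

crr_hat, cda_hat = fit_resistance(t, v, mass)
```

The recovered values land close to the true ones; with field data, averaging runs in opposite directions is the usual way to cancel wind and grade.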

  19. DIGA/NSL new calculational model in slab geometry

    International Nuclear Information System (INIS)

    Makai, M.; Gado, J.; Kereszturi, A.

    1987-04-01

    A new calculational model is presented based on a modified finite-difference algorithm, in which the coefficients are determined by means of the so-called gamma matrices. The DIGA program determines the gamma matrices and the NSL program realizes the modified finite difference model. Both programs assume slab cell geometry, DIGA assumes 2 energy groups and 3 diffusive regions. The DIGA/NSL programs serve to study the new calculational model. (author)

  20. Determination of illuminants representing typical white light emitting diodes sources

    DEFF Research Database (Denmark)

    Jost, S.; Ngo, M.; Ferrero, A.

    2017-01-01

    Solid-state lighting (SSL) products are already in use by consumers and are rapidly gaining the lighting market. Especially, white Light Emitting Diode (LED) sources are replacing banned incandescent lamps and other lighting technologies in most general lighting applications. The aim of this work is to develop LED-based illuminants that describe typical white LED products based on their Spectral Power Distributions (SPDs). Some of these new illuminants will be recommended in the update of the CIE publication 15 on colorimetry alongside the other typical illuminants, and among them, some could be used to complement the CIE standard illuminant A for calibration use in photometry.

  1. Contribution of parenting to complex syntax development in preschool children with developmental delays or typical development.

    Science.gov (United States)

    Moody, C T; Baker, B L; Blacher, J

    2018-05-10

    Despite studies of how parent-child interactions relate to early child language development, few have examined the continued contribution of parenting to more complex language skills through the preschool years. The current study explored how positive and negative parenting behaviours relate to growth in complex syntax learning from child age 3 to age 4 years, for children with typical development or developmental delays (DDs). Participants were children with or without DD (N = 60) participating in a longitudinal study of development. Parent-child interactions were transcribed and coded for parenting domains and child language. Multiple regression analyses were used to identify the contribution of parenting to complex syntax growth in children with typical development or DD. Analyses supported a final model, F(9,50) = 11.90, P < .001, including a significant three-way interaction between positive parenting behaviours, negative parenting behaviours and child delay status. This model explained 68.16% of the variance in children's complex syntax at age 4. Simple two-way interactions indicated differing effects of parenting variables for children with or without DD. Results have implications for understanding of complex syntax acquisition in young children, as well as implications for interventions. © 2018 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  2. Operator spin foam models

    International Nuclear Information System (INIS)

    Bahr, Benjamin; Hellmann, Frank; Kaminski, Wojciech; Kisielowski, Marcin; Lewandowski, Jerzy

    2011-01-01

    The goal of this paper is to introduce a systematic approach to spin foams. We define operator spin foams, that is foams labelled by group representations and operators, as our main tool. A set of moves we define in the set of the operator spin foams (among other operations) allows us to split the faces and the edges of the foams. We assign to each operator spin foam a contracted operator, by using the contractions at the vertices and suitably adjusted face amplitudes. The emergence of the face amplitudes is the consequence of assuming the invariance of the contracted operator with respect to the moves. Next, we define spin foam models and consider the class of models assumed to be symmetric with respect to the moves we have introduced, and assuming their partition functions (state sums) are defined by the contracted operators. Briefly speaking, those operator spin foam models are invariant with respect to the cellular decomposition, and are sensitive only to the topology and colouring of the foam. Imposing an extra symmetry leads to a family we call natural operator spin foam models. This symmetry, combined with assumed invariance with respect to the edge splitting move, determines a complete characterization of a general natural model. It can be obtained by applying arbitrary (quantum) constraints on an arbitrary BF spin foam model. In particular, imposing suitable constraints on a spin(4) BF spin foam model is exactly the way we tend to view 4D quantum gravity, starting with the BC model and continuing with the Engle-Pereira-Rovelli-Livine (EPRL) or Freidel-Krasnov (FK) models. That makes our framework directly applicable to those models. Specifically, our operator spin foam framework can be translated into the language of spin foams and partition functions. Among our natural spin foam models there are the BF spin foam model, the BC model, and a model corresponding to the EPRL intertwiners. 
Our operator spin foam framework can also be used for more general spin

  3. [Research on developing the spectral dataset for Dunhuang typical colors based on color constancy].

    Science.gov (United States)

    Liu, Qiang; Wan, Xiao-Xia; Liu, Zhen; Li, Chan; Liang, Jin-Xing

    2013-11-01

    The present paper aims at developing a method to reasonably set up a typical spectral color dataset for different kinds of Chinese cultural heritage in the color rendering process. The world-famous wall paintings dating from more than 1700 years ago in the Dunhuang Mogao Grottoes were taken as the typical case in this research. In order to maintain color constancy during the color rendering workflow for Dunhuang cultural relics, a chromatic-adaptation-based method for developing the spectral dataset of typical colors for those wall paintings was proposed from the viewpoint of human visual perception. With the help and guidance of researchers in the art-research and protection-research institutions of the Dunhuang Academy, and according to the existing research achievements of Dunhuang studies in past years, 48 typical known Dunhuang pigments were chosen and 240 representative color samples were made, whose reflective spectra ranging from 360 to 750 nm were acquired by a spectrometer. In order to find the typical colors among the above-mentioned color samples, the original dataset was divided into several subgroups by clustering analysis. The grouping number, together with the most typical samples for each subgroup which made up the firstly built typical color dataset, was determined by the Wilcoxon signed rank test according to the color inconstancy index comprehensively calculated under 6 typical illuminating conditions. Considering the completeness of the gamut of the Dunhuang wall paintings, 8 complementary colors were determined and finally the typical spectral color dataset was built up, containing 100 representative spectral colors. The analytical results show that the median color inconstancy index of the built dataset at the 99% confidence level by the Wilcoxon signed rank test was 3.28 and the 100 colors are distributed uniformly in the whole gamut, which ensures that this dataset can provide a reasonable reference for choosing the color with highest
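The cluster-then-select step can be sketched as k-means over reflectance spectra followed by picking the sample nearest each centroid. This is only an illustration: the paper's actual grouping uses the color inconstancy index and a Wilcoxon signed rank test rather than plain Euclidean distance, and the synthetic spectra and cluster count below are made up:

```python
import numpy as np

def typical_spectra(spectra, k, iters=50):
    """Cluster reflectance spectra (rows) with k-means, using deterministic
    farthest-point seeding, and return for each cluster the index of the
    sample closest to its centroid: a sketch of 'typical colour' selection."""
    centroids = [spectra[0].astype(float)]
    for _ in range(k - 1):  # farthest-point seeding: one seed per distinct group
        dmin = np.min([((spectra - c) ** 2).sum(1) for c in centroids], axis=0)
        centroids.append(spectra[dmin.argmax()].astype(float))
    centroids = np.array(centroids)
    for _ in range(iters):
        dist = ((spectra[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = spectra[labels == j].mean(0)
    reps = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        reps.append(int(idx[((spectra[idx] - centroids[j]) ** 2).sum(1).argmin()]))
    return sorted(reps)
```

Selecting a real measured sample per cluster (rather than the centroid itself) mirrors the paper's choice of actual pigment samples as the dataset entries.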

  4. Effects of an assumed cosmic ray-modulated low global cloud cover on the Earth's temperature

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez, J.; Mendoza, B. [Instituto de Geofisica, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Mendoza, V.; Adem, J. [Centro de Ciencias de la Atmosfera, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico)]. E-mail: victor@atmosfera.unam.mx

    2006-07-15

    We have used the Thermodynamic Model of the Climate to estimate the effect of variations in the low cloud cover on the surface temperature of the Earth in the Northern Hemisphere during the period 1984-1994. We assume that the variations in the low cloud cover are proportional to the variation of the cosmic ray flux measured during the same period. The results indicate that the effect on the surface temperature is more significant over the continents, where for July 1991 we found anomalies of the order of 0.7 degrees Celsius for southeastern Asia and 0.5 degrees Celsius for northeastern Mexico. For an increase of 0.75% in the low cloud cover, the surface temperature computed by the model in the Northern Hemisphere decreases by approximately 0.11 degrees Celsius; for a decrease of 0.90% in the low cloud cover, the model gives an increase in the surface temperature of approximately 0.15 degrees Celsius. These two cases correspond to a climate sensitivity factor for the case of forcing by a doubling of atmospheric CO2. These decreases or increases in surface temperature caused by increases or decreases in low cloud cover are ten times greater than the overall variability of the non-forced model time series.
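The sign and order of magnitude of the reported temperature responses can be checked against a zero-dimensional energy balance, which is far cruder than the thermodynamic climate model used in the paper; the baseline albedo, low-cloud fraction and cloud albedo below are illustrative assumptions:

```python
SOLAR = 1361.0   # solar constant, W/m^2
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def emission_temperature(albedo):
    """Zero-dimensional energy balance: sigma * T^4 = S * (1 - albedo) / 4."""
    return (SOLAR * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

# A 0.75% relative increase of a ~0.3 low-cloud fraction with cloud albedo ~0.4
# adds roughly 0.3 * 0.0075 * 0.4 ~ 0.0009 to the planetary albedo.
dT = emission_temperature(0.300 + 0.0009) - emission_temperature(0.300)
```

The toy model gives a cooling of order 0.1 K for a 0.75% low-cloud increase, the same sign and magnitude as the model result quoted above, though without any of the regional structure.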

  5. Exact and Numerical Solutions of a Spatially-Distributed Mathematical Model for Fluid and Solute Transport in Peritoneal Dialysis

    Directory of Open Access Journals (Sweden)

    Roman Cherniha

    2016-06-01

    Full Text Available The nonlinear mathematical model for solute and fluid transport induced by the osmotic pressure of glucose and albumin, with the dependence of several parameters on the hydrostatic pressure, is described. In particular, the fractional space available for macromolecules (albumin was used as a typical example) and the fractional fluid void volume were assumed to be different functions of hydrostatic pressure. In order to find non-uniform steady-state solutions analytically, some mathematical restrictions on the model parameters were applied. Exact formulae (involving hypergeometric functions) for the density of fluid flux from blood to tissue and the fluid flux across tissues were constructed. In order to justify the applicability of the analytical results obtained, a wide range of numerical simulations were performed. It was found that the analytical formulae can describe with good approximation the fluid and solute transport (especially the rate of ultrafiltration) for a wide range of values of the model parameters.

  6. Effects of snow grain shape on climate simulations: sensitivity tests with the Norwegian Earth System Model

    Science.gov (United States)

    Räisänen, Petri; Makkonen, Risto; Kirkevåg, Alf; Debernard, Jens B.

    2017-12-01

    Snow consists of non-spherical grains of various shapes and sizes. Still, in radiative transfer calculations, snow grains are often treated as spherical. This also applies to the computation of snow albedo in the Snow, Ice, and Aerosol Radiation (SNICAR) model and in the Los Alamos sea ice model, version 4 (CICE4), both of which are employed in the Community Earth System Model and in the Norwegian Earth System Model (NorESM). In this study, we evaluate the effect of snow grain shape on climate simulated by NorESM in a slab ocean configuration of the model. An experiment with spherical snow grains (SPH) is compared with another (NONSPH) in which the snow shortwave single-scattering properties are based on a combination of three non-spherical snow grain shapes optimized using measurements of angular scattering by blowing snow. The key difference between these treatments is that the asymmetry parameter is smaller in the non-spherical case (0.77-0.78 in the visible region) than in the spherical case ( ≈ 0.89). Therefore, for the same effective snow grain size (or equivalently, the same specific projected area), the snow broadband albedo is higher when assuming non-spherical rather than spherical snow grains, typically by 0.02-0.03. Considering the spherical case as the baseline, this results in an instantaneous negative change in net shortwave radiation with a global-mean top-of-the-model value of ca. -0.22 W m-2. Although this global-mean radiative effect is rather modest, the impacts on the climate simulated by NorESM are substantial. The global annual-mean 2 m air temperature in NONSPH is 1.17 K lower than in SPH, with substantially larger differences at high latitudes. The climatic response is amplified by strong snow and sea ice feedbacks. It is further demonstrated that the effect of snow grain shape could be largely offset by adjusting the snow grain size. When assuming non-spherical snow grains with the parameterized grain size increased by ca. 70 %, the
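The albedo effect of the asymmetry parameter can be illustrated with the standard two-stream similarity estimate for a semi-infinite snowpack. This is a textbook approximation, not the SNICAR/CICE4 scheme itself, and the single-scattering albedo used below is an assumed visible-band value:

```python
import math

def semi_infinite_albedo(omega, g):
    """Two-stream similarity estimate of the diffuse albedo of a
    semi-infinite snowpack: alpha = (1 - s) / (1 + s), where
    s = sqrt((1 - omega) / (1 - omega * g))."""
    s = math.sqrt((1.0 - omega) / (1.0 - omega * g))
    return (1.0 - s) / (1.0 + s)

# same single-scattering albedo, spherical vs non-spherical asymmetry parameter
a_sph = semi_infinite_albedo(0.9999, 0.89)  # spherical grains
a_non = semi_infinite_albedo(0.9999, 0.78)  # non-spherical grain mixture
```

Lowering g from 0.89 to 0.78 at fixed single-scattering albedo raises the estimated albedo by roughly 0.017, the same order as the 0.02-0.03 broadband difference quoted above, which is the mechanism behind the NONSPH experiment's cooler climate.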

  7. Development of Gender Discrimination: Effect of Sex-Typical and Sex-Atypical Toys.

    Science.gov (United States)

    Etaugh, Claire; Duits, Terri L.

    Toddlers (41 girls and 35 boys) between 18 and 37 months of age were given four gender discrimination tasks each consisting of 6 pairs of color drawings. Three of the tasks employed color drawings of preschool girls and boys holding either a sex-typical toy, a sex-atypical toy, or no toy. The fourth employed pictures of sex-typical masculine and…

  8. Tumour control probability derived from dose distribution in homogeneous and heterogeneous models: assuming similar pharmacokinetics, 125Sn–177Lu is superior to 90Y–177Lu in peptide receptor radiotherapy

    International Nuclear Information System (INIS)

    Walrand, Stephan; Hanin, François-Xavier; Pauwels, Stanislas; Jamar, François

    2012-01-01

    Clinical trials on 177Lu–90Y therapy used empirical activity ratios. Radionuclides (RN) with a larger maximal beta range could favourably replace 90Y. Our aim is to provide RN dose-deposition kernels and to compare the tumour control probability (TCP) of RN combinations. Dose kernels were derived by integration of the mono-energetic beta-ray dose distributions (computed using Monte Carlo) weighted by their respective beta spectra. Nine homogeneous spherical tumours (1-25 mm in diameter) and four spherical tumours including a lattice of cold, but alive, spheres (1, 3, 5, 7 mm in diameter) were modelled. The TCP for 93Y, 90Y and 125Sn in combination with 177Lu in variable proportions (that kept the renal cortex biological effective dose constant) were derived by 3D dose kernel convolution. For a mean tumour-absorbed dose of 180 Gy, 2 mm homogeneous tumours and tumours including 3 mm diameter cold alive spheres were both well controlled (TCP > 0.9) using a 75-25% combination of 177Lu and 90Y activity. However, 125Sn–177Lu achieved a significantly better result by controlling 1 mm homogeneous tumours simultaneously with tumours including 5 mm diameter cold alive spheres. Clinical trials using RN combinations should use RN proportions tuned to the patient dosimetry. 125Sn production and its coupling to somatostatin analogues appear feasible. Assuming similar pharmacokinetics, 125Sn is the best RN for combination with 177Lu in peptide receptor radiotherapy, justifying pharmacokinetic studies in rodents of 125Sn-labelled somatostatin analogues. (paper)
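Deriving a TCP from a voxelized dose distribution can be sketched with the standard Poisson linear-cell-kill model; the radiosensitivity α and the clonogen numbers below are illustrative assumptions, not the paper's dosimetry:

```python
import math

def tcp(voxel_doses, clonogens_per_voxel, alpha=0.35):
    """Poisson linear-cell-kill TCP from a voxel dose distribution:
    TCP = prod_i exp(-N_i * exp(-alpha * D_i)), doses in Gy.
    alpha and clonogen numbers here are illustrative only."""
    log_tcp = 0.0
    for d, n in zip(voxel_doses, clonogens_per_voxel):
        log_tcp += -n * math.exp(-alpha * d)  # expected surviving clonogens
    return math.exp(log_tcp)
```

The cold-sphere effect falls out directly: a tumour uniformly at 180 Gy is controlled, but diverting even 1% of the clonogens to a 20 Gy cold region drives TCP to essentially zero, which is why the comparison above hinges on how far each radionuclide's beta range reaches into cold alive spheres.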

  9. A case study and mechanism investigation of typical mortars used on ancient architecture in China

    International Nuclear Information System (INIS)

    Zeng Yuyao; Zhang Bingjian; Liang Xiaolin

    2008-01-01

    Mortars sampled from the Dutifulness Monument, where typical ancient Chinese mortar formulas and manufacturing processes were used, were analyzed by the starch-iodine test, FTIR, DSC-TG, SEM and XRD methods. Several model samples were then made according to historical records of ancient Chinese mortar formulas and analyzed with the same techniques. The model formulas were also used to consolidate loose specimens. The results show that sticky rice plays a crucial role in the microstructure and the consolidation properties of lime mortars. A possible mechanism is suggested: biomineralization may occur during the carbonation of calcium hydroxide, with the sticky rice functioning as a template that controls the growth of calcium carbonate crystals. The organic-inorganic materials formed by this mechanism are more favorable for consolidating loose samples, in terms of both strength improvement and durability

  10. Infill Walls Contribution on the Progressive Collapse Resistance of a Typical Mid-rise RC Framed Building

    Science.gov (United States)

    Besoiu, Teodora; Popa, Anca

    2017-10-01

    This study investigates the effect of the autoclaved aerated concrete infill walls on the progressive collapse resistance of a typical RC framed structure. The 13-storey building located in Brăila (a zone with high seismic risk in Romania) was designed according to the former Romanian seismic code P13-70 (1970). Two models of the structure are generated in the Extreme Loading® for Structures computer software: a model with infill walls and a model without infill walls. Following GSA (2003) Guidelines, a nonlinear dynamic procedure is used to determine the progressive collapse risk of the building when a first-storey corner column is suddenly removed. It was found that the structure is not expected to fail under the standard GSA loading: DL+0.25LL. Moreover, if the infill walls are introduced in the model, the maximum vertical displacement of the node above the removed column is reduced by about 48%.

  11. Variations of the stellar initial mass function in semi-analytical models - II. The impact of cosmic ray regulation

    Science.gov (United States)

    Fontanot, Fabio; De Lucia, Gabriella; Xie, Lizhi; Hirschmann, Michaela; Bruzual, Gustavo; Charlot, Stéphane

    2018-04-01

    Recent studies proposed that cosmic rays (CRs) are a key ingredient in setting the conditions for star formation, thanks to their ability to alter the thermal and chemical state of dense gas in the ultraviolet-shielded cores of molecular clouds. In this paper, we explore their role as regulators of the stellar initial mass function (IMF) variations, using the semi-analytic model for GAlaxy Evolution and Assembly (GAEA). The new model confirms our previous results obtained using the integrated galaxy-wide IMF (IGIMF) theory. Both variable IMF models reproduce the observed increase of α-enhancement as a function of stellar mass and the measured z = 0 excess of dynamical mass-to-light ratios with respect to photometric estimates assuming a universal IMF. We focus here on the mismatch between the photometrically derived (M⋆^app) and intrinsic (M⋆) stellar masses, by analysing in detail the evolution of model galaxies with different values of M⋆/M⋆^app. We find that galaxies with small deviations (i.e. formally consistent with a universal IMF hypothesis) are characterized by more extended star formation histories and live in less massive haloes with respect to the bulk of the galaxy population. In particular, the IGIMF theory does not significantly change the mean evolution of model galaxies with respect to the reference model, whereas a CR-regulated IMF implies shorter star formation histories and higher peaks of star formation for objects more massive than 10^10.5 M⊙. However, we also show that it is difficult to unveil this behaviour from observations, as the key physical quantities are typically derived assuming a universal IMF.

  12. Resource Storage Management Model For Ensuring Quality Of Service In The Cloud Archive Systems

    Directory of Open Access Journals (Sweden)

    Mariusz Kapanowski

    2014-01-01

    Full Text Available Nowadays, service providers offer a lot of IT services in the public or private cloud. The client can buy various kinds of services like SaaS, PaaS, etc. Recently, Backup as a Service (BaaS) was introduced as a variety of SaaS. At the moment several different BaaS offerings are available for archiving data in the cloud, but they provide only a basic level of service quality. In the paper we propose a model which ensures QoS for BaaS and some methods for management of storage resources aimed at achieving the required SLA. This model introduces a set of parameters responsible for the SLA level, which can be offered at a basic or higher level of quality. The storage systems (typically HSM), which are distributed between several Data Centres, are built based on disk arrays, VTLs, and tape libraries. The RSMM model does not assume bandwidth reservation or control, but is rather focused on the management of storage resources.

  13. Encoding dependence in Bayesian causal networks

    Science.gov (United States)

    Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...
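The propagation of conditional probabilities between discrete states mentioned above amounts to marginalizing over parent states; a minimal two-node sketch with hypothetical conditional probability tables:

```python
# Minimal discrete Bayesian network: Rain -> WetGround (hypothetical CPTs).
p_rain = {True: 0.3, False: 0.7}
p_wet_given_rain = {True:  {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def marginal_wet():
    # P(Wet) = sum_r P(Wet | Rain=r) * P(Rain=r)
    return sum(p_rain[r] * p_wet_given_rain[r][True] for r in (True, False))

p_wet = marginal_wet()  # 0.3*0.9 + 0.7*0.2 = 0.41
```

Real BNs chain this marginalization through the whole network; the static-structure assumption the record refers to means these tables do not change over time.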

  14. Line-Shape Code Comparison through Modeling and Fitting of Experimental Spectra of the C ii 723-nm Line Emitted by the Ablation Cloud of a Carbon Pellet

    Directory of Open Access Journals (Sweden)

    Mohammed Koubiti

    2014-07-01

    Full Text Available Various codes of line-shape modeling are compared to each other through the profile of the C ii 723-nm line for typical plasma conditions encountered in the ablation clouds of carbon pellets injected in magnetic fusion devices. Calculations were performed for a single electron density of 10^17 cm−3 and two plasma temperatures (T = 2 and 4 eV). Ion and electron temperatures were assumed to be equal (Te = Ti = T). The magnetic field, B, was set either to zero or to 4 T. Comparisons between the line-shape modeling codes and two experimental spectra of the C ii 723-nm line, measured perpendicularly to the B-field in the Large Helical Device (LHD) using linear polarizers, are also discussed.

  15. 7 CFR 632.52 - Identifying typical classes of action.

    Science.gov (United States)

    2010-01-01

    ... § 632.52 Identifying typical classes of action. (a) The RFO will analyze the environmental assessment of....12. These actions are determined by a limited environmental assessment that reasonably identifies the... 632.52 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES...

  16. Soils apart from equilibrium – consequences for soil carbon balance modelling

    Directory of Open Access Journals (Sweden)

    T. Wutzler

    2007-01-01

    Full Text Available Many projections of the soil carbon sink or source are based on kinetically defined carbon pool models. Parameters of these models are often determined in a way that the steady state of the model matches observed carbon stocks. The underlying simplifying assumption is that observed carbon stocks are near equilibrium. This assumption is challenged by observations of very old soils that do still accumulate carbon. In this modelling study we explored the consequences of the case where soils are apart from equilibrium. Calculations of equilibrium states of soils that are currently accumulating small amounts of carbon were performed using the Yasso model. It was found that already very small current accumulation rates cause big changes in theoretical equilibrium stocks, which can virtually approach infinity. We conclude that soils that were disturbed several centuries ago are not in equilibrium but in a transient state because of the slowly ongoing accumulation of the slowest pool. A first consequence is that model calibrations to current carbon stocks that assume an equilibrium state overestimate the decay rate of the slowest pool. A second consequence is that spin-up runs (simulations until equilibrium) overestimate stocks of recently disturbed sites. In order to account for these consequences, we propose a transient correction. This correction prescribes a lower decay rate of the slowest pool and accounts for disturbances in the past by decreasing the spin-up-run predicted stocks to match an independent estimate of current soil carbon stocks. Application of this transient correction at a Central European beech forest site with a typical disturbance history resulted in an additional carbon fixation of 5.7±1.5 tC/ha within 100 years. Carbon storage capacity of disturbed forest soils is potentially much higher than currently assumed. Simulations that do not adequately account for the transient state of soil carbon stocks neglect a considerable
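The core sensitivity described here, where a tiny present-day accumulation rate implies a much larger theoretical equilibrium stock, can be sketched with a single-pool model dC/dt = I − kC; the stock and input values below are hypothetical, not Yasso parameters:

```python
# One-pool soil carbon model dC/dt = I - k*C (hypothetical units: tC/ha, tC/ha/yr).
def implied_equilibrium(stock, litter_input, accumulation):
    # Observed accumulation a = I - k*C  =>  inferred k = (I - a) / C,
    # so the implied equilibrium stock is C* = I / k.
    k = (litter_input - accumulation) / stock
    return litter_input / k

# Slowest pool: stock 50 tC/ha, input 0.1 tC/ha/yr.
c_eq_static = implied_equilibrium(50.0, 0.1, 0.0)    # assuming equilibrium: 50 tC/ha
c_eq_slow   = implied_equilibrium(50.0, 0.1, 0.05)   # tiny sink doubles the implied C*
```

As the accumulation rate approaches the input rate, the inferred decay rate goes to zero and the implied equilibrium stock diverges, which is exactly the "virtually approach infinity" behaviour the record reports.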

  17. Implementation and Testing of Advanced Surface Boundary Conditions Over Complex Terrain in A Semi-idealized Model

    Science.gov (United States)

    Li, Y.; Epifanio, C.

    2017-12-01

    In numerical prediction models, the interaction between the Earth's surface and the atmosphere is typically accounted for in terms of surface layer parameterizations, whose main job is to specify turbulent fluxes of heat, moisture and momentum across the lower boundary of the model domain. In the case of a domain with complex geometry, implementing the flux conditions (particularly the tensor stress condition) at the boundary can be somewhat subtle, and there has been a notable history of confusion in the CFD community over how to formulate and impose such conditions generally. In the atmospheric case, modelers have largely been able to avoid these complications, at least until recently, by assuming that the terrain resolved at typical model resolutions is fairly gentle, in the sense of having relatively shallow slopes. This in turn allows the flux conditions to be imposed as if the lower boundary were essentially flat. Unfortunately, while this flat-boundary assumption is acceptable for coarse resolutions, as grids become more refined and the geometry of the resolved terrain becomes more complex, the approach is less justified. With this in mind, the goal of our present study is to explore the implementation and usage of the full, unapproximated version of the turbulent flux/stress conditions in atmospheric models, thus taking full account of the complex geometry of the resolved terrain. We propose to implement the conditions using a semi-idealized model developed by Epifanio (2007), in which the discretized boundary conditions are reduced to a large, sparse-matrix problem. The emphasis will be on fluxes of momentum, as the tensor nature of this flux makes the associated stress condition more difficult to impose, although the flux conditions for heat and moisture will be considered as well. 
At a resolution of 90 meters, some of the results show that the typical differences between flat-boundary cases and full flux/stress cases are on the order of 10%, with extreme
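The difference between a flat-boundary flux condition and one that respects the terrain slope can be sketched with a simple bulk-drag law; the 2D geometry, drag coefficient, and density below are assumed for illustration and are not the semi-idealized model's actual formulation:

```python
import math

def surface_stress(u, w, slope_angle_deg, cd=0.01, rho=1.2):
    # Bulk drag tau = rho * cd * |u_t| * u_t, using the velocity component
    # tangent to the terrain.  A flat-boundary code would use the horizontal
    # component alone (slope_angle_deg = 0).  cd and rho are assumed values.
    a = math.radians(slope_angle_deg)
    u_t = u * math.cos(a) + w * math.sin(a)  # project onto the slope tangent
    return rho * cd * abs(u_t) * u_t

tau_flat  = surface_stress(10.0, 0.0, 0.0)    # flat lower boundary
tau_slope = surface_stress(10.0, 2.0, 20.0)   # 20-degree slope, upslope flow
```

Over steep resolved terrain the tangential projection (and, in 3D, the full tensor stress condition) changes both the magnitude and the direction of the imposed momentum flux, which is the effect quantified in the record's ~10% differences.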

  18. For your local eyes only: Culture-specific face typicality influences perceptions of trustworthiness

    NARCIS (Netherlands)

    Sofer, C.; Dotsch, R.; Oikawa, M.; Oikawa, H.; Wigboldus, D.H.J.; Todorov, A.T.

    2017-01-01

    Recent findings show that typical faces are judged as more trustworthy than atypical faces. However, it is not clear whether employment of typicality cues in trustworthiness judgment happens across cultures and if these cues are culture specific. In two studies, conducted in Japan and Israel,

  19. Short-term cognitive improvement in schizophrenics treated with typical and atypical neuroleptics.

    Science.gov (United States)

    Rollnik, Jens D; Borsutzky, Marthias; Huber, Thomas J; Mogk, Hannu; Seifert, Jürgen; Emrich, Hinderk M; Schneider, Udo

    2002-01-01

    Atypical neuroleptics seem to be more beneficial than typical ones with respect to long-term neuropsychological functioning. Thus, most studies focus on the long-term effects of neuroleptics. We were interested in whether atypical neuroleptic treatment is also superior to typical drugs over relatively short periods of time. We studied 20 schizophrenic patients [10 males, mean age 35.5 years, mean Brief Psychiatric Rating Scale (BPRS) score at entry 58.9] admitted to our hospital with acute psychotic exacerbation. Nine of them were treated with typical and 11 with atypical neuroleptics. In addition, 14 healthy drug-free subjects (6 males, mean age 31.2 years) were enrolled in the study and compared to the patients. As neuropsychological tools, a divided attention test, the Vienna reaction time test, the Benton visual retention test, digit span and a Multiple Choice Word Fluency Test (MWT-B) were used during the first week after admission, within the third week and before discharge (approximately 3 months). Patients scored significantly worse than healthy controls on nearly all tests (except Vienna reaction time). Clinical ratings [BPRS and Positive and Negative Symptom Scale for Schizophrenia (PANSS)] improved markedly, and clinical improvement correlated with performance on the divided attention task (r = 0.705, p = 0.034). Neuropsychological functioning (explicit memory, divided attention; p < 0.05) moderately improved for both groups under treatment, but without a significant difference between atypical and typical antipsychotic drugs. Over short periods of time (3 months), neuropsychological disturbances in schizophrenia seem to be moderately responsive to both typical and atypical neuroleptics. Copyright 2002 S. Karger AG, Basel

  20. Typical skeletal changes due to metastasising neuroblastomas

    International Nuclear Information System (INIS)

    Eggerath, A.; Persigehl, M.; Mertens, R.; Technische Hochschule Aachen

    1983-01-01

    Compared with other solid tumours in childhood, neuroblastomas show a marked tendency to metastasise to the skeleton. The differentiation of these lesions from inflammatory and other malignant bone lesions in this age group is often difficult. The radiological findings in ten patients with metastasising and histologically confirmed neuroblastomas have been reviewed and the typical appearances in the skeleton are described. The most important features in the differential diagnosis are discussed and the significance of bone changes in the diagnosis of neuroblastoma has been evaluated. (orig.) [de

  1. Pollution characteristics and environmental risk assessment of typical veterinary antibiotics in livestock farms in Southeastern China.

    Science.gov (United States)

    Wang, Na; Guo, Xinyan; Xu, Jing; Kong, Xiangji; Gao, Shixiang; Shan, Zhengjun

    2014-01-01

    Scientific interest in pollution from antibiotics in animal husbandry has increased during recent years. However, there have been few studies on the vertical exposure characteristics of typical veterinary antibiotics in different exposure matrices from different livestock farms. This study explores the distribution and migration of antibiotics from feed to manure, from manure to soil, and from soil to vegetables, by investigating the exposure level of typical antibiotics in feed, manure, soil, vegetables, water, fish, and pork in livestock farms. A screening environmental risk assessment was conducted to identify the hazardous potential of veterinary antibiotics from livestock farms in southeast China. The results show that adding antibiotics to drinking water as well as the excessive use of antibiotic feed additives may become the major source of antibiotics pollution in livestock farms. Physical and chemical properties significantly affect the distribution and migration of various antibiotics from manure to soil and from soil to plant. Simple migration models can predict the accumulation of antibiotics in soil and plants. The environmental risk assessment results show that more attention should be paid to the terrestrial eco-risk of sulfadiazine, sulfamethazine, sulfamethoxazole, tetracycline, oxytetracycline, chlorotetracycline, ciprofloxacin, and enrofloxacin, and to the aquatic eco-risk of chlorotetracycline, ciprofloxacin, and enrofloxacin. This is the first systematic analysis of the vertical pollution characteristics of typical veterinary antibiotics in livestock farms in southeast China. It also identifies the ecological and human health risk of veterinary antibiotics.

  2. A Kinetics Model for KrF Laser Amplifiers

    Science.gov (United States)

    Giuliani, J. L.; Kepple, P.; Lehmberg, R.; Obenschain, S. P.; Petrov, G.

    1999-11-01

    A computer kinetics code has been developed to model the temporal and spatial behavior of an e-beam pumped KrF laser amplifier. The deposition of the primary beam electrons is assumed to be spatially uniform and the energy distribution function of the nascent electron population is calculated to be near Maxwellian below 10 eV. For an initial Kr/Ar/F2 composition, the code calculates the densities of 24 species subject to over 100 reactions with 1-D spatial resolution (typically 16 zones) along the longitudinal lasing axis. Enthalpy accounting for each process is performed to partition the energy into internal, thermal, and radiative components. The electron as well as the heavy particle temperatures are followed for energy conservation and excitation rates. Transport of the lasing photons is performed along the axis on a dense subgrid using the method of characteristics. Amplified spontaneous emission is calculated using a discrete ordinates approach and includes contributions to the local intensity from the whole amplifier volume. Specular reflection off side walls and the rear mirror are included. Results of the model will be compared with data from the NRL NIKE laser and other published results.

  3. Nonlinear modeling of magnetorheological energy absorbers under impact conditions

    Science.gov (United States)

    Mao, Min; Hu, Wei; Choi, Young-Tai; Wereley, Norman M.; Browne, Alan L.; Ulicny, John; Johnson, Nancy

    2013-11-01

    Magnetorheological energy absorbers (MREAs) provide adaptive vibration and shock mitigation capabilities to accommodate varying payloads, vibration spectra, and shock pulses, as well as other environmental factors. A key performance metric is the dynamic range, which is defined as the ratio of the force at maximum field to the force in the absence of field. The off-state force is typically assumed to increase linearly with speed, but at the higher shaft speeds occurring in impact events, the off-state damping exhibits nonlinear velocity squared damping effects. To improve understanding of MREA behavior under high-speed impact conditions, this study focuses on nonlinear MREA models that can more accurately predict MREA dynamic behavior for nominal impact speeds of up to 6 m s-1. Three models were examined in this study. First, a nonlinear Bingham-plastic (BP) model incorporating Darcy friction and fluid inertia (Unsteady-BP) was formulated where the force is proportional to the velocity. Second, a Bingham-plastic model incorporating minor loss factors and fluid inertia (Unsteady-BPM) to better account for high-speed behavior was formulated. Third, a hydromechanical (HM) analysis was developed to account for fluid compressibility and inertia as well as minor loss factors. These models were validated using drop test data obtained using the drop tower facility at GM R&D Center for nominal drop speeds of up to 6 m s-1.
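The distinction drawn above between the linear off-state force assumption and the velocity-squared behaviour at impact speeds can be sketched with a Bingham-plastic force law plus a minor-loss term; all coefficients below are illustrative, not fitted to the tested MREA:

```python
def mrea_force(v, c_visc=3000.0, f_yield=2000.0, k_minor=400.0):
    # Bingham-plastic MREA force with a velocity-squared minor-loss term:
    #   F = c*v + k*|v|*v + F_y(B)*sign(v)
    # The BP part (c*v) is linear in speed; the |v|*v term dominates at the
    # high shaft speeds of impact events.  Coefficients are assumed values.
    sign = (v > 0) - (v < 0)
    return c_visc * v + k_minor * abs(v) * v + f_yield * sign

f_low  = mrea_force(0.5)   # low speed: essentially the linear regime
f_high = mrea_force(6.0)   # 6 m/s impact: quadratic term is ~45% of the force
```

The dynamic range metric in the record is then the ratio of this force with the field-dependent yield term at maximum field to the force with F_y = 0.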

  4. The logical primitives of thought: Empirical foundations for compositional cognitive models.

    Science.gov (United States)

    Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D

    2016-07-01

    The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
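Bayesian inference over a compositional hypothesis space of the kind described above can be sketched with a tiny Boolean concept space; the primitives, prior weights, and deterministic likelihood below are assumed for illustration only:

```python
# Objects are (shape, color) pairs; hypotheses are rules built from two
# Boolean primitives.  Prior weights (hypothetical) penalize composition.
hypotheses = {
    "red":            lambda s, c: c == "red",
    "circle":         lambda s, c: s == "circle",
    "red_and_circle": lambda s, c: c == "red" and s == "circle",
    "red_or_circle":  lambda s, c: c == "red" or s == "circle",
}
prior = {"red": 0.35, "circle": 0.35, "red_and_circle": 0.15, "red_or_circle": 0.15}

def posterior(examples):
    # examples: list of ((shape, color), label); likelihood is 1 if the rule
    # reproduces every label, else 0 (noise-free for simplicity).
    scores = {name: prior[name] if all(rule(*obj) == lab for obj, lab in examples)
              else 0.0
              for name, rule in hypotheses.items()}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

post = posterior([(("circle", "red"), True), (("square", "red"), True)])
```

Different primitive sets change which hypotheses exist at all, and hence the shape of the predicted learning curve, which is the lever the study uses to identify the most likely primitives empirically.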

  5. Human Behavior, Learning, and the Developing Brain: Typical Development

    Science.gov (United States)

    Coch, Donna, Ed.; Fischer, Kurt W., Ed.; Dawson, Geraldine, Ed.

    2010-01-01

    This volume brings together leading authorities from multiple disciplines to examine the relationship between brain development and behavior in typically developing children. Presented are innovative cross-sectional and longitudinal studies that shed light on brain-behavior connections in infancy and toddlerhood through adolescence. Chapters…

  6. A PREDICTIVE STUDY: CARBON MONOXIDE EMISSION MODELING AT A SIGNALIZED INTERSECTION

    Directory of Open Access Journals (Sweden)

    FREDDY WEE LIANG KHO

    2014-02-01

    Full Text Available The CAL3QHC dispersion model was used to predict the present and future carbon monoxide (CO) levels at a busy signalized intersection. This study attempted to identify CO “hot-spots” at nearby areas of the intersection during typical A.M. and P.M. peak hours. The CO concentration “hot-spots” had been identified at 101 Commercial Park, and the simulated maximum 1-hour Time-Weighted Average (1-h TWA) ground-level CO concentrations of 18.3 ppm and 18.6 ppm had been observed during A.M. and P.M. peaks, respectively, in year 2006. This study shows that there would be no significant increment in CO level for year 2014, although a substantial increase in the number of vehicles is assumed to affect CO levels. It was also found that CO levels would be well below the Malaysian Ambient Air Quality Guideline of 30 ppm (1-h TWA). Comparisons between the measured and simulated CO levels using quantitative data analysis technique and statistical methods indicated that the CAL3QHC dispersion model correlated well with measured data.

  7. Small angle neutron scattering modeling of copper-rich precipitates in steel

    International Nuclear Information System (INIS)

    Spooner, S.

    1997-11-01

    The magnetic to nuclear scattering intensity ratio observed in the scattering from copper-rich precipitates in irradiated pressure vessel steels is much smaller than the value of 11.4 expected for a pure copper precipitate in iron. A model for precipitates in pressure vessel steels which matches the observed scattering typically incorporates manganese, nickel, silicon and other elements, and it is assumed that the precipitate is non-magnetic. In the present work consideration is given to the effect of composition gradients and ferromagnetic penetration into the precipitate on the small angle scattering cross section for copper-rich clusters as distinguished from conventional precipitates. The calculation is an extension of a scattering model for micelles which consist of shells of varying scattering density. A discrepancy between recent SANS scattering experiments on pressure vessel steels was found to be related to applied magnetic field strength. The assumption of cluster structure and its relation to atom probe FIM findings, as well as the effects of insufficient field for magnetic saturation, are discussed

  8. Bayesian analysis of data and model error in rainfall-runoff hydrological models

    Science.gov (United States)

    Kavetski, D.; Franks, S. W.; Kuczera, G.

    2004-12-01

    A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking
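The key BATEA idea, treating storm-specific rainfall multipliers as latent variables with their own prior rather than assuming rainfall is known exactly, can be sketched as follows; the toy linear rainfall-runoff model and all variances are assumed for illustration:

```python
import math

def log_posterior(theta, multipliers, rain_obs, runoff_obs,
                  sigma_resp=0.5, sigma_mult=0.2):
    # BATEA-style log-posterior sketch: latent multipliers m scale the observed
    # rainfall; both response error and multiplier deviations contribute.
    lp = 0.0
    for m, r, q in zip(multipliers, rain_obs, runoff_obs):
        q_sim = theta * m * r                              # toy runoff model
        lp += -0.5 * ((q - q_sim) / sigma_resp) ** 2       # response error
        lp += -0.5 * (math.log(m) / sigma_mult) ** 2       # lognormal prior on m
    return lp

# Runoff consistent with a ~20% rainfall under-measurement in storm 2
rain, runoff = [10.0, 8.0], [5.0, 4.8]
lp_no_corr = log_posterior(0.5, [1.0, 1.0], rain, runoff)  # rainfall "exact"
lp_corr    = log_posterior(0.5, [1.0, 1.2], rain, runoff)  # inflate storm 2
```

Because the corrected multiplier explains the storm-2 residual at a modest prior cost, it scores higher; in the full method the multipliers are sampled jointly with the parameters by MCMC instead of being fixed by hand.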

  9. Improved High Resolution Models of Subduction Dynamics: Use of transversely isotropic viscosity with a free-surface

    Science.gov (United States)

    Liu, X.; Gurnis, M.; Stadler, G.; Rudi, J.; Ratnaswamy, V.; Ghattas, O.

    2017-12-01

    Dynamic topography, or uncompensated topography, is controlled by internal dynamics, and provide constraints on the buoyancy structure and rheological parameters in the mantle. Compared with other surface manifestations such as the geoid, dynamic topography is very sensitive to shallower and more regional mantle structure. For example, the significant dynamic topography above the subduction zone potentially provides a rich mine for inferring the rheological and mechanical properties such as plate coupling, flow, and lateral viscosity variations, all critical in plate tectonics. However, employing subduction zone topography in the inversion study requires that we have a better understanding of the topography from forward models, especially the influence of the viscosity formulation, numerical resolution, and other factors. One common approach to formulating a fault between the subducted slab and the overriding plates in viscous flow models assumes a thin weak zone. However, due to the large lateral variation in viscosity, topography from free-slip numerical models typically has artificially large magnitude as well as high-frequency undulations over subduction zone, which adds to the difficulty in making comparisons between model results and observations. In this study, we formulate a weak zone with the transversely isotropic viscosity (TI) where the tangential viscosity is much smaller than the viscosity in the normal direction. Similar with isotropic weak zone models, TI models effectively decouple subducted slabs from the overriding plates. However, we find that the topography in TI models is largely reduced compared with that in weak zone models assuming an isotropic viscosity. Moreover, the artificial `tooth paste' squeezing effect observed in isotropic weak zone models vanishes in TI models, although the difference becomes less significant when the dip angle is small. We also implement a free-surface condition in our numerical models, which has a smoothing

  10. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
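The projection-based separation of model error from the residual can be sketched as follows, assuming the local basis is a single error vector formed from dictionary neighbours (in general a Gram-Schmidt or SVD basis built from the K-nearest-neighbour error vectors would be used):

```python
def project_out_model_error(residual, error_basis):
    # Remove the residual's component lying in the span of the local
    # model-error basis (sketch: basis vectors handled sequentially, which is
    # an exact projection only for a single or orthogonal basis).
    out = residual[:]
    for b in error_basis:
        nb = sum(x * x for x in b)
        coef = sum(r * x for r, x in zip(out, b)) / nb
        out = [r - coef * x for r, x in zip(out, b)]
    return out

# Hypothetical straight-ray vs. eikonal traveltime discrepancy pattern
basis = [[1.0, 2.0, 1.0]]
residual = [1.1, 2.2, 1.0]   # mostly model error, little data noise
cleaned = project_out_model_error(residual, basis)
```

The likelihood is then evaluated on the cleaned residual, so the fast approximate forward model's systematic bias is not misattributed to the subsurface parameters.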

  11. A critical look at the kinetic models of thermoluminescence-II. Non-first order kinetics

    International Nuclear Information System (INIS)

    Sunta, C M; Ayta, W E F; Chubaci, J F D; Watanabe, S

    2005-01-01

    Non-first order (FO) kinetics models are of three types: second order (SO), general order (GO) and mixed order (MO). It is shown that all three of these have constraints in their energy level schemes and their applicable parameter values. In nature such restrictions are not expected to exist. The thermoluminescence (TL) glow peaks produced by these models shift their position and change their shape as the trap occupancies change. Such characteristics are very unlike those found in samples of real materials. In these models, in general, retrapping predominates over recombination. It is shown that the quasi-equilibrium (QE) assumption implied in the derivation of the TL equation of these models is quite valid, thus disproving earlier workers' conclusion that QE cannot hold under retrapping dominant conditions. However, notwithstanding their validity, they suffer from the shortcomings as stated above and have certain lacunae. For example, the kinetic order (KO) parameter and the pre-exponential factor which are assumed to be the constant parameters of the GO kinetics expression turn out to be variables when this expression is applied to plausible physical models. Further, in glow peak characterization using the GO expression, the quality of fit is found to deteriorate when the best fitted value of KO parameter is different from 1 and 2. This means that the found value of the basic parameter, namely the activation energy, becomes subject to error. In the MO kinetics model, the value of the KO parameter α would change with dose, and thus in this model also, as in the GO model, no single value of KO can be assigned to a given glow peak. The paper discusses TL of real materials having characteristics typically like those of FO kinetics. Theoretically too, a plausible physical model of TL emission produces glow peaks which have characteristics of FO kinetics under a wide variety of parametric combinations. In the background of the above findings, it is suggested that
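The general-order (GO) kinetics expression discussed above can be evaluated numerically to produce a glow curve; the parameters below (activation energy E, frequency factor s, kinetic order b, linear heating rate beta) are illustrative values, not fits to any material:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def go_glow_curve(E=1.0, s=1e12, b=1.5, beta=1.0, n0=1.0,
                  t0=300.0, t1=600.0, steps=3000):
    # General-order TL intensity for a linear heating ramp T = t0 + beta*t:
    #   I(T) = s*n0*exp(-E/kT) * [1 + (b-1)*(s/beta)*Int(T)]^(-b/(b-1))
    # where Int(T) = integral of exp(-E/kT') dT' from t0 to T.
    dt = (t1 - t0) / steps
    integral, curve = 0.0, []
    prev = math.exp(-E / (K_B * t0))
    for i in range(1, steps + 1):
        t = t0 + i * dt
        cur = math.exp(-E / (K_B * t))
        integral += 0.5 * (prev + cur) * dt  # trapezoidal rule
        prev = cur
        body = 1.0 + (b - 1.0) * (s / beta) * integral
        curve.append((t, s * n0 * cur * body ** (-b / (b - 1.0))))
    return curve

curve = go_glow_curve()
t_peak = max(curve, key=lambda p: p[1])[0]  # glow peak temperature, K
```

Re-running this with different n0 (trap occupancy) shifts the peak for b != 1, which is exactly the non-FO behaviour the paper flags as unlike real materials.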

  12. Adsorption Properties of Typical Lung Cancer Breath Gases on Ni-SWCNTs through Density Functional Theory

    Directory of Open Access Journals (Sweden)

    Qianqian Wan

    2017-01-01

    Full Text Available A lot of useful information is contained in human breath gases, which makes detecting typical breath gases an effective way to diagnose diseases. This work investigated the adsorption of typical lung cancer breath gases (benzene, styrene, isoprene, and 1-hexene) onto the surface of intrinsic and Ni-doped single-wall carbon nanotubes through density functional theory. Calculation results show that the typical lung cancer breath gases adsorb on the intrinsic single-wall carbon nanotube surface by weak physisorption. Besides, the density of states changes little before and after adsorption of the typical lung cancer breath gases. Compared with single-wall carbon nanotube adsorption, single-Ni-atom doping significantly improves the adsorption properties toward typical lung cancer breath gases by decreasing the adsorption distance and increasing the adsorption energy and charge transfer. The density of states presents different degrees of variation during adsorption of the typical lung cancer breath gases, resulting in a specific change in the conductivity of the gas-sensing material. Based on the different adsorption properties of Ni-SWCNTs toward typical lung cancer breath gases, this provides an effective way to build a portable noninvasive device to evaluate and diagnose lung cancer at an early stage in time.
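
    The adsorption energy referred to here is conventionally computed from DFT total energies as E_ads = E(substrate+molecule) - E(substrate) - E(molecule). A minimal sketch with hypothetical energy values (not the paper's results):

```python
def adsorption_energy(e_complex, e_substrate, e_molecule):
    """E_ads = E(substrate+molecule) - E(substrate) - E(molecule), in eV.
    More negative values indicate stronger, more favourable adsorption."""
    return e_complex - e_substrate - e_molecule

# Hypothetical DFT totals (eV) for a gas molecule on intrinsic vs Ni-doped SWCNT
e_ads_pristine = adsorption_energy(-1001.25, -1000.0, -1.1)  # weak physisorption
e_ads_ni = adsorption_energy(-1002.10, -1000.0, -1.1)        # stronger binding
```

    With these illustrative numbers, doping deepens the adsorption energy from about -0.15 eV to -1.0 eV, mirroring the qualitative trend reported in the abstract.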

  13. Investigation on the toxic interaction of typical plasticizers with calf thymus DNA

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Xiaojing [School of Environmental Science and Engineering, China–America CRC for Environment & Health, Shandong University, 27# Shanda South Road, Jinan 250100, Shandong Province (China); Zong, Wansong, E-mail: gaocz@sdu.edu.cn [College of Population, Resources and Environment, Shandong Normal University, 88# East Wenhua Road, Jinan 250014 (China); Liu, Chunguang; Liu, Yang [School of Environmental Science and Engineering, China–America CRC for Environment & Health, Shandong University, 27# Shanda South Road, Jinan 250100, Shandong Province (China); Gao, Canzhu, E-mail: rutaoliu@sdu.edu.cn [School of Environmental Science and Engineering, China–America CRC for Environment & Health, Shandong University, 27# Shanda South Road, Jinan 250100, Shandong Province (China); Liu, Rutao [School of Environmental Science and Engineering, China–America CRC for Environment & Health, Shandong University, 27# Shanda South Road, Jinan 250100, Shandong Province (China)

    2015-05-15

    The interactions of the typical plasticizers dimethyl phthalate (DMP), diethyl phthalate (DEP) and dibutyl phthalate (DBP) with calf thymus DNA (ctDNA) were investigated by fluorescence spectroscopic techniques and molecular modeling. Experimental results indicated that the characteristic fluorescence intensity of phthalic acid rose with increasing DNA concentration, while the characteristic fluorescence intensities of the plasticizers decreased with increasing DNA concentration. Experiments on native and denatured DNA determined that the plasticizers interacted with DNA in both groove and electrostatic binding modes. The molecular modeling results further illustrated that there is groove binding between them; hydrogen bonding and van der Waals interactions were the main forces. With the extension of the branched chains, the binding effects between plasticizers and DNA were weakened, which could be related to the increased steric hindrance. - Highlights: • This work established the binding mode of plasticizers with DNA at the molecular level. • The mechanism was explored by fluorescence spectroscopic and molecular docking methods. • There are two binding modes between DMP, DEP, DBP and DNA: electrostatic and groove. • With branched-chain extension, the binding effect between plasticizers and DNA weakened.

  14. Study on the knowledge base system for the identification of typical target

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2008-01-01

    Based on research on target knowledge bases, target databases, texture analysis and shape analysis, this paper proposes a new knowledge-based method for typical target identification from remote sensing images. By extracting texture and shape characters, combining them with spatial analysis in GIS, and reasoning according to the prior knowledge in the knowledge base, this method can identify and extract typical targets from remote sensing images. (authors)

  15. Benchmark calculation programme concerning typical LMFBR structures

    International Nuclear Information System (INIS)

    Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.

    1982-01-01

    This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should make it possible to gain confidence in computer codes that are supposed to provide a realistic prediction of LMFBR component behaviour. The calculations started with static analysis of typical structures made of nonlinear materials stressed by cyclic loads. Fluid-structure interaction analysis is also being considered. The reasons for and details of the different benchmark calculations are described, the results obtained are commented on, and future computational exercises are indicated

  16. Group typicality, group loyalty and cognitive development.

    Science.gov (United States)

    Patterson, Meagan M

    2014-09-01

    Over the course of childhood, children's thinking about social groups changes in a variety of ways. Developmental Subjective Group Dynamics (DSGD) theory emphasizes children's understanding of the importance of conforming to group norms. Abrams et al.'s study, which uses DSGD theory as a framework, demonstrates the social cognitive skills underlying young elementary school children's thinking about group norms. Future research on children's thinking about groups and group norms should explore additional elements of this topic, including aspects of typicality beyond loyalty. © 2014 The British Psychological Society.

  17. Walking Ahead: The Headed Social Force Model.

    Directory of Open Access Journals (Sweden)

    Francesco Farina

    Full Text Available Human motion models are finding an increasing number of novel applications in many different fields, such as building design, computer graphics and robot motion planning. The Social Force Model is one of the most popular alternatives to describe the motion of pedestrians. By resorting to a physical analogy, individuals are assimilated to point-wise particles subject to social forces which drive their dynamics. Such a model implicitly assumes that humans move isotropically. On the contrary, empirical evidence shows that people do have a preferred direction of motion, walking forward most of the time. Lateral motions are observed only in specific circumstances, such as when navigating in overcrowded environments or avoiding unexpected obstacles. In this paper, the Headed Social Force Model is introduced in order to improve the realism of the trajectories generated by the classical Social Force Model. The key feature of the proposed approach is the inclusion of the pedestrians' heading into the dynamic model used to describe the motion of each individual. The force and torque representing the model inputs are computed as suitable functions of the force terms resulting from the traditional Social Force Model. Moreover, a new force contribution is introduced in order to model the behavior of people walking together as a single group. The proposed model features high versatility, being able to reproduce both the unicycle-like trajectories typical of people moving in open spaces and the point-wise motion patterns occurring in high density scenarios. Extensive numerical simulations show an increased regularity of the resulting trajectories and confirm a general improvement of the model realism.
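
    A minimal sketch of the idea, under simplifying assumptions: the goal-attraction force of the classical Social Force Model is projected onto the pedestrian's heading to drive forward speed, while the misalignment between heading and goal direction generates a torque. The gains, time constants, and the reduction to a single force term are illustrative, not the paper's calibrated model:

```python
import numpy as np

def step(state, goal, dt=0.05, tau=0.5, v_des=1.3, k_theta=4.0, k_omega=3.0):
    """One Euler step of a minimal 'headed' pedestrian model.
    state = [x, y, theta, v, omega]. The social force is reduced here to the
    goal-attraction term; its component along the heading drives forward
    speed, and a torque steers the heading toward the goal direction
    (a simplification of the HSFM torque term). Gains are illustrative."""
    x, y, theta, v, omega = state
    e_goal = goal - np.array([x, y])
    e_goal = e_goal / np.linalg.norm(e_goal)
    heading = np.array([np.cos(theta), np.sin(theta)])
    f = (v_des * e_goal - v * heading) / tau       # goal-attraction force
    f_fwd = f @ heading                            # forward component only
    ang_err = np.arctan2(e_goal[1], e_goal[0]) - theta
    ang_err = np.arctan2(np.sin(ang_err), np.cos(ang_err))  # wrap to [-pi, pi]
    torque = k_theta * ang_err - k_omega * omega   # steer + damp
    v += f_fwd * dt
    omega += torque * dt
    theta += omega * dt
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    return np.array([x, y, theta, v, omega])

state = np.array([0.0, 0.0, np.pi / 2, 0.0, 0.0])  # facing +y, goal on +x axis
goal = np.array([10.0, 0.0])
for _ in range(400):                               # 20 s of simulated walking
    state = step(state, goal)
# the pedestrian turns toward the goal and walks forward, not sideways
```

    Unlike the isotropic point-particle model, lateral displacement here only arises through rotation of the heading, which is what suppresses the unrealistic sideways motion mentioned in the abstract.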

  18. Physico-chemical properties and fertility status of some typic ...

    African Journals Online (AJOL)

    Physico-chemical properties and fertility status of some Typic Plinthaquults in Bauchi Local Government Area of Bauchi State, Nigeria. S Mustapha. Abstract. No Abstract. IJOTAFS Vol. 1 (2) 2007: pp. 120-124.

  19. The influence of thematic congruency, typicality and divided attention on memory for radio advertisements.

    Science.gov (United States)

    Martín-Luengo, Beatriz; Luna, Karlos; Migueles, Malen

    2014-01-01

    We examined the effects of the thematic congruence between ads and the programme in which they are embedded. We also studied the typicality of the to-be-remembered information (high- and low-typicality elements) and the effect of divided attention on memory for radio ad contents. Participants listened to four radio programmes with thematically congruent and incongruent ads embedded, and completed a true/false recognition test indicating their level of confidence in each answer. Half of the sample performed an additional task (divided attention group) while listening to the radio excerpts. In general, recognition memory was better for incongruent ads and low-typicality statements. Confidence in hits was higher in the undivided attention group, although there were no differences in performance. Our results suggest that the widespread practice of embedding ads into thematically congruent programmes negatively affects memory for the ads. In addition, low-typicality features, which are usually highlighted by advertisers, were better remembered than typical contents. Finally, metamemory evaluations were influenced by the inference that memory should be worse if we do several things at the same time.

  20. Using AGWA and the KINEROS2 Model-to-Model Green Infrastructure in Two Typical Residential Lots in Prescott, AZ

    Science.gov (United States)

    The Automated Geospatial Watershed Assessment (AGWA) Urban tool provides a step-by-step process to model subdivisions using the KINEROS2 model, with and without Green Infrastructure (GI) practices. AGWA utilizes the Kinematic Runoff and Erosion (KINEROS2) model, an event driven, ...

  1. Typical Vine or International Taste: Wine Consumers' Dilemma Between Beliefs and Preferences.

    Science.gov (United States)

    Scozzafava, Gabriele; Boncinelli, Fabio; Contini, Caterina; Romano, Caterina; Gerini, Francesca; Casini, Leonardo

    2016-01-01

    The wine-growing sector is probably one of the agricultural areas where the ties between product quality and territory are most evident. Geographical indication is a key element in this context, and previous literature has focused on demonstrating how certification of origin influences the wine purchaser's behavior. However, less attention has been devoted to understanding how the value of a given name of origin may or may not be determined by the various elements that characterize the typicality of the wine product in that territory: vines, production techniques, etc. It thus seems interesting, in this framework, to evaluate the impacts of several characteristic attributes on the preferences of consumers. This paper analyzes, in particular, the role of the presence of autochthonous vines in consumers' choices. The connection between name of origin and autochthonous vines appears to be particularly important in achieving product “recognisability”, while introducing “international” vines in considerable measure into blends might result in the loss of the peculiarity of certain characteristic and typical local productions. A standardization of taste could thus risk compromising the reputation of traditional production areas. The objective of this study is to estimate, through an experimental auction on the case study of Chianti, the differences in willingness to pay for wines produced with different shares of typical vines. The results show that consumers' willingness to pay for wine produced with typical blends is 34% greater than for wines with international blends. However, this difference is not confirmed by blind tasting, raising the issue of the relationship between ex-ante expectations about vine typicality and the wine's actual sensorial characteristics. Finally, some recent patents related to wine testing and wine packaging are reviewed.

  2. One for all: The effect of extinction stimulus typicality on return of fear.

    Science.gov (United States)

    Scheveneels, Sara; Boddez, Yannick; Bennett, Marc Patrick; Hermans, Dirk

    2017-12-01

    During exposure therapy, patients are encouraged to approach the feared stimulus, so they can experience that this stimulus is not followed by the anticipated aversive outcome. However, patients might treat the absence of the aversive outcome as an 'exception to the rule'. This could hamper the generalization of fear reduction when the patient is confronted with similar stimuli not used in therapy. We examined the effect of providing information about the typicality of the extinction stimulus on the generalization of extinction to a new but similar stimulus. In a differential fear conditioning procedure, an animal-like figure was paired with a brief electric shock to the wrist. In a subsequent extinction phase, a different but perceptually similar animal-like figure was presented without the shock. Before testing the generalization of extinction with a third animal-like figure, participants were instructed either that the extinction stimulus was a typical or an atypical member of the animal family. The typicality instruction effectively impacted the generalization of extinction; the third animal-like figure elicited lower shock expectancies in the typical relative to the atypical group. Skin conductance data mirrored these results but did not reach significance. These findings suggest that verbal information about stimulus typicality can be a promising adjunct to standard exposure treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Characteristics of typical Pierce guns for PPM focused TWTs

    International Nuclear Information System (INIS)

    Harper, R.; Puri, M.P.

    1989-01-01

    The performance of typical moderate-perveance Pierce-type electron guns, as used in periodic-permanent-magnet-focused traveling wave tubes, is described with regard to their adaptation for use in electron beam ion sources. The results of detailed electron trajectory computations for one particular gun design are presented
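
    Perveance, the figure of merit used to classify such guns, is defined as K = I/V^(3/2). A one-line illustration with hypothetical beam values:

```python
def perveance(current_a, voltage_v):
    """Pierce-gun beam perveance K = I / V**1.5, in A/V^1.5.
    'Moderate perveance' guns are typically of order 1e-6 A/V^1.5
    (about 1 microperv); the numbers below are illustrative only."""
    return current_a / voltage_v ** 1.5

K = perveance(1.0, 10_000.0)   # hypothetical 1 A beam at 10 kV
microperv = K * 1e6            # about 1 microperv
```
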

  4. Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.

    Science.gov (United States)

    Mi, Gu; Di, Yanming; Schafer, Daniel W

    2015-01-01

    This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
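
    One simulation-based test of the kind proposed, in sketch form: a parametric bootstrap of a Pearson-type dispersion statistic, here used to show that overdispersed NB counts are flagged as inadequate under a Poisson assumption. The parameter values and the NB2 parameterisation (var = mu + phi*mu^2) are illustrative, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def dispersion_stat(y, mu):
    """Pearson-type statistic; under a correct model it should fall within
    the range produced by simulating from that model (parametric bootstrap)."""
    return np.sum((y - mu) ** 2 / mu)

# Hypothetical RNA-Seq-like counts: NB with mean mu and dispersion phi,
# so that var = mu + phi * mu**2 (NB2 parameterisation)
mu, phi, n = 20.0, 0.3, 500
p = 1.0 / (1.0 + phi * mu)
y = rng.negative_binomial(1.0 / phi, p, size=n)

# Test the (wrong) Poisson assumption: simulate the statistic's null
# distribution under Poisson and compare with the observed value
obs = dispersion_stat(y, mu)
sims = np.array([dispersion_stat(rng.poisson(mu, size=n), mu)
                 for _ in range(999)])
p_value = (1 + np.sum(sims >= obs)) / 1000.0
# overdispersed NB counts look extreme relative to Poisson simulations
```

    Replacing the Poisson simulator with a fitted NB model turns the same scheme into a check of a chosen NB dispersion model, which is the spirit of the diagnostics described above.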

  5. Unified Description of the Mechanical Properties of Typical Marine Soil and Its Application

    Directory of Open Access Journals (Sweden)

    Yongqiang Li

    2017-01-01

    Full Text Available This study employed a modified elastoplastic constitutive model that can systematically describe the monotonic and cyclic mechanical behaviors of typical marine soils, combining the subloading, normal, and superloading yield surfaces, in the seismic response analysis of three-dimensional (3D) marine sites. New evolution equations for stress-induced anisotropy development and for the change in the overconsolidation of soils were proposed. This model can describe the unified behaviour of unsaturated and saturated soil using independent state variables and can uniquely describe the multiple mechanical properties of soils under general stress states, without changing the parameter values, using the transform stress method. An effective stress-based, fully coupled, explicit finite element–finite difference method was established based on this model and three-phase field theory. A finite deformation analysis was presented by introducing the Green-Naghdi rate tensor. The simulation and analysis indicated that the proposed method is sufficient for simulating the seismic disaster process of 3D marine sites. The results suggested that the ground motion intensity would increase due to the locally uneven complex topography and site effect, and also provided the temporal and spatial distribution of landslide and collapse at specific locations of the marine site.

  6. Assessments of conditioned radioactive waste arisings from existing and committed nuclear installations and assuming a moderate growth in nuclear electricity generation - June 1985

    International Nuclear Information System (INIS)

    Fairclough, M.P.; Goodill, D.R.; Tymons, B.J.

    1985-03-01

    This report describes an assessment of conditioned radioactive waste arisings from existing and committed nuclear installations (DOE Revised Scheme 1) and from an assumed nuclear power generation scenario (DOE Revised Scheme 3) representing a moderate growth in nuclear generation. Radioactive wastes arise from three main groups of installations and activities: (i) existing and committed commercial reactors; (ii) fuel reprocessing plants; (iii) research, industrial and medical activities. Stage 2 decommissioning wastes are considered together with WAGR decommissioning and the 1983 Sea Dump Consignment. The study uses the SIMULATION 2 code, which models waste material flows through a system of waste treatment and packaging to disposal. With a knowledge of the accumulations and average production rates of untreated wastes and their isotopic compositions (or total activities), the rates at which conditioned wastes become available for transportation and disposal are calculated, with specific activity levels. The data for the inventory calculations have previously been documented. Some recent revisions and assumptions concerning the future operation of nuclear facilities are presented in this report. (author)

  7. AutoRoute Rapid Flood Inundation Model

    Science.gov (United States)

    2013-03-01

    cross-section data does not exist. As such, the AutoRoute model is not meant to be as accurate as models such as HEC-RAS (U.S. Army Engineer...). Models such as HEC-RAS assume that the defined low point of cross sections must be connected. However, in this approach the channel is assumed to be defined... Res. 33(2): 309-319. U.S. Army Engineer Hydrologic Engineering Center. 2010. “HEC-RAS: River Analysis System, User’s Manual, Version 4.1.” Davis

  8. Modelling inorganic nitrogen in runoff: Seasonal dynamics at four European catchments as simulated by the MAGIC model.

    Science.gov (United States)

    Oulehle, F; Cosby, B J; Austnes, K; Evans, C D; Hruška, J; Kopáček, J; Moldan, F; Wright, R F

    2015-12-01

    Nitrogen (N) deposition is globally considered a major threat to ecosystem functioning, with important consequences for biodiversity, carbon sequestration and N retention. Lowered N retention, as manifested by elevated concentrations of inorganic N in surface waters, indicates ecosystem N saturation. Nitrate (NO3) concentrations in runoff from semi-natural catchments typically show an annual cycle, with low concentrations during the summer and high concentrations during the winter. Process-oriented catchment-scale biogeochemical models provide tools for simulating and testing changes in surface water and soil chemistry in response to changes in sulphur (S) and N deposition and climate. Here we examine the ability of MAGIC to simulate the observed monthly as well as the long-term trends over 10-35 years of inorganic N concentrations in streamwaters from four monitored headwater catchments in Europe: Čertovo Lake in the Czech Republic, Afon Gwy at Plynlimon, UK, Storgama, Norway and G2 NITREX at Gårdsjön, Sweden. The balance between N inputs (mineralization + deposition) and microbial immobilization and plant uptake defined the seasonal pattern of NO3 leaching. N mineralization and N uptake were assumed to be governed by temperature, described by Q10 functions. Seasonality in NO3 concentrations and fluxes was satisfactorily reproduced at three sites (R2 of observed vs. modelled concentrations varied between 0.32 and 0.47, and for fluxes between 0.36 and 0.88). The model was less successful in reproducing the observed NO3 concentrations and fluxes at the experimental N addition site G2 NITREX (R2=0.01 and R2=0.19, respectively). In contrast to the three monitored sites, Gårdsjön is in a state of change from an N-limited to an N-rich ecosystem due to 20 years of experimental N addition. At Gårdsjön the measured NO3 seasonal pattern did not follow the typical annual cycle for reasons which are not well understood, and thus was not simulated by the model. The MAGIC model is
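
    The Q10 temperature scaling used here for mineralization and uptake has the standard form rate = rate_ref * Q10^((T - T_ref)/10). A minimal sketch (Q10 = 2, i.e. rates doubling per 10 °C, is a common illustrative choice, not necessarily the calibrated value):

```python
def q10_rate(rate_ref, temp_c, temp_ref_c=10.0, q10=2.0):
    """Temperature scaling of a process rate by a Q10 function:
    rate = rate_ref * q10 ** ((T - T_ref) / 10). Values are illustrative."""
    return rate_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)

# Faster summer mineralization is matched by faster immobilization and
# uptake, which is what produces the low-summer / high-winter NO3 cycle
summer = q10_rate(1.0, 20.0)   # rate at 20 degC
winter = q10_rate(1.0, 0.0)    # rate at 0 degC
```
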

  9. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  10. Gestures in Prelinguistic Turkish Children with Autism, Down Syndrome, and Typically Developing Children

    Science.gov (United States)

    Toret, Gokhan; Acarlar, Funda

    2011-01-01

    The purpose of this study was to examine gesture use in Turkish children with autism, Down syndrome, and typically developing children. Participants included 30 children in three groups: Ten children with Down syndrome, ten children with autism between 24-60 months of age, and ten typically developing children between 12-18 months of age.…

  11. Face-to-Face Interference in Typical and Atypical Development

    Science.gov (United States)

    Riby, Deborah M.; Doherty-Sneddon, Gwyneth; Whittle, Lisa

    2012-01-01

    Visual communication cues facilitate interpersonal communication. It is important that we look at faces to retrieve and subsequently process such cues. It is also important that we sometimes look away from faces as they increase cognitive load that may interfere with online processing. Indeed, when typically developing individuals hold face gaze…

  12. Modeling of LH current drive in self-consistent elongated tokamak MHD equilibria

    International Nuclear Information System (INIS)

    Blackfield, D.T.; Devoto, R.S.; Fenstermacher, M.E.; Bonoli, P.T.; Porkolab, M.; Yugo, J.

    1989-01-01

    Calculations of non-inductive current drive have typically been used with model MHD equilibria which are independently generated from an assumed toroidal current profile or from a fit to an experiment. Such a method can lead to serious errors, since the driven current can dramatically alter the equilibrium and changes in the equilibrium B-fields can dramatically alter the current drive. The latter effect is quite pronounced in LH current drive, where the ray trajectories are sensitive to the local values of the magnetic shear and the density gradient. In order to overcome these problems, we have modified an LH simulation code to accommodate elongated plasmas with numerically generated equilibria. The new LH module has been added to the ACCOME code, which solves for current drive by neutral beams, electric fields, and bootstrap effects in a self-consistent 2-D equilibrium. We briefly describe the model in the next section and then present results of a study of LH current drive in ITER. 2 refs., 6 figs., 2 tabs

  13. Modeling of annular two-phase flow using a unified CFD approach

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haipeng, E-mail: haipengl@kth.se; Anglart, Henryk, E-mail: henryk@kth.se

    2016-07-15

    Highlights: • Annular two-phase flow has been modeled using a unified CFD approach. • The liquid film was modeled based on a two-dimensional thin-film assumption. • Both Eulerian and Lagrangian methods were employed for the gas core flow modeling. - Abstract: A mechanistic model of annular flow with an evaporating liquid film has been developed using computational fluid dynamics (CFD). The model employs a separate solver with two-dimensional conservation equations to predict the propagation of a thin boiling liquid film on solid walls. The liquid film model is coupled to a solver of three-dimensional conservation equations describing the gas core, which is assumed to contain a saturated mixture of vapor and liquid droplets. Both the Eulerian–Eulerian and the Eulerian–Lagrangian approach are used to describe the droplet and vapor motion in the gas core. All the major interaction phenomena between the liquid film and the gas core flow have been accounted for, including liquid film evaporation as well as droplet deposition and entrainment. The resulting unified framework for annular flow has been applied to steam-water flow with conditions typical for a Boiling Water Reactor (BWR). The simulation results for the liquid film flow rate show good agreement with the experimental data, with the potential to predict the dryout occurrence based on criteria of critical film thickness or critical film flow rate.
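
    The film-side mass balance underlying dryout prediction can be sketched in one dimension: the film flow rate changes along the channel with droplet deposition, entrainment, and wall-heat-driven evaporation. The closure values below are illustrative constants with BWR-like orders of magnitude, not the paper's CFD closures:

```python
import numpy as np

def film_flow(z, w_in, dep, ent, q_wall, h_fg, diameter):
    """March the liquid-film mass flow rate W_f(z) [kg/s] along a heated
    channel: dW_f/dz = pi*D*(deposition - entrainment - q''/h_fg),
    with dep/ent in kg/(m^2 s), wall heat flux q'' in W/m^2 and latent
    heat h_fg in J/kg. Constant closures here are purely illustrative;
    a dryout criterion would compare W_f to a critical film flow rate."""
    perimeter = np.pi * diameter
    w = np.empty_like(z)
    w[0] = w_in
    for i in range(1, len(z)):
        dwdz = perimeter * (dep - ent - q_wall / h_fg)
        w[i] = max(w[i - 1] + dwdz * (z[i] - z[i - 1]), 0.0)  # film cannot go negative
    return w

z = np.linspace(0.0, 3.0, 301)                        # 3 m heated length
w = film_flow(z, w_in=0.02, dep=0.05, ent=0.03,
              q_wall=8e5, h_fg=1.5e6, diameter=0.01)  # hypothetical values
# net evaporation and entrainment thin the film until it vanishes (dryout)
```
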

  15. Studies on modeling to failed fuel detection system response in LMFBR

    International Nuclear Information System (INIS)

    Miyazawa, T.; Saji, G.; Mitsuzuku, N.; Hikichi, T.; Odo, T.; Rindo, H.

    1981-05-01

    A Failed Fuel Detection (FFD) system based on fission product (FP) detection is considered the most promising method, since FPs provide direct information on fuel element failure. For designing an FFD system and for evaluating FFD signals, an adequate model of the FFD signal response to fuel failure is required, but few such models are available at present. The Power Reactor and Nuclear Fuel Development Corporation (PNC) has therefore developed an FFD response model with computer codes, based on several fundamental investigations of FP release and FP behavior, and drawing on overseas experience of fuel failure. In developing the model, noble gas and halogen FP release and behavior were considered, since the FFD system would comprise both cover gas monitoring and delayed neutron monitoring. The developed model can provide the typical fuel failure response and the detection limit, which depends on the various background signals in cover gas monitoring and delayed neutron monitoring. Using the FFD response model, we estimated the fuel failure response and detection limit of the Japanese experimental fast reactor JOYO. The detection limit of the JOYO FFD system was estimated by measuring the background signals. Following these studies, a complete computer code has now been produced with some improvements. This paper presents the details of the model, an outline of the developed computer code, the status of the JOYO FFD system, and a trial estimation of the JOYO FFD response and detection limit. (author)

  16. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature typically assumed a static risk-return relationship. However, several studies found anomalies in asset pricing modelling which captured the presence of risk instability. The dynamic model is proposed to offer a better alternative. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable, and therefore some assumptions have to be made. Hence, the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the problem of misspecification derived from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using B-splines, as one nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. The three popular asset pricing models investigated are the CAPM (Capital Asset Pricing Model), the Fama-French three-factor model and the Carhart four-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The result is more pronounced in Carhart's four-factor model.
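
    A sketch of the approach: build a B-spline basis over time, let the market beta be a linear combination of the basis functions, beta(t) = B(t) @ c, and estimate the coefficients by least squares. The data-generating values are synthetic and illustrative, and the basis is built from scratch via the Cox-de Boor recursion so the sketch is self-contained:

```python
import numpy as np

def bspline_basis(x, interior, degree=3):
    """Clamped B-spline basis matrix by the Cox-de Boor recursion (sketch)."""
    t = np.r_[[interior[0]] * degree, interior, [interior[-1]] * degree]
    # degree-0 basis: indicators of the half-open knot intervals
    B = np.array([(t[i] <= x) & (x < t[i + 1])
                  for i in range(len(t) - 1)], dtype=float).T
    last = np.nonzero(t[:-1] < t[-1])[0][-1]
    B[x == t[-1], last] = 1.0            # include the right endpoint
    for k in range(1, degree + 1):
        Bn = np.zeros((len(x), len(t) - k - 1))
        for i in range(len(t) - k - 1):
            d1, d2 = t[i + k] - t[i], t[i + k + 1] - t[i + 1]
            if d1 > 0:
                Bn[:, i] += (x - t[i]) / d1 * B[:, i]
            if d2 > 0:
                Bn[:, i] += (t[i + k + 1] - x) / d2 * B[:, i + 1]
        B = Bn
    return B

rng = np.random.default_rng(1)
n = 300
time = np.linspace(0.0, 1.0, n)
beta_true = 0.8 + 0.6 * np.sin(2 * np.pi * time)   # synthetic time-varying beta
f = rng.normal(0.0, 1.0, n)                        # market factor return
r = beta_true * f + rng.normal(0.0, 0.1, n)        # asset excess return

B = bspline_basis(time, np.linspace(0.0, 1.0, 8))  # 10 cubic basis functions
X = B * f[:, None]                                 # model: r_t = (B(t) @ c) * f_t + e_t
c, *_ = np.linalg.lstsq(X, r, rcond=None)
beta_hat = B @ c                                   # recovered beta(t)
```

    The same design-matrix trick extends to the three- and four-factor models by stacking one spline-weighted column block per factor.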

  17. Critical infrastructure protection decision support system decision model : overview and quick-start user's guide.

    Energy Technology Data Exchange (ETDEWEB)

    Samsa, M.; Van Kuiken, J.; Jusko, M.; Decision and Information Sciences

    2008-12-01

    The Critical Infrastructure Protection Decision Support System Decision Model (CIPDSS-DM) is a useful tool for comparing the effectiveness of alternative risk-mitigation strategies on the basis of CIPDSS consequence scenarios. The model is designed to assist analysts and policy makers in evaluating and selecting the most effective risk-mitigation strategies, as affected by the importance assigned to various impact measures and the likelihood of an incident. A typical CIPDSS-DM decision map plots the relative preference of alternative risk-mitigation options versus the annual probability of an undesired incident occurring once during the protective life of the investment, assumed to be 20 years. The model also enables other types of comparisons, including a decision map that isolates a selected impact variable and displays the relative preference for the options of interest, parameterized on the basis of the contribution of the isolated variable to total impact as well as the likelihood of the incident. Satisfaction/regret analysis further assists the analyst or policy maker in evaluating the confidence with which one option can be selected over another.
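The 20-year protective-life assumption in the decision map converts an annual incident probability into a lifetime one. A hedged sketch of that conversion, with a toy expected-cost comparison of two hypothetical options (the impact, investment, and residual-risk numbers are invented for illustration and are not CIPDSS-DM parameters):

```python
def lifetime_probability(p_annual, years=20):
    """Probability that the undesired incident occurs at least once during the protective life."""
    return 1.0 - (1.0 - p_annual) ** years

def expected_cost(p_annual, impact, investment, residual_fraction, years=20):
    """Up-front investment plus expected residual impact over the protective life."""
    return investment + lifetime_probability(p_annual, years) * impact * residual_fraction

# Toy comparison: cheap/weak mitigation vs. expensive/strong mitigation
option_a = expected_cost(0.02, impact=100.0, investment=5.0, residual_fraction=0.5)
option_b = expected_cost(0.02, impact=100.0, investment=15.0, residual_fraction=0.1)
```

Sweeping `p_annual` and recording where the preferred option flips reproduces the flavor of a decision map: at low incident probabilities the cheap option wins, while above the crossover the stronger mitigation is preferred.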

  18. Attention and Word Learning in Autistic, Language Delayed and Typically Developing Children

    Directory of Open Access Journals (Sweden)

    Elena eTenenbaum

    2014-05-01

    Previous work has demonstrated that patterns of social attention hold predictive value for language development in typically developing infants. The goal of this research was to explore how patterns of attention in autistic, language delayed, and typically developing children relate to early word learning and language abilities. We tracked patterns of eye movements to faces and objects while children watched videos of a woman teaching them a series of new words. Subsequent test trials measured participants' recognition of these novel word-object pairings. Results indicated that greater attention to the speaker's mouth was related to higher scores on standardized measures of language development for autistic and typically developing children (but not for language delayed children). This effect was mediated by age for typically developing, but not autistic, children. When effects of age were controlled for, attention to the mouth among language delayed participants was negatively correlated with standardized measures of language learning. Attention to the speaker's mouth and eyes while she was teaching the new words was also predictive of faster recognition of the newly learned words among autistic children. These results suggest that language delays among children with autism may be driven in part by aberrant social attention, and that the mechanisms underlying these delays may differ from those in language delayed participants without autism.

  19. Probabilistic Harmonic Analysis on Distributed Photovoltaic Integration Considering Typical Weather Scenarios

    Science.gov (United States)

    Bin, Che; Ruoying, Yu; Dongsheng, Dang; Xiangyan, Wang

    2017-05-01

    Distributed generation (DG) integrated into the network causes harmonic pollution, which can damage electrical devices and affect the normal operation of the power system. On the other hand, owing to the randomness of wind and solar irradiation, the output of DG is also random, which leads to uncertainty in the harmonics it generates. Thus, probabilistic methods are needed to analyse the impacts of DG integration. In this work we studied the probabilistic distribution of harmonic voltage and the harmonic distortion in a distribution network after the integration of a distributed photovoltaic (DPV) system under different weather conditions, namely sunny, cloudy, rainy and snowy days. The probabilistic distribution function of the DPV output power in each typical weather condition was acquired via maximum likelihood estimation. The Monte Carlo simulation method was adopted to calculate the probabilistic distribution of harmonic voltage content at different harmonic orders, as well as the total harmonic distortion (THD), in typical weather conditions. The case study was based on the IEEE 33-bus system, and the results for the probabilistic distribution of harmonic voltage content as well as THD in typical weather conditions were compared.
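The Monte Carlo step described above can be sketched in a few lines: draw PV output from a fitted weather-dependent distribution, scale assumed harmonic injections with output, and accumulate the THD distribution. The Beta(2, 5) output model and the per-unit harmonic magnitudes below are placeholder assumptions, not the paper's fitted values:

```python
import numpy as np

def thd(harmonic_rms, fundamental_rms):
    """Total harmonic distortion: RMS sum of harmonic content over the fundamental."""
    return np.sqrt(np.sum(np.square(harmonic_rms), axis=-1)) / fundamental_rms

rng = np.random.default_rng(1)
n_trials = 10_000
# Assumed: normalized DPV output on a cloudy day follows a Beta(2, 5) distribution
p_pv = rng.beta(2.0, 5.0, size=n_trials)
# Assumed: per-unit 5th, 7th and 11th harmonic injections scale linearly with output
base_harmonics = np.array([0.04, 0.03, 0.01])
thd_samples = thd(p_pv[:, None] * base_harmonics, 1.0)
thd_95 = np.percentile(thd_samples, 95)   # a 95th-percentile distortion level
```

Repeating the sampling with the distribution fitted for each weather type yields the per-weather THD distributions that the study compares.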

  20. Suspended Sediment Dynamics in the Macrotidal Seine Estuary (France): 2. Numerical Modeling of Sediment Fluxes and Budgets Under Typical Hydrological and Meteorological Conditions

    Science.gov (United States)

    Schulz, E.; Grasso, F.; Le Hir, P.; Verney, R.; Thouvenin, B.

    2018-01-01

    Understanding the sediment dynamics in an estuary is important for its morphodynamic and ecological assessment as well as, in case of an anthropogenically controlled system, for its maintenance. However, the quantification of sediment fluxes and budgets is extremely difficult from in-situ data and requires thoroughly validated numerical models. In the study presented here, sediment fluxes and budgets in the lower Seine Estuary were quantified and investigated from seasonal to annual time scales with respect to realistic hydro- and meteorological conditions. A realistic three-dimensional process-based hydro- and sediment-dynamic model was used to quantify mud and sand fluxes through characteristic estuarine cross-sections. In addition to a reference experiment with typical forcing, three experiments were carried out and analyzed, each differing from the reference experiment in either river discharge or wind and waves so that the effects of these forcings could be separated. Hydro- and meteorological conditions affect the sediment fluxes and budgets in different ways and at different locations. Single storm events induce strong erosion in the lower estuary and can have a significant effect on the sediment fluxes offshore of the Seine Estuary mouth, with the flux direction depending on the wind direction. Spring tides cause significant up-estuary fluxes at the mouth. A high river discharge drives barotropic down-estuary fluxes at the upper cross-sections, but baroclinic up-estuary fluxes at the mouth and offshore so that the lower estuary gains sediment during wet years. This behavior is likely to be observed worldwide in estuaries affected by density gradients and turbidity maximum dynamics.

  1. Non-linear thermal and structural analysis of a typical spent fuel silo

    International Nuclear Information System (INIS)

    Alvarez, L.M.; Mancini, G.R.; Spina, O.A.F.; Sala, G.; Paglia, F.

    1993-01-01

    A numerical method for the non-linear structural analysis of a typical reinforced concrete spent fuel silo under thermal loads is proposed. The numerical time integration was performed by means of a time-explicit axisymmetric finite-difference numerical operator. An analysis was made of the influence of heat, viscoelasticity and cracking on the concrete behaviour between the concrete pouring stage and the first period of the silo's normal operation. The following contributions were considered in the heat generation and transmission process: heat generated during the concrete's hardening stage, solar radiation effects, natural convection, and spent-fuel heat generation. For the modelling of the reinforced concrete behaviour, use was made of a simplified formulation of visco-elastic effects, thermal cracking, and steel reinforcement. A comparison between characteristic temperature values obtained from the numerical integration process and empirical data obtained from a 1:1 scale prototype was also carried out. (author)

  2. Scaling and constitutive relationships in downcomer modeling

    International Nuclear Information System (INIS)

    Daly, B.J.; Harlow, F.H.

    1978-12-01

    Constitutive relationships to describe mass and momentum exchange in multiphase flow in a pressurized water reactor downcomer are presented. Momentum exchange between the phases is described by the product of the flux of momentum available for exchange and the effective area for interaction. The exchange of mass through condensation is assumed to occur along a distinct condensation boundary separating steam at saturation temperature from water in which the temperature falls off roughly linearly with distance from the boundary. Because of the abundance of nucleation sites in a typical churning flow in a downcomer, we propose an equilibrium evaporation process that produces sufficient steam per unit time to keep the water perpetually cooled to the saturation temperature. The transport equations, constitutive models, and boundary conditions used in the K-TIF numerical method are nondimensionalized to obtain scaling relationships for two-phase flow in the downcomer. The results indicate that, subject to idealized thermodynamic and hydraulic constraints, exact mathematical scaling can be achieved. Experiments are proposed to isolate the effects of parameters that contribute to mass, momentum, and energy exchange between the phases.

  3. Relationships between protein-encoding gene abundance and corresponding process are commonly assumed yet rarely observed

    Science.gov (United States)

    Rocca, Jennifer D.; Hall, Edward K.; Lennon, Jay T.; Evans, Sarah E.; Waldrop, Mark P.; Cotner, James B.; Nemergut, Diana R.; Graham, Emily B.; Wallenstein, Matthew D.

    2015-01-01

    For any enzyme-catalyzed reaction to occur, the corresponding protein-encoding genes and transcripts are necessary prerequisites. Thus, a positive relationship between the abundance of gene or transcripts and corresponding process rates is often assumed. To test this assumption, we conducted a meta-analysis of the relationships between gene and/or transcript abundances and corresponding process rates. We identified 415 studies that quantified the abundance of genes or transcripts for enzymes involved in carbon or nitrogen cycling. However, in only 59 of these manuscripts did the authors report both gene or transcript abundance and rates of the appropriate process. We found that within studies there was a significant but weak positive relationship between gene abundance and the corresponding process. Correlations were not strengthened by accounting for habitat type, differences among genes or reaction products versus reactants, suggesting that other ecological and methodological factors may affect the strength of this relationship. Our findings highlight the need for fundamental research on the factors that control transcription, translation and enzyme function in natural systems to better link genomic and transcriptomic data to ecosystem processes.

  4. Is the Perception of 3D Shape from Shading Based on Assumed Reflectance and Illumination?

    Directory of Open Access Journals (Sweden)

    James T. Todd

    2014-10-01

    The research described in the present article was designed to compare three types of image shading: one generated with a Lambertian BRDF and homogeneous illumination such that image intensity was determined entirely by local surface orientation irrespective of position; one that was textured with a linear intensity gradient, such that image intensity was determined entirely by local surface position irrespective of orientation; and another that was generated with a Lambertian BRDF and inhomogeneous illumination such that image intensity was influenced by both position and orientation. A gauge figure adjustment task was used to measure observers' perceptions of local surface orientation on the depicted surfaces, and the probe points included 60 pairs of regions that both had the same orientation. The results show clearly that observers' perceptions of these three types of stimuli were remarkably similar, and that probe regions with similar apparent orientations could have large differences in image intensity. This latter finding is incompatible with any process for computing shape from shading that assumes any plausible reflectance function combined with any possible homogeneous illumination.

  5. Inversion assuming weak scattering

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus

    2013-01-01

    due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...

  6. Analogical Reasoning Ability in Autistic and Typically Developing Children

    Science.gov (United States)

    Morsanyi, Kinga; Holyoak, Keith J.

    2010-01-01

    Recent studies (e.g. Dawson et al., 2007) have reported that autistic people perform in the normal range on the Raven Progressive Matrices test, a formal reasoning test that requires integration of relations as well as the ability to infer rules and form high-level abstractions. Here we compared autistic and typically developing children, matched…

  7. Dual Processing Model for Medical Decision-Making: An Extension to Diagnostic Testing.

    Science.gov (United States)

    Tsalatsanis, Athanasios; Hozo, Iztok; Kumar, Ambuj; Djulbegovic, Benjamin

    2015-01-01

    Dual Processing Theories (DPT) assume that human cognition is governed by two distinct types of processes typically referred to as type 1 (intuitive) and type 2 (deliberative). Based on DPT we have derived a Dual Processing Model (DPM) to describe and explain therapeutic medical decision-making. The DPM model indicates that doctors decide to treat when treatment benefits outweigh its harms, which occurs when the probability of the disease is greater than the so called "threshold probability" at which treatment benefits are equal to treatment harms. Here we extend our work to include a wider class of decision problems that involve diagnostic testing. We illustrate applicability of the proposed model in a typical clinical scenario considering the management of a patient with prostate cancer. To that end, we calculate and compare two types of decision-thresholds: one that adheres to expected utility theory (EUT) and the second according to DPM. Our results showed that the decisions to administer a diagnostic test could be better explained using the DPM threshold. This is because such decisions depend on objective evidence of test/treatment benefits and harms as well as type 1 cognition of benefits and harms, which are not considered under EUT. Given that type 1 processes are unique to each decision-maker, this means that the DPM threshold will vary among different individuals. We also showed that when type 1 processes exclusively dominate decisions, ordering a diagnostic test does not affect a decision; the decision is based on the assessment of benefits and harms of treatment. These findings could explain variations in the treatment and diagnostic patterns documented in today's clinical practice.
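Under EUT, the "threshold probability" referenced above is the classic treatment threshold at which expected benefits equal expected harms. A minimal sketch of that baseline decision rule (the benefit/harm magnitudes are placeholders; the DPM threshold adds decision-maker-specific type 1 weighting terms that are not reproduced here):

```python
def eut_threshold(benefit, harm):
    """Disease probability at which expected treatment benefit equals expected harm.

    Under EUT, one treats when p(disease) > harm / (benefit + harm)."""
    return harm / (benefit + harm)

def decide_treatment(p_disease, benefit, harm):
    """True if treating maximizes expected utility at disease probability p_disease."""
    return p_disease > eut_threshold(benefit, harm)
```

For example, a treatment whose benefit is nine times its harm has an EUT threshold of 0.1; the DPM variant shifts this threshold per individual according to their intuitive (type 1) assessment of those same benefits and harms.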

  8. Dual Processing Model for Medical Decision-Making: An Extension to Diagnostic Testing.

    Directory of Open Access Journals (Sweden)

    Athanasios Tsalatsanis

    Dual Processing Theories (DPT) assume that human cognition is governed by two distinct types of processes typically referred to as type 1 (intuitive) and type 2 (deliberative). Based on DPT we have derived a Dual Processing Model (DPM) to describe and explain therapeutic medical decision-making. The DPM model indicates that doctors decide to treat when treatment benefits outweigh its harms, which occurs when the probability of the disease is greater than the so called "threshold probability" at which treatment benefits are equal to treatment harms. Here we extend our work to include a wider class of decision problems that involve diagnostic testing. We illustrate applicability of the proposed model in a typical clinical scenario considering the management of a patient with prostate cancer. To that end, we calculate and compare two types of decision-thresholds: one that adheres to expected utility theory (EUT) and the second according to DPM. Our results showed that the decisions to administer a diagnostic test could be better explained using the DPM threshold. This is because such decisions depend on objective evidence of test/treatment benefits and harms as well as type 1 cognition of benefits and harms, which are not considered under EUT. Given that type 1 processes are unique to each decision-maker, this means that the DPM threshold will vary among different individuals. We also showed that when type 1 processes exclusively dominate decisions, ordering a diagnostic test does not affect a decision; the decision is based on the assessment of benefits and harms of treatment. These findings could explain variations in the treatment and diagnostic patterns documented in today's clinical practice.

  9. Sludge, biosolids, and the propaganda model of communication.

    Science.gov (United States)

    Rampton, Sheldon

    2002-01-01

    The Water Environment Federation's elaborate effort to rename sewage sludge as "biosolids" is an example in practice of the "propaganda model" of communications, which sees its task as indoctrinating target audiences with ideas favorable to the interests of the communicators. The propaganda model assumes that members of the public are irrational and focuses therefore on symbolic and emotional aspects of communication. This approach to communicating arouses public resentment rather than trust. In place of a "propaganda model," public officials should adopt a "democratic model," which assumes that audiences are rational and intellectually capable of meaningful participation in decision-making.

  10. 16 CFR Figure 5 to Part 1610 - An Example of a Typical Gas Shield

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false An Example of a Typical Gas Shield 5 Figure 5 to Part 1610 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... Example of a Typical Gas Shield ER25MR08.004 ...

  11. 16 CFR Figure 4 to Part 1610 - An Example of a Typical Indicator Finger

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false An Example of a Typical Indicator Finger 4 Figure 4 to Part 1610 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... Example of a Typical Indicator Finger ER25MR08.003 ...

  12. The Roots of Disillusioned American Dream in Typical American

    Institute of Scientific and Technical Information of China (English)

    古冬华

    2016-01-01

    Typical American is one of Gish Jen's notable novels, one that caught the attention of the American literary circle. The motif of the disillusioned American dream can be seen clearly through the experiences of the three main characters. From the perspectives of consumer culture and cultural conflicts, this paper analyzes the roots of the disillusioned American dream in the novel.

  13. Typical Versus Atypical Anorexia Nervosa Among Adolescents: Clinical Characteristics and Implications for ICD-11.

    Science.gov (United States)

    Silén, Yasmina; Raevuori, Anu; Jüriloo, Elisabeth; Tainio, Veli-Matti; Marttunen, Mauri; Keski-Rahkonen, Anna

    2015-09-01

    There is scant research on the clinical utility of differentiating International Classification of Diseases (ICD) 10 diagnoses F50.0 anorexia nervosa (typical AN) and F50.1 atypical anorexia. We reviewed systematically records of 47 adolescents who fulfilled criteria for ICD-10 F50.0 (n = 34) or F50.1 (n = 13), assessing the impact of diagnostic subtype, comorbidity, background factors and treatment choices on recovery. Atypical AN patients were significantly older (p = 0.03), heavier (minimum body mass index 16.7 vs 15.1 kg/m², p = 0.003) and less prone to comorbidities (38% vs 71%, p = 0.04) and had shorter, less intensive and less costly treatments than typical AN patients. The diagnosis of typical versus atypical AN was the sole significant predictor of treatment success: recovery from atypical AN was 4.3 times (95% confidence interval [1.1, 17.5]) as likely as recovery from typical AN. Overall, our findings indicate that a broader definition of AN may dilute the prognostic value of the diagnosis, and therefore, ICD-11 should retain its distinction between typical and atypical AN. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  14. The importance of being 'well-placed': the influence of context on perceived typicality and esthetic appraisal of product appearance.

    Science.gov (United States)

    Blijlevens, Janneke; Gemser, Gerda; Mugge, Ruth

    2012-01-01

    Earlier findings have suggested that esthetic appraisal of product appearances is influenced by perceived typicality. However, prior empirical research on typicality and esthetic appraisal of product appearances has not explicitly taken context effects into account. In this paper, we investigate how a specific context influences perceived typicality and thus the esthetic appraisal of product appearances by manipulating the degree of typicality of a product's appearance and its context. The findings of two studies demonstrate that the perceived typicality of a product appearance and consequently its esthetic appraisal vary depending on the typicality of the context in which the product is presented. Specifically, contrast effects occur for product appearances that are perceived as typical. Typical product appearances are perceived as more typical and are more esthetically appealing when presented in an atypical context compared to when presented in a typical context. No differences in perceived typicality and esthetic appraisal were found for product appearances that are perceived as atypical. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Emotion dysregulation and dyadic conflict in depressed and typical adolescents: Evaluating concordance across psychophysiological and observational measures

    Science.gov (United States)

    Crowell, Sheila E.; Baucom, Brian R.; Yaptangco, Mona; Bride, Daniel; Hsiao, Ray; McCauley, Elizabeth; Beauchaine, Theodore P.

    2014-01-01

    Many depressed adolescents experience difficulty regulating their emotions. These emotion regulation difficulties appear to emerge in part from socialization processes within families and then generalize to other contexts. However, emotion dysregulation is typically assessed within the individual, rather than in the social relationships that shape and maintain dysregulation. In this study, we evaluated concordance of physiological and observational measures of emotion dysregulation during interpersonal conflict, using a multilevel actor-partner interdependence model (APIM). Participants were 75 mother-daughter dyads, including 50 depressed adolescents with or without a history of self-injury, and 25 typically developing controls. Behavior dysregulation was operationalized as observed aversiveness during a conflict discussion, and physiological dysregulation was indexed by respiratory sinus arrhythmia (RSA). Results revealed different patterns of concordance for control versus depressed participants. Controls evidenced a concordant partner (between-person) effect, and showed increased physiological regulation during minutes when their partner was more aversive. In contrast, clinical dyad members displayed a concordant actor (within-person) effect, becoming simultaneously physiologically and behaviorally dysregulated. Results inform current understanding of emotion dysregulation across multiple levels of analysis. PMID:24607894

  16. Matrix-algebra-based calculations of the time evolution of the binary spin-bath model for magnetization transfer.

    Science.gov (United States)

    Müller, Dirk K; Pampel, André; Möller, Harald E

    2013-05-01

    Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimations by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
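The matrix-algebra solution described above amounts to exponentiating the Bloch-McConnell generator over each pulse-sequence interval. A much-reduced sketch for the longitudinal two-pool system during free evolution; the pool sizes, relaxation and exchange rates are illustrative, and saturation and imaging pulses are omitted:

```python
import numpy as np
from scipy.linalg import expm

# Two-pool binary spin-bath model, longitudinal components only (no saturation).
# All rate and pool-size values below are illustrative assumptions.
R1a, R1b = 1.0, 1.0       # longitudinal relaxation rates of water (a) and macromolecules (b), 1/s
M0a, M0b = 1.0, 0.15      # equilibrium magnetizations (relative pool sizes)
k_ab = 2.0                # exchange rate a -> b, 1/s
k_ba = k_ab * M0a / M0b   # detailed balance: k_ab * M0a = k_ba * M0b

A = np.array([[-(R1a + k_ab), k_ba],
              [k_ab, -(R1b + k_ba)]])
b = np.array([R1a * M0a, R1b * M0b])

def evolve(M, t):
    """Propagate dM/dt = A @ M + b over duration t with one augmented matrix exponential."""
    aug = np.zeros((3, 3))
    aug[:2, :2] = A
    aug[:2, 2] = b
    P = expm(aug * t)
    return P[:2, :2] @ M + P[:2, 2]
```

Chaining one such propagator per saturation, relaxation, or imaging interval (with the generator modified accordingly) is the essence of the full approach; the polynomial interpolation mentioned in the abstract is an efficiency layer on top of these exponentials.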

  17. Sibling negotiation

    OpenAIRE

    Rufus A. Johnstone; Alexandre Roulin

    2003-01-01

    Current discussions of offspring begging typically assume either that it is a signal directed at parents or that it represents a form of scramble competition to gain access to them. However, offspring might also display to inform nest mates that they will contest the next food item to be delivered; in other words, begging (possibly in the absence of parents) might serve purely as a form of negotiation among siblings. Here, we develop a game-theoretical model of this possibility. We assume tha...

  18. Monthly sediment discharge changes and estimates in a typical karst catchment of southwest China

    Science.gov (United States)

    Li, Zhenwei; Xu, Xianli; Xu, Chaohao; Liu, Meixian; Wang, Kelin; Yi, Ruzhou

    2017-12-01

    As one of the largest karst regions in the world, southwest China is experiencing severe soil erosion due to its special geological conditions, inappropriate land use, and low soil loss tolerance. Knowledge and accurate estimates of changes in sediment discharge rates are important for finding measures to effectively control sediment delivery. This study investigated temporal variation in monthly sediment discharge (SD) and developed sediment rating curves and a state-space model to estimate SD. Monthly water discharge, SD, precipitation, potential evapotranspiration, and the normalized difference vegetation index during 2003-2015, collected from a typical karst catchment of the Yujiang River, were analyzed in the present study. A Mann-Kendall test and Morlet wavelet analysis were employed to detect the changes in SD. Results indicated a decreasing trend in sediment discharge at both monthly and annual scales. Water and sediment discharge both had a significant 1-year period, implying that water discharge has a substantial influence on SD. The best state-space model using water discharge was simple but effective, accounting for 99% of the variation in SD; the sediment rating curves, however, represented only 78% of the variation in SD. This study provides insight into the possibility of accurately estimating SD using only water discharge with a state-space model approach, which is recommended as an effective method for quantifying the temporal relationships between SD and its driving factors in karst regions of southwest China.
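A sediment rating curve of the kind benchmarked above is conventionally a power law SD = a·Q^b fitted in log-log space. A minimal sketch on synthetic data; the coefficients and discharge values are illustrative, not the catchment's:

```python
import numpy as np

def fit_rating_curve(q, sd):
    """Fit the rating curve SD = a * Q**b by least squares in log-log space."""
    b, log_a = np.polyfit(np.log(q), np.log(sd), 1)
    return np.exp(log_a), b

# Synthetic monthly water discharge and sediment discharge following SD = 2 * Q**1.5
q = np.array([5.0, 12.0, 30.0, 55.0, 80.0, 120.0])
sd = 2.0 * q ** 1.5
a, b = fit_rating_curve(q, sd)
```

The state-space alternative replaces this static (a, b) pair with a state that evolves month to month, which is why it can absorb the risk of changing sediment supply that a fixed rating curve misses.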

  19. A Model-Based Systems Engineering Methodology for Employing Architecture In System Analysis: Developing Simulation Models Using Systems Modeling Language Products to Link Architecture and Analysis

    Science.gov (United States)

    2016-06-01

    Figure 5: Spiral Model. Figure 6: ... Memorandum No. 1. Tallahassee, FL: Florida Department of Transportation. ... The spiral model of system development, first introduced in Boehm ..., ... system capabilities into the waterfall model would prove quite difficult, the spiral model assumes that available technologies will change over the ...

  20. Multiple imputation to account for measurement error in marginal structural models

    Science.gov (United States)

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4 (95% CI: 0.2, 0.7)) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  1. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR]: 1.2 [95% confidence interval [CI] = 0.6, 2.3]). The HR for current smoking and therapy [0.4 (95% CI = 0.2, 0.7)] was similar to the HR for no smoking and therapy (0.4; 95% CI = 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
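The inverse probability weights at the core of the marginal structural model fit described above can be sketched as follows. This is a stabilized-weight illustration on toy data, with a placeholder treatment-probability input; it is not the study's actual estimation pipeline and omits the multiple-imputation step:

```python
import numpy as np

def stabilized_ip_weights(exposure, p_exposure):
    """Stabilized inverse probability weights for a binary exposure.

    exposure: 0/1 array of observed treatment; p_exposure: model-based
    probability of treatment given confounders (assumed supplied elsewhere)."""
    p_marginal = exposure.mean()
    numerator = np.where(exposure == 1, p_marginal, 1.0 - p_marginal)
    denominator = np.where(exposure == 1, p_exposure, 1.0 - p_exposure)
    return numerator / denominator

# Toy data: when the treatment model matches the marginal rate, all weights are 1
exposure = np.array([1, 0, 1, 0])
weights = stabilized_ip_weights(exposure, np.full(4, 0.5))
```

In the measurement-error extension, the misclassified covariate (smoking status) is multiply imputed from the validation subgroup, the weighted model is fit within each completed dataset, and the estimates are pooled across imputations.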

  2. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    Science.gov (United States)

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, east of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. A multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE = 0.53 dB and R² = 0.88) slightly improved the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than did the regression technique. The developed models based on fuzzy approaches are useful prediction tools that give professionals the opportunity to make sound decisions about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  3. Radiative transfer model for contaminated rough slabs.

    Science.gov (United States)

    Andrieu, François; Douté, Sylvain; Schmidt, Frédéric; Schmitt, Bernard

    2015-11-01

    We present a semi-analytical model to simulate the bidirectional reflectance distribution function (BRDF) of a rough slab layer containing impurities. This model has been optimized for fast computation in order to analyze massive hyperspectral data by a Bayesian approach. We designed it for planetary surface ice studies, but it could be used for other purposes. It estimates the bidirectional reflectance of a rough slab of material containing inclusions, overlying an optically thick medium (a semi-infinite or stratified medium, for instance granular material). The inclusions are assumed to be close to spherical and composed of any material other than the ice matrix: any other type of ice, a mineral, or even bubbles, defined by their optical constants. We assume a low roughness and we consider geometrical optics conditions; the model is thus applicable for inclusions larger than the considered wavelength. Scattering by the inclusions is assumed to be isotropic. The model has a fast computational implementation and is therefore suitable for high-resolution hyperspectral data analysis.

  4. The Effect of Parenthood Education on Self- Efficacy and Parent Effectiveness in an Alternative High School Student Population

    Science.gov (United States)

    Meyer, Becky Weller; Jain, Sachin; Canfield-Davis, Kathy

    2011-01-01

    Adolescents defined as at-risk typically lack healthy models of parenting and receive no parenthood education prior to assuming the parenting role. Unless a proactive approach is implemented, the cyclic pattern of dysfunctional parenting, including higher rates of teen pregnancy, increased childhood abuse, low educational attainment,…

  5. Previous experiences shape adaptive mate preferences

    NARCIS (Netherlands)

    Fawcett, Tim W.; Bleay, Colin

    2009-01-01

    Existing models of mate choice assume that individuals have perfect knowledge of their own ability to attract a mate and can adjust their preferences accordingly. However, real animals will typically be uncertain of their own attractiveness. A potentially useful source of information on this is the

  6. Modeling volatile organic compounds sorption on dry building materials using double-exponential model

    International Nuclear Information System (INIS)

    Deng, Baoqing; Ge, Di; Li, Jiajia; Guo, Yuan; Kim, Chang Nyung

    2013-01-01

    A double-exponential surface sink model for VOCs sorption on building materials is presented. Here, the diffusion of VOCs in the material is neglected and the material is viewed as a surface sink. The VOCs concentration in the air adjacent to the material surface is introduced and assumed to always maintain equilibrium with the material-phase concentration. It is assumed that the sorption can be described by mass transfer between the room air and the air adjacent to the material surface. The mass transfer coefficient is evaluated from an empirical correlation, and the equilibrium constant can be obtained by linear fitting to the experimental data. The present model is validated through experiments in small and large test chambers. The predicted results accord well with the experimental data in both the adsorption and desorption stages. The model avoids the ambiguity of model constants found in other surface sink models and is easy to scale up.

  7. Introduction to Force-Dependent Kinematics: Theory and Application to Mandible Modeling.

    Science.gov (United States)

    Skipper Andersen, Michael; de Zee, Mark; Damsgaard, Michael; Nolte, Daniel; Rasmussen, John

    2017-09-01

    Knowledge of the muscle, ligament, and joint forces is important when planning orthopedic surgeries. Since these quantities cannot be measured in vivo under normal circumstances, the best alternative is to estimate them using musculoskeletal models. These models typically assume idealized joints, which are sufficient for general investigations but insufficient if the joint in focus is far from an idealized joint. The purpose of this study was to provide the mathematical details of a novel musculoskeletal modeling approach, called force-dependent kinematics (FDK), capable of simultaneously computing muscle, ligament, and joint forces as well as internal joint displacements governed by contact surfaces and ligament structures. The method was implemented in the AnyBody Modeling System and used to develop a subject-specific mandible model, which was compared to a point-on-plane (POP) model and validated against joint kinematics measured with a custom-built brace during unloaded emulated chewing, open and close, and protrusion movements. Generally, both joint models estimated the joint kinematics well, with the POP model performing slightly better (root-mean-square deviation (RMSD) of less than 0.75 mm for the POP model and 1.7 mm for the FDK model). However, substantial differences were observed when comparing the estimated joint forces (RMSD up to 24.7 N), demonstrating the dependency on the joint model. Although the presented mandible model still leaves room for improvement, this study shows the capabilities of the FDK methodology for creating joint models that take the geometry and joint elasticity into account.

  8. Comparing Levels of Mastery Motivation in Children with Cerebral Palsy (CP) and Typically Developing Children.

    Science.gov (United States)

    Salavati, Mahyar; Vameghi, Roshanak; Hosseini, Seyed Ali; Saeedi, Ahmad; Gharib, Masoud

    2018-02-01

    The present study aimed to compare mastery motivation in school-age children with CP and typically developing children. 229 parents of children with cerebral palsy (CP) and 212 parents of typically developing children participated in this cross-sectional study and completed demographic and DMQ18 forms; the remaining information was collected by an occupational therapist. Average age was 127.12±24.56 months for children with CP and 128.08±15.90 months for typically developing children. An independent t-test was used to compare the two groups, and Pearson correlation coefficients (computed in SPSS) were used to examine associations with other factors. The CP and typically developing groups differed on all DMQ subscales (P < 0.05), and mastery motivation was negatively correlated with Manual Ability Classification System level (r = -0.782, P < 0.001) and cognitive impairment (r = -0.161, P < 0.05). Children with CP had lower mastery motivation than typically developing children. Rehabilitation efforts should aim to enhance motivation, so that children feel empowered to attempt tasks and practice.

  9. Optimal probabilistic energy management in a typical micro-grid based-on robust optimization and point estimate method

    International Nuclear Information System (INIS)

    Alavi, Seyed Arash; Ahmadian, Ali; Aliakbar-Golkar, Masoud

    2015-01-01

    Highlights: • Energy management is necessary in the active distribution network to reduce operation costs. • Uncertainty modeling is essential in energy management studies in active distribution networks. • The point estimate method is a suitable method for uncertainty modeling due to its lower computation time and acceptable accuracy. • In the absence of a probability distribution function (PDF), robust optimization has a good ability for uncertainty modeling. - Abstract: Uncertainty can be defined as the probability of difference between the forecasted value and the real value. The smaller this probability, the lower the operation cost of the power system. This necessitates modeling of system random variables (such as the output power of renewable resources and the load demand) with appropriate and practicable methods. In this paper, an adequate procedure is proposed for optimal energy management of a typical micro-grid with regard to the relevant uncertainties. The point estimate method is applied for modeling the wind power and solar power uncertainties, and a robust optimization technique is utilized to model load demand uncertainty. Finally, a comparison is made between deterministic and probabilistic management in different scenarios, and their results are analyzed and evaluated.
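    As an illustration of the point estimate method, the sketch below uses Hong's 2m scheme for symmetric, uncorrelated inputs: each of the m random variables is evaluated at mu ± sqrt(m)·sigma while the others stay at their means, and the 2m weighted evaluations yield the output mean and standard deviation. The cost function and moments are hypothetical, not taken from the paper.

```python
import math

def pem_2m(f, means, sigmas):
    """Hong's 2m point estimate method (zero-skewness case)."""
    m = len(means)
    loc = math.sqrt(m)      # standard location for symmetric inputs
    w = 1.0 / (2 * m)       # equal weight for each of the 2m points
    e1 = e2 = 0.0
    for i in range(m):
        for sign in (1, -1):
            x = list(means)
            x[i] += sign * loc * sigmas[i]
            y = f(x)
            e1 += w * y     # first raw moment
            e2 += w * y * y # second raw moment
    return e1, math.sqrt(max(e2 - e1 * e1, 0.0))

# Hypothetical micro-grid cost: load bought at 50/unit, wind/solar offset it.
cost = lambda x: 50 * x[2] - 30 * x[0] - 25 * x[1]  # x = [wind, solar, load]
mean_cost, std_cost = pem_2m(cost, [2.0, 1.5, 10.0], [0.6, 0.4, 1.0])
print(round(mean_cost, 1), round(std_cost, 1))
```

    For this linear cost function the scheme is exact; for the nonlinear dispatch problems studied in the paper it is an approximation that trades some accuracy for requiring only 2m model evaluations instead of a full Monte Carlo run.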

  10. Ecosystem responses to warming and watering in typical and desert steppes

    Science.gov (United States)

    Xu, Zhenzhu; Hou, Yanhui; Zhang, Lihua; Liu, Tao; Zhou, Guangsheng

    2016-10-01

    Global warming is projected to continue, leading to intense fluctuations in precipitation and heat waves and thereby affecting the productivity and the relevant biological processes of grassland ecosystems. Here, we determined the functional responses to warming and altered precipitation in both typical and desert steppes. The results showed that watering markedly increased the aboveground net primary productivity (ANPP) in a typical steppe during a drier year and in a desert steppe over two years, whereas warming manipulation had no significant effect. The soil microbial biomass carbon (MBC) and the soil respiration (SR) were increased by watering in both steppes, but the SR was significantly decreased by warming in the desert steppe only. The inorganic nitrogen components varied irregularly, with generally lower levels in the desert steppe. The belowground traits of soil total organic carbon (TOC) and the MBC were more closely associated with the ANPP in the desert than in the typical steppes. The results showed that the desert steppe with lower productivity may respond strongly to precipitation changes, particularly with warming, highlighting the positive effect of adding water with warming. Our study implies that the habitat- and year-specific responses to warming and watering should be considered when predicting an ecosystem’s functional responses under climate change scenarios.

  11. How Does the Electron Dynamics Affect the Reconnection Rate in a Typical Reconnection Layer?

    Science.gov (United States)

    Hesse, Michael

    2009-01-01

    The question of whether the microscale controls the macroscale or vice-versa remains one of the most challenging problems in plasmas. A particular topic of interest within this context is collisionless magnetic reconnection, where both points of views are espoused by different groups of researchers. This presentation will focus on this topic. We will begin by analyzing the properties of electron diffusion region dynamics both for guide field and anti-parallel reconnection, and how they can be scaled to different inflow conditions. As a next step, we will study typical temporal variations of the microscopic dynamics with the objective of understanding the potential for secular changes to the macroscopic system. The research will be based on a combination of analytical theory and numerical modeling.

  12. Linking Geomechanical Models with Observations of Microseismicity during CCS Operations

    Science.gov (United States)

    Verdon, J.; Kendall, J.; White, D.

    2012-12-01

    During CO2 injection for the purposes of carbon capture and storage (CCS), injection-induced fracturing of the overburden represents a key risk to storage integrity. Fractures in a caprock provide a pathway along which buoyant CO2 can rise and escape the storage zone. Therefore the ability to link field-scale geomechanical models with field geophysical observations is of paramount importance to guarantee secure CO2 storage. Accurate location of microseismic events identifies where brittle failure has occurred on fracture planes. This is a manifestation of the deformation induced by CO2 injection. As the pore pressure is increased during injection, effective stress is decreased, leading to inflation of the reservoir and deformation of surrounding rocks, which creates microseismicity. The deformation induced by injection can be simulated using finite-element mechanical models. Such a model can be used to predict when and where microseismicity is expected to occur. However, typical elements in a field-scale mechanical model have decameter scales, while the rupture size for microseismic events is typically of the order of 1 square meter. This means that mapping modeled stress changes to predictions of microseismic activity can be challenging. Where larger-scale faults have been identified, they can be included explicitly in the geomechanical model. Where movement is simulated along these discrete features, it can be assumed that microseismicity will occur. However, microseismic events typically occur on fracture networks that are too small to be simulated explicitly in a field-scale model. Therefore, the likelihood of microseismicity occurring must be estimated within a finite element that does not contain explicitly modeled discontinuities. This can be done in a number of ways, including the utilization of measures such as the closeness of the stress state to predetermined failure criteria, either for planes with a defined orientation (the Mohr-Coulomb criteria) for

  13. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

    The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows that the model captures reasonably well the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.

  14. Perfect-use and typical-use Pearl Index of a contraceptive mobile app.

    Science.gov (United States)

    Berglund Scherwitzl, E; Lundberg, O; Kopp Kallner, H; Gemzell Danielsson, K; Trussell, J; Scherwitzl, R

    2017-12-01

    The Natural Cycles application is a fertility awareness-based contraceptive method that uses dates of menstruation and basal body temperature to inform couples whether protected intercourse is needed to prevent pregnancies. Our purpose in this study was to investigate the contraceptive efficacy of the mobile application by evaluating the perfect- and typical-use Pearl Index. In this prospective observational study, 22,785 users of the application logged a total of 18,548 woman-years of data into the application. We used these data to calculate typical- and perfect-use Pearl Indexes, as well as 13-cycle pregnancy rates using life-table analysis. We found a typical-use Pearl Index of 6.9 pregnancies per 100 woman-years [95% confidence interval (CI): 6.5-7.2], corrected to 6.8 (95% CI: 6.4-7.2) when truncating users after 12 months. We estimated a 13-cycle typical-use failure rate of 8.3% (95% CI: 7.8-8.9). We found that the perfect-use Pearl Index was 1.0 pregnancy per 100 woman-years (95% CI: 0.5-1.5). Finally, we estimated that the rate of pregnancies from cycles where the application erroneously flagged a fertile day as infertile was 0.5 (95% CI: 0.4-0.7) per 100 woman-years. We estimated a discontinuation rate over 12 months of 54%. This study shows that the efficacy of a contraceptive mobile application is higher than usually reported for traditional fertility awareness-based methods. The application may contribute to reducing the unmet need for contraception. The measured typical- and perfect-use efficacies of the mobile application Natural Cycles are important parameters for women considering their contraceptive options as well as for the clinicians advising them. The large available data set in this paper allows for future studies on acceptability, for example, by studying the efficacy for different cohorts and geographic regions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
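    The two efficacy measures reported here follow standard definitions: the Pearl Index counts pregnancies per 100 woman-years of exposure, and the 13-cycle failure rate comes from a life-table product of per-cycle survival probabilities. A minimal sketch (the pregnancy counts and per-cycle rates below are illustrative, not the study's raw data):

```python
def pearl_index(pregnancies, woman_years):
    """Pregnancies per 100 woman-years of exposure."""
    return 100.0 * pregnancies / woman_years

def life_table_failure(per_cycle_rates):
    """Cumulative failure probability over consecutive cycles."""
    surviving = 1.0
    for f in per_cycle_rates:
        surviving *= 1.0 - f
    return 1.0 - surviving

print(pearl_index(128, 1854.8))           # ~6.9 per 100 woman-years
print(life_table_failure([0.0067] * 13))  # ~8.4% over 13 cycles
```

    The life-table estimate differs from a naive 13× multiplication of the per-cycle rate because each cycle's failure probability applies only to couples who have not already conceived.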

  15. Analysis of typical meteorological years in different climates of China

    International Nuclear Information System (INIS)

    Yang, Liu; Lam, Joseph C.; Liu, Jiaping

    2007-01-01

    Typical meteorological years (TMYs) for 60 cities in the five major climatic zones (severe cold, cold, hot summer and cold winter, hot summer and warm winter, mild) in China were investigated. Long-term (1971-2000) measured weather data such as dry bulb and dew point temperatures, wind speed and global solar radiation were gathered and analysed. A total of seven climatic indices were used to select the 12 typical meteorological months (TMMs) that made up the TMY for each city. In general, the cumulative distribution functions of the selected TMMs tended to follow their long-term counterparts quite well. There was no persistent trend of any particular year being more representative than the others, though 1978 and 1982 tended to be picked most often. This paper presents the work and its findings. Future work on the assessment of TMYs in building energy simulation is also discussed.
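    TMM selection of this kind is commonly based on Finkelstein-Schafer (FS) statistics: for each climatic index, the candidate month's empirical CDF is compared with the long-term CDF, and the FS values for the indices are combined, typically as a weighted sum. A single-index sketch with made-up data (the values and candidate years below are purely illustrative):

```python
import bisect

def ecdf(sorted_vals, x):
    """Empirical CDF: fraction of values <= x."""
    return bisect.bisect_right(sorted_vals, x) / len(sorted_vals)

def fs_statistic(month_vals, longterm_vals):
    """Finkelstein-Schafer statistic: mean absolute CDF difference,
    evaluated at the candidate month's data points."""
    m, lt = sorted(month_vals), sorted(longterm_vals)
    return sum(abs(ecdf(m, x) - ecdf(lt, x)) for x in m) / len(m)

# Pick the candidate year whose month best matches the long-term record.
longterm = [10, 12, 15, 18, 20, 22, 25, 28, 30, 14, 16, 19]
candidates = {1978: [11, 14, 16, 19, 21, 24], 1982: [2, 4, 5, 30, 33, 35]}
best = min(candidates, key=lambda y: fs_statistic(candidates[y], longterm))
print(best)
```

    A month identical to the long-term record scores FS = 0; larger FS values indicate a less representative candidate.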

  16. Testing electric field models using ring current ion energy spectra from the Equator-S ion composition (ESIC) instrument

    Directory of Open Access Journals (Sweden)

    L. M. Kistler

    During the main and early recovery phase of a geomagnetic storm on February 18, 1998, the Equator-S ion composition instrument (ESIC) observed spectral features in the energy range (5–15 keV/e) where the drift changes from being E × B dominated to being gradient and curvature drift dominated; such features typically reflect the differences in loss along the drift path. We compare the expected energy spectra, modeled using a Volland-Stern electric field and a Weimer electric field and assuming charge exchange along the drift path, with the observed energy spectra for H+ and O+. We find that using the Weimer electric field gives much better agreement with the spectral features, and with the observed losses. Neither model, however, accurately predicts the energies of the observed minima.

    Key words. Magnetospheric physics (energetic particles trapped; plasma convection; storms and substorms)

  17. Nonlinear modeling of magnetorheological energy absorbers under impact conditions

    International Nuclear Information System (INIS)

    Mao, Min; Hu, Wei; Choi, Young-Tai; Wereley, Norman M; Browne, Alan L; Ulicny, John; Johnson, Nancy

    2013-01-01

    Magnetorheological energy absorbers (MREAs) provide adaptive vibration and shock mitigation capabilities to accommodate varying payloads, vibration spectra, and shock pulses, as well as other environmental factors. A key performance metric is the dynamic range, which is defined as the ratio of the force at maximum field to the force in the absence of field. The off-state force is typically assumed to increase linearly with speed, but at the higher shaft speeds occurring in impact events, the off-state damping exhibits nonlinear velocity-squared damping effects. To improve understanding of MREA behavior under high-speed impact conditions, this study focuses on nonlinear MREA models that can more accurately predict MREA dynamic behavior for nominal impact speeds of up to 6 m s⁻¹. Three models were examined in this study. First, a nonlinear Bingham-plastic (BP) model incorporating Darcy friction and fluid inertia (Unsteady-BP) was formulated, where the force is proportional to the velocity. Second, a Bingham-plastic model incorporating minor loss factors and fluid inertia (Unsteady-BPM) was formulated to better account for high-speed behavior. Third, a hydromechanical (HM) analysis was developed to account for fluid compressibility and inertia as well as minor loss factors. These models were validated using drop test data obtained using the drop tower facility at the GM R&D Center for nominal drop speeds of up to 6 m s⁻¹.
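    The quasi-steady part of the force models compared here can be sketched as a Bingham-plastic law with an added velocity-squared (minor-loss) term. The coefficients below are hypothetical, and the unsteady fluid-inertia terms of the Unsteady-BP/BPM formulations are omitted:

```python
import math

def mrea_force(v, f_yield, c_visc, c_minor):
    """Quasi-steady MREA force (sign convention: force opposes motion history).

    f_yield : field-dependent yield force (0 in the off-state)
    c_visc  : viscous (Darcy) coefficient, contribution ~ v
    c_minor : minor-loss coefficient, contribution ~ v|v|
    """
    return f_yield * math.copysign(1.0, v) + c_visc * v + c_minor * v * abs(v)

# The off-state force looks nearly linear at low speed, but the v|v| term
# dominates near the 6 m/s impact speeds considered in the study.
for v in (0.5, 6.0):
    print(v, mrea_force(v, f_yield=0.0, c_visc=400.0, c_minor=150.0))
```

    With these illustrative coefficients, the quadratic term contributes under a fifth of the off-state force at 0.5 m/s but more than double the viscous term at 6 m/s, which is why a purely linear off-state model under-predicts impact loads.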

  18. Longitudinal changes in cortical thickness in autism and typical development.

    Science.gov (United States)

    Zielinski, Brandon A; Prigge, Molly B D; Nielsen, Jared A; Froehlich, Alyson L; Abildskov, Tracy J; Anderson, Jeffrey S; Fletcher, P Thomas; Zygmunt, Kristen M; Travers, Brittany G; Lange, Nicholas; Alexander, Andrew L; Bigler, Erin D; Lainhart, Janet E

    2014-06-01

    The natural history of brain growth in autism spectrum disorders remains unclear. Cross-sectional studies have identified regional abnormalities in brain volume and cortical thickness in autism, although substantial discrepancies have been reported. Preliminary longitudinal studies using two time points and small samples have identified specific regional differences in cortical thickness in the disorder. To clarify age-related trajectories of cortical development, we examined longitudinal changes in cortical thickness within a large mixed cross-sectional and longitudinal sample of autistic subjects and age- and gender-matched typically developing controls. Three hundred and forty-five magnetic resonance imaging scans were examined from 97 males with autism (mean age = 16.8 years; range 3-36 years) and 60 males with typical development (mean age = 18 years; range 4-39 years), with an average interscan interval of 2.6 years. FreeSurfer image analysis software was used to parcellate the cortex into 34 regions of interest per hemisphere and to calculate mean cortical thickness for each region. Longitudinal linear mixed effects models were used to further characterize these findings and identify regions with between-group differences in longitudinal age-related trajectories. Using mean age at time of first scan as a reference (15 years), differences were observed in bilateral inferior frontal gyrus, pars opercularis and pars triangularis, right caudal middle frontal and left rostral middle frontal regions, and left frontal pole. However, group differences in cortical thickness varied by developmental stage, and were influenced by IQ. Differences in age-related trajectories emerged in bilateral parietal and occipital regions (postcentral gyrus, cuneus, lingual gyrus, pericalcarine cortex), left frontal regions (pars opercularis, rostral middle frontal and frontal pole), left supramarginal gyrus, and right transverse temporal gyrus, superior parietal lobule, and

  19. The emergence of typical entanglement in two-party random processes

    International Nuclear Information System (INIS)

    Dahlsten, O C O; Oliveira, R; Plenio, M B

    2007-01-01

    We investigate the entanglement within a system undergoing a random, local process. We find that there is initially a phase of very fast generation and spread of entanglement. At the end of this phase the entanglement is typically maximal. In Oliveira et al (2007 Phys. Rev. Lett. 98 130502) we proved that the maximal entanglement is reached to a fixed arbitrary accuracy within O(N³) steps, where N is the total number of qubits. Here we provide a detailed and more pedagogical proof. We demonstrate that one can use the so-called stabilizer gates to simulate this process efficiently on a classical computer. Furthermore, we discuss three ways of identifying the transition from the phase of rapid spread of entanglement to the stationary phase: (i) the time when saturation of the maximal entanglement is achieved, (ii) the cutoff moment, when the entanglement probability distribution is practically stationary, and (iii) the moment block entanglement exhibits volume scaling. We furthermore investigate the mixed state and multipartite setting. Numerically, we find that the mutual information appears to behave similarly to the quantum correlations and that there is a well-behaved phase-space flow of entanglement properties towards an equilibrium. We describe how the emergence of typical entanglement can be used to create a much simpler tripartite entanglement description. The results form a bridge between certain abstract results concerning typical (also known as generic) entanglement relative to an unbiased distribution on pure states and the more physical picture of distributions emerging from random local interactions.

  20. Review: typically-developing students' views and experiences of inclusive education.

    Science.gov (United States)

    Bates, Helen; McCafferty, Aileen; Quayle, Ethel; McKenzie, Karen

    2015-01-01

    The present review aimed to summarize and critique existing qualitative studies that have examined typically-developing students' views of inclusive education (i.e. the policy of teaching students with special educational needs in mainstream settings). Guidelines from the Centre for Reviews and Dissemination were followed, outlining the criteria by which journal articles were identified and critically appraised. Narrative Synthesis was used to summarize findings across studies. Fourteen studies met the review's inclusion criteria and were subjected to quality assessment. Analysis revealed that studies were of variable quality: three were of "good" methodological quality, seven of "medium" quality, and four of "poor" quality. With respect to findings, three overarching themes emerged: students expressed mostly negative attitudes towards peers with disabilities; were confused by the principles and practices of inclusive education; and made a number of recommendations for improving its future provision. A vital determinant of the success of inclusive education is the extent to which it is embraced by typically-developing students. Of concern, this review highlights that students tend not to understand inclusive education, and that this can breed hostility towards it. More qualitative research of high methodological quality is needed in this area. Implications for Rehabilitation Typically-developing students are key to the successful implementation of inclusive education. This review shows that most tend not to understand it, and can react by engaging in avoidance and/or targeted bullying of peers who receive additional support. Schools urgently need to provide teaching about inclusive education, and increase opportunities for contact between students who do and do not receive support (e.g. cooperative learning).

  1. A Perishable Inventory Model with Return

    Science.gov (United States)

    Setiawan, S. W.; Lesmono, D.; Limansyah, T.

    2018-04-01

    In this paper, we develop a mathematical model for a perishable inventory with return, assuming deterministic, inventory-dependent demand. By inventory-dependent demand, we mean that demand at a given time depends, at a certain rate, on the inventory available at that time. In dealing with perishable items, we must also consider a deterioration rate factor corresponding to the decreasing quality of the goods. The model involves purchasing, ordering, holding, shortage (backordering) and returning costs; together these compose the total cost that we want to minimize. In the model we seek the optimal return time and order quantity. We assume that after some period of time, called the return time, perishable items can be returned to the supplier at some returning cost, and the supplier then replaces them in the next delivery. Numerical experiments are given to illustrate the model, and a sensitivity analysis is performed as well. We found that as the deterioration rate increases, the return time becomes shorter while the optimal order quantity and total cost increase. When considering the inventory-dependent demand factor, we found that as this factor increases, for a given deterioration rate, the return time becomes shorter, the optimal order quantity becomes larger and the total cost increases.
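    The cost trade-off can be illustrated numerically (all parameters below are hypothetical, and the return mechanism is left out for brevity): simulate one replenishment cycle in which inventory is depleted by base demand, inventory-dependent demand and deterioration, then search for the order quantity that minimizes cost per unit time.

```python
# One cycle starts with Q units; dI/dt = -(d + b*I) - theta*I, integrated
# with a simple Euler scheme. Costs: ordering K per cycle, holding h per
# unit of inventory per unit time, purchasing c per unit ordered.
def cycle_cost_rate(Q, d=10.0, b=0.05, theta=0.10, K=100.0, h=0.5, c=2.0,
                    dt=0.001):
    inv, t, holding = float(Q), 0.0, 0.0
    while inv > 0:
        holding += inv * dt
        inv -= (d + (b + theta) * inv) * dt  # demand plus deterioration
        t += dt
    return (K + h * holding + c * Q) / t     # average cost per unit time

best_q = min(range(10, 200), key=cycle_cost_rate)
print(best_q)
```

    Re-running the search with a larger theta reproduces, for this simplified setting, the direction reported in the abstract's sensitivity analysis: faster deterioration shortens the cycle and tends to raise the cost rate.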

  2. Probabilistic Modelling of Information Propagation in Wireless Mobile Ad-Hoc Network

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Hansen, Martin Bøgsted; Schwefel, Hans-Peter

    2005-01-01

    In this paper the dynamics of broadcasting wireless ad-hoc networks is studied through probabilistic modelling. A randomized transmission discipline is assumed, in accordance with existing MAC definitions such as WLAN with Decentralized Coordination or IEEE 802.15.4. Message reception is assumed to be governed by node power-down policies and is equivalently assumed to be randomized. Altogether, randomization facilitates a probabilistic model in the shape of an integro-differential equation governing the propagation of information, where Brownian node mobility may be accounted for by including an extra diffusion term. The established model is analyzed for transient behaviour, and a travelling wave solution facilitates expressions for propagation speed as well as parametrized analysis of network reliability and node power consumption. Applications of the developed models for node localization and network...

  3. Genetic and environmental influences on female sexual orientation, childhood gender typicality and adult gender identity.

    Directory of Open Access Journals (Sweden)

    Andrea Burri

    BACKGROUND: Human sexual orientation is influenced by genetic and non-shared environmental factors, as are two important psychological correlates: childhood gender typicality (CGT) and adult gender identity (AGI). However, researchers have been unable to resolve the genetic and non-genetic components that contribute to the covariation between these traits, particularly in women. METHODOLOGY/PRINCIPAL FINDINGS: Here we performed a multivariate genetic analysis in a large sample of British female twins (N = 4,426) who completed a questionnaire assessing sexual attraction, CGT and AGI. Univariate genetic models indicated modest genetic influences on sexual attraction (25%), AGI (11%) and CGT (31%). For the multivariate analyses, a common pathway model best fitted the data. CONCLUSIONS/SIGNIFICANCE: This indicated that a single latent variable, influenced by a genetic component and a common non-shared environmental component, explained the association between the three traits, but there was substantial measurement error. These findings highlight common developmental factors affecting differences in sexual orientation.

  4. Longitudinal adaptation in language development: a study of typically-developing children and children with ASD

    DEFF Research Database (Denmark)

    Weed, Ethan; Fusaroli, Riccardo; Fein, Deborah

    Background: Children with Autism Spectrum Disorder (ASD) often display distinctive language development trajectories (Tek et al., 2013). Because language-learning is a social endeavor, these trajectories could be partially grounded in the dynamics that characterize the children's social and linguistic … previous behavior. In this study, we tested this model of mutual influence in a longitudinal corpus (6 visits over 2 years), consisting of 30 minutes of controlled playful activities between parents and 66 children (33 ASD and 33 matched typically developing (TD); Goodwin et al. 2012). Methods: We first … Results: Developmental trajectories: Our models described the developmental trajectories … ASD (β: from -1.14 to -0.86), with an interaction between the two (children with ASD showed shallower trajectories, β: -2.43 to -1 …

  5. Effects of snow grain shape on climate simulations: sensitivity tests with the Norwegian Earth System Model

    Directory of Open Access Journals (Sweden)

    P. Räisänen

    2017-12-01

    Full Text Available Snow consists of non-spherical grains of various shapes and sizes. Still, in radiative transfer calculations, snow grains are often treated as spherical. This also applies to the computation of snow albedo in the Snow, Ice, and Aerosol Radiation (SNICAR) model and in the Los Alamos sea ice model, version 4 (CICE4), both of which are employed in the Community Earth System Model and in the Norwegian Earth System Model (NorESM). In this study, we evaluate the effect of snow grain shape on climate simulated by NorESM in a slab ocean configuration of the model. An experiment with spherical snow grains (SPH) is compared with another (NONSPH) in which the snow shortwave single-scattering properties are based on a combination of three non-spherical snow grain shapes optimized using measurements of angular scattering by blowing snow. The key difference between these treatments is that the asymmetry parameter is smaller in the non-spherical case (0.77–0.78 in the visible region) than in the spherical case (≈ 0.89). Therefore, for the same effective snow grain size (or, equivalently, the same specific projected area), the snow broadband albedo is higher when assuming non-spherical rather than spherical snow grains, typically by 0.02–0.03. Considering the spherical case as the baseline, this results in an instantaneous negative change in net shortwave radiation with a global-mean top-of-the-model value of ca. −0.22 W m−2. Although this global-mean radiative effect is rather modest, the impacts on the climate simulated by NorESM are substantial. The global annual-mean 2 m air temperature in NONSPH is 1.17 K lower than in SPH, with substantially larger differences at high latitudes. The climatic response is amplified by strong snow and sea ice feedbacks. It is further demonstrated that the effect of snow grain shape could be largely offset by adjusting the snow grain size. When assuming non-spherical snow grains with the parameterized grain …

  6. Suicide ideation and attempts in children with psychiatric disorders and typical development.

    Science.gov (United States)

    Dickerson Mayes, Susan; Calhoun, Susan L; Baweja, Raman; Mahr, Fauzia

    2015-01-01

    Children and adolescents with psychiatric disorders are at increased risk for suicide behavior. This is the first study to compare frequencies of suicide ideation and attempts in children and adolescents with specific psychiatric disorders and typical children while controlling for comorbidity and demographics. Mothers rated the frequency of suicide ideation and attempts in 1,706 children and adolescents with psychiatric disorders and typical development, 6-18 years of age. For the typical group, 0.5% had suicide behavior (ideation or attempts), versus 24% across the psychiatric groups (bulimia 48%, depression or anxiety disorder 34%, oppositional defiant disorder 33%, ADHD-combined type 22%, anorexia 22%, autism 18%, intellectual disability 17%, and ADHD-inattentive type 8%). Most alarming, 29% of adolescents with bulimia often or very often had suicide attempts, compared with 0-4% of patients in the other psychiatric groups. It is important for professionals to routinely screen all children and adolescents who have psychiatric disorders for suicide ideation and attempts and to treat the underlying psychiatric disorders that increase suicide risk.

  7. What is the appropriate counterfactual when estimating effects of multilateral trade policy reform?

    DEFF Research Database (Denmark)

    Anderson, Kym; Jensen, Hans Grinsted; Nelgen, Signe

    2016-01-01

    … the counterfactual price distortions in 2030 are shown to be much larger in the case where agricultural protection grows endogenously than in the case assuming no policy changes over the projection period. This suggests the traditional way of estimating effects of a multilateral agricultural trade agreement may … of the DDA’s possible effects thus requires first modelling the world economy to 2030 and, in that process, projecting what trade-related policies might be by then without a DDA. Typically, modelers assume the counterfactual policy regime to be a ‘business-as-usual’ projection assuming the status quo. Yet we … by projecting the world economy to 2030 using the Global Trade Analysis Project (GTAP) model with those two alternative policy regimes and then simulating a move to global free trade (the maximum benefit from a multilateral trade reform) in each of those two cases. The welfare effects of removing …

  8. Modeling and Performance of Bonus-Malus Systems: Stationarity versus Age-Correction

    Directory of Open Access Journals (Sweden)

    Søren Asmussen

    2014-03-01

    Full Text Available In a bonus-malus system in car insurance, the bonus class of a customer is updated from one year to the next as a function of the current class and the number of claims in the year (assumed Poisson). Thus the sequence of classes of a customer in consecutive years forms a Markov chain, and most of the literature measures performance of the system in terms of the stationary characteristics of this Markov chain. However, the rate of convergence to stationarity may be slow in comparison to the typical sojourn time of a customer in the portfolio. We suggest an age-correction to the stationary distribution and present an extensive numerical study of its effects. An important feature of the modeling is a Bayesian view, where the Poisson rate according to which claims are generated for a customer is the outcome of a random variable specific to the customer.
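
    The age-correction described above can be sketched numerically. The sketch below uses an invented three-class system, claim rate and retention probability; the paper's actual bonus-malus rules and Bayesian claim-rate model are richer:

```python
import numpy as np

lam = 0.1   # Poisson claim rate per year (assumed value)
p0 = np.exp(-lam)            # probability of a claim-free year
n = 3                        # bonus classes 0..2 (hypothetical small system)

# Illustrative updating rule: a claim-free year moves the customer one
# class up (capped at the top), any claim sends them back to class 0.
P = np.zeros((n, n))
for i in range(n):
    P[i, min(i + 1, n - 1)] += p0
    P[i, 0] += 1.0 - p0

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

# Age-correction: instead of the stationary law, average the k-year
# state distributions over a geometric sojourn time in the portfolio.
q = 0.9                      # per-year retention probability (assumed)
state = np.zeros(n); state[0] = 1.0   # new customers enter class 0
corrected = np.zeros(n)
for k in range(200):
    corrected += (1.0 - q) * q**k * state
    state = state @ P
```

    With these illustrative numbers the age-corrected distribution puts noticeably more mass on the lower classes than the stationary law does, which is the effect the correction is designed to capture.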

  9. Multilayer Stochastic Block Models Reveal the Multilayer Structure of Complex Networks

    Directory of Open Access Journals (Sweden)

    Toni Vallès-Català

    2016-03-01

    Full Text Available In complex systems, the network of interactions we observe between system components is the aggregate of the interactions that occur through different mechanisms or layers. Recent studies reveal that the existence of multiple interaction layers can have a dramatic impact on the dynamical processes occurring on these systems. However, these studies assume that the interactions between system components in each one of the layers are known, while typically for real-world systems we do not have that information. Here, we address the issue of uncovering the different interaction layers from aggregate data by introducing multilayer stochastic block models (SBMs), a generalization of single-layer SBMs that considers different mechanisms of layer aggregation. First, we find the complete probabilistic solution to the problem of finding the optimal multilayer SBM for a given aggregate-observed network. Because this solution is computationally intractable, we propose an approximation that enables us to verify that multilayer SBMs are more predictive of network structure in real-world complex systems.
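
    The identifiability problem the authors address can be seen in a toy example (all matrices invented): two different pairs of layers can produce exactly the same aggregate network under OR-aggregation, so the layers cannot be read off the observed network without a model of the aggregation mechanism.

```python
import numpy as np

# Hypothetical example: 4 nodes, two interaction layers (values invented).
A = np.array([[0, 1, 0, 0],   # layer 1: edges 0-1 and 2-3
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
B = np.array([[0, 0, 1, 0],   # layer 2: edges 0-2 and 1-3
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

# A common aggregation mechanism: an edge is observed whenever it exists
# in at least one layer (logical OR).
observed = np.maximum(A, B)

# A genuinely different pair of layers yields the very same aggregate:
A2 = np.array([[0, 1, 1, 0],  # edges 0-1 and 0-2
               [1, 0, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0]])
B2 = np.array([[0, 0, 0, 0],  # edges 1-3 and 2-3
               [0, 0, 0, 1],
               [0, 0, 0, 1],
               [0, 1, 1, 0]])
same_aggregate = np.array_equal(np.maximum(A2, B2), observed)
```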

  10. Modelling natural convection in a heated vertical channel for room ventilation

    International Nuclear Information System (INIS)

    Rodrigues, A.M.; Canha da Piedade, A.; Lahellec, A.; Grandpeix, J.Y.

    2000-01-01

    Solar-air collectors installed on the south-facing walls of school buildings have been tried out in Portugal as a passive means of improving indoor air quality without prejudice to thermal comfort requirements. A numerical investigation of the behaviour of these systems, typified as vertical channels opened at both ends, is presented for typical geometries and outdoor conditions. The study is carried out with natural convection and assumes that the induced flow is turbulent and two-dimensional. The fully averaged equations of motion and energy, added to a two-equation turbulence model, are discretized and solved following the concepts of TEF (Transfer Evolution Formalism) using a finite volume method. Flow and temperature fields are produced and results presented in terms of temperature and velocity distributions at the exit section of the duct. These enable a better understanding of the developing flow and can be helpful in the design phase of this type of system. (author)

  11. The transition model test for serial dependence in mixed-effects models for binary data

    DEFF Research Database (Denmark)

    Breinegaard, Nina; Rabe-Hesketh, Sophia; Skrondal, Anders

    2017-01-01

    Generalized linear mixed models for longitudinal data assume that responses at different occasions are conditionally independent, given the random effects and covariates. Although this assumption is pivotal for consistent estimation, violation due to serial dependence is hard to assess by model...

  12. A dynamic birth-death model via Intrinsic Linkage

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2013-05-01

    Full Text Available BACKGROUND Dynamic population models, or models with changing vital rates, are only beginning to receive serious attention from mathematical demographers. Despite considerable progress, there is still no general analytical solution for the size or composition of a population generated by an arbitrary sequence of vital rates. OBJECTIVE The paper introduces a new approach, Intrinsic Linkage, that in many cases can analytically determine the birth trajectory of a dynamic birth-death population. METHODS Intrinsic Linkage assumes a weighted linear relationship between (i) the time trajectory of proportional increases in births in a population and (ii) the trajectory of the intrinsic rates of growth of the projection matrices that move the population forward in time. Flexibility is provided through the choice of the weighting parameter, w, that links these two trajectories. RESULTS New relationships are found linking implied intrinsic and observed population patterns of growth. Past experience is "forgotten" through a process of simple exponential decay. When the intrinsic growth rate trajectory follows a polynomial, exponential, or cyclical pattern, the population birth trajectory can be expressed analytically in closed form. Numerical illustrations provide population values and relationships in metastable and cyclically stable models. Plausible projection matrices are typically found for a broad range of values of w, although w appears to vary greatly over time in actual populations. CONCLUSIONS The Intrinsic Linkage approach extends current techniques for dynamic modeling, revealing new relationships between population structures and the changing vital rates that generate them.
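
    One hedged reading of the linkage (a sketch, not the paper's exact equations): write g(t) for the proportional increase in births and r(t) for the intrinsic growth rate of the year-t projection matrix, and link them by g(t) = w·r(t) + (1-w)·g(t-1). Expanding the recursion shows past intrinsic rates entering with geometrically decaying weights, matching the "forgetting by simple exponential decay" described above. The weight and rate values below are invented for illustration.

```python
# Illustrative Intrinsic Linkage recursion (an assumed reading):
#   g(t) = w*r(t) + (1-w)*g(t-1),
# so past intrinsic growth rates are forgotten at geometric rate (1-w).
w = 0.3                               # linkage weight (assumed)
r = [0.02, 0.015, 0.01, 0.005, 0.0]   # intrinsic growth rates (assumed)

g = [r[0]]                            # start the birth-growth trajectory at r(0)
for t in range(1, len(r)):
    g.append(w * r[t] + (1 - w) * g[t - 1])

# Closed form of the same recursion:
#   g(t) = (1-w)^t * g(0) + w * sum_{k=1..t} (1-w)^(t-k) * r(k)
t = len(r) - 1
closed = (1 - w) ** t * g[0] + w * sum(
    (1 - w) ** (t - k) * r[k] for k in range(1, t + 1))
```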

  13. The modelling of countermeasures in COSYMA

    International Nuclear Information System (INIS)

    Burkart, K.; Hasemann, I.

    1991-01-01

    The modelling of countermeasures in COSYMA has been extended in comparison to UFOMOD and MARC in order to allow the user considerable freedom in specifying a wide range of emergency actions and the criteria at which these actions will be assumed to be imposed and withdrawn, so that most of the recommendations and criteria adopted in different countries inside and outside the EC can be modelled. Countermeasures are implemented with the aim of reducing either acute exposure during and shortly after the accident or continuing and long-term exposure due to deposited or incorporated radionuclides. Different countermeasures may be necessary in different areas or even in the same area (e.g. sheltering followed by evacuation). Therefore not just several types of individual protective actions but different combinations thereof are modelled in COSYMA. Each of the individual protective actions may be assumed to have a large variety of possible features characterized by parameters with user-defined values. Some of the countermeasures can be assumed to be initiated automatically in a certain area; others are defined on the basis of dose criteria.

  14. A study on prioritizing typical women’s entrepreneur characteristics

    Directory of Open Access Journals (Sweden)

    Ebrahim Ramezani

    2014-07-01

    Full Text Available Entrepreneurship is one of the main pivots of the progress and growth of every country. The spread of entrepreneurship, and particularly the role of women in it, has accelerated today more than ever. Many researchers believe that attention to women's entrepreneurship plays a remarkable role in the soundness and safety of a nation's economy. In Iran, less attention has been paid to this matter than in other countries and, for various reasons, there are not many women entrepreneurs. However, the employment of typical entrepreneur women in various productive, industrial, commercial, social and cultural fields, and even beyond these in the country's political issues, proves that women's role is significant and that in many cases they show higher abilities than men. In this paper, using additive ratio assessment (ARAS) as a prioritizing method, eleven entrepreneur women were chosen to prioritize the criteria for measuring a typical woman entrepreneur's characteristics. The results show that, among the criteria, balance between work and family received the highest weight and fulfilling different jobs simultaneously the lowest.

  15. Successes and failures of the constituent quark model

    International Nuclear Information System (INIS)

    Lipkin, H.J.

    1982-01-01

    Our approach considers the model as a possible bridge between QCD and the experimental data and examines its predictions to see where these succeed and where they fail. We also attempt to improve the model by looking for additional simple assumptions which give better fits to the experimental data. But we avoid complicated models with too many ad hoc assumptions and too many free parameters; these can fit everything but teach us nothing. We define our constituent quark model by analogy with the constituent electron model of the atom and the constituent nucleon model of the nucleus. In the same way that an atom is assumed to consist only of constituent electrons and a central Coulomb field, and a nucleus only of constituent nucleons, hadrons are assumed to consist only of their constituent valence quarks, with no bag, no glue, no ocean, nor other constituents. Although these constituent models are oversimplified and neglect other constituents, we push them as far as we can. Atomic physics has photons and vacuum polarization as well as constituent electrons, but the constituent model is adequate for calculating most features of the spectrum when finer details like the Lamb shift are neglected. 54 references

  16. Heat transport modelling in EXTRAP T2R

    Science.gov (United States)

    Frassinetti, L.; Brunsell, P. R.; Cecconello, M.; Drake, J. R.

    2009-02-01

    A model to estimate the heat transport in the EXTRAP T2R reversed field pinch (RFP) is described. The model, based on experimental and theoretical results, divides the RFP electron heat diffusivity χe into three regions: one in the plasma core, where χe is assumed to be determined by the tearing modes; one located around the reversal radius, where χe is assumed not to depend on the magnetic fluctuations; and one in the extreme edge, where high χe is assumed. The absolute values of the core and reversal χe are determined by simulating the electron temperature and the soft x-ray signals and by comparing the simulated signals with the experimental ones. The model is used to estimate the heat diffusivity and the energy confinement time during the flat top of standard plasmas, of deep F plasmas and of plasmas obtained with the intelligent shell.
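
    The three-region division of χe can be written down as a simple piecewise profile. The boundary radii and magnitudes below are placeholders, not the EXTRAP T2R values (those are fixed by matching simulated temperature and soft x-ray signals to experiment):

```python
# Sketch of a three-region electron heat diffusivity profile. All
# numerical values (region boundaries and magnitudes) are assumed for
# illustration only.
def chi_e(r, a=1.0, chi_core=50.0, chi_rev=5.0, chi_edge=200.0):
    """Electron heat diffusivity [m^2/s] at radius r, minor radius a."""
    x = r / a
    if x < 0.8:       # core: transport governed by tearing modes
        return chi_core
    elif x < 0.95:    # around the reversal radius: fluctuation-independent
        return chi_rev
    else:             # extreme edge: high diffusivity assumed
        return chi_edge
```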

  17. Receptor imaging of schizophrenic patients under treatment with typical and atypical neuroleptics

    International Nuclear Information System (INIS)

    Dresel, S.; Tatsch, K.; Meisenzahl, E.; Scherer, J.

    2002-01-01

    Schizophrenic psychosis is typically treated with typical and atypical neuroleptics. The two groups of drugs differ with regard to the induction of extrapyramidal side effects. The occupancy of postsynaptic dopaminergic D2 receptors is considered to be an essential aspect of their antipsychotic properties. The dopamine D2 receptor status can be assessed by means of [I-123]IBZM SPECT. Studies on the typical neuroleptic haloperidol revealed an exponential dose-response relationship measured by IBZM. All patients with specific IBZM binding below a threshold of 0.4 (norm value: >0.95) presented extrapyramidal side effects, with one exception. Under treatment with the atypical neuroleptic clozapine an exponential dose-response relationship was also found; however, none of these patients showed extrapyramidal side effects. Recently introduced atypical neuroleptics such as risperidone and olanzapine again presented an exponential relationship between daily dose and IBZM binding, with curves lying between those of haloperidol and clozapine. Extrapyramidal side effects were documented in fewer patients treated with risperidone than with haloperidol; for olanzapine, only one patient in our own group revealed these findings. The pharmacological profile of atypical neuroleptics shows, in addition to their binding to dopamine receptors, high affinities for the receptors of other neurotransmitter systems, particularly the serotonergic system. Therefore, the lower incidence of extrapyramidal side effects seen with atypical in comparison to typical neuroleptics is at least in part most likely due to a complex interaction with a variety of neurotransmitter systems. (orig.)

  18. Jump Model / Comparability Ratio Model — Joinpoint Help System 4.4.0.0

    Science.gov (United States)

    The Jump Model / Comparability Ratio Model in the Joinpoint software provides a direct estimation of trend data (e.g. cancer rates) where there is a systematic scale change, which causes a “jump” in the rates, but is assumed not to affect the underlying trend.
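
    A minimal numerical sketch of the idea, with invented rates: a log-linear trend with a known multiplicative scale change at year t0 can be fitted with an indicator term, so the jump is absorbed and the underlying trend estimate is unaffected.

```python
import numpy as np

# Toy rates: log-linear trend plus a multiplicative "jump" at t0 that
# leaves the slope untouched. All numbers are invented for illustration.
years = np.arange(2000, 2020)
slope, intercept, t0, jump = 0.02, 1.0, 2010, 1.15
rates = np.exp(intercept + slope * (years - 2000)) * np.where(years >= t0, jump, 1.0)

# Fit log(rate) = a + b*(year - 2000) + c*1[year >= t0]; the indicator
# absorbs the scale change, so b recovers the underlying trend.
X = np.column_stack([np.ones_like(years, dtype=float),
                     (years - 2000).astype(float),
                     (years >= t0).astype(float)])
coef, *_ = np.linalg.lstsq(X, np.log(rates), rcond=None)
a_hat, b_hat, c_hat = coef   # exp(c_hat) estimates the jump factor
```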

  19. Model for the sulfidation of calcined limestone and its use in reactor models.

    NARCIS (Netherlands)

    Heesink, Albertus B.M.; Brilman, Derk Willem Frederik; van Swaaij, Willibrordus Petrus Maria

    1998-01-01

    A mathematical model describing the sulfidation of a single calcined limestone particle was developed and experimentally verified. This model, which includes no fitting parameters, assumes a calcined limestone particle to consist of spherical grains of various sizes that react with H2S according to

  20. Typicality aids search for an unspecified target, but only in identification and not in attentional guidance.

    Science.gov (United States)

    Castelhano, Monica S; Pollatsek, Alexander; Cave, Kyle R

    2008-08-01

    Participants searched for a picture of an object, and the object was either a typical or an atypical category member. The object was cued by either the picture or its basic-level category name. Of greatest interest was whether it would be easier to search for typical objects than to search for atypical objects. The answer was "yes," but only in a qualified sense: There was a large typicality effect on response time only for name cues, and almost none of the effect was found in the time to locate (i.e., first fixate) the target. Instead, typicality influenced verification time: the time to respond to the target once it was fixated. Typicality is thus apparently irrelevant when the target is well specified by a picture cue; even when the target is underspecified (as with a name cue), it does not aid attentional guidance, but only facilitates categorization.

  1. Health-Related Quality of Life in Children Attending Special and Typical Education Greek Schools

    Science.gov (United States)

    Papadopoulou, D.; Malliou, P.; Kofotolis, N.; Vlachopoulos, S. P.; Kellis, E.

    2017-01-01

    The purpose of this study was to examine parental perceptions about Health Related Quality of Life (HRQoL) of typical education and special education students in Greece. The Pediatric Quality of Life Inventory (PedsQL) was administered to the parents of 251 children from typical schools, 46 students attending integration classes (IC) within a…

  2. Consequences of kriging and land use regression for PM2.5 predictions in epidemiologic analyses: insights into spatial variability using high-resolution satellite data.

    Science.gov (United States)

    Alexeeff, Stacey E; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A

    2015-01-01

    Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in the resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km × 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R^2 yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with >0.9 out-of-sample R^2 yielded upward biases of up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in the chronic effect simulations. These results can help researchers when interpreting health effect estimates in these types of studies.
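
    The downward-bias case can be reproduced with a stylized simulation of the same design: fit the health model with an error-prone exposure prediction in place of the true exposure. This sketch uses simple classical-type error (the study itself uses a realistic satellite-derived exposure surface and finds biases in both directions):

```python
import numpy as np

# Stylized version of the simulation design: health model y = beta*x + noise,
# analysed with an error-prone exposure prediction w instead of the true x.
# All parameter values are invented for illustration.
rng = np.random.default_rng(42)
n, beta = 100_000, 1.0
x = rng.normal(0.0, 1.0, n)            # true exposure (stylized surface)
y = beta * x + rng.normal(0.0, 1.0, n)

# Classical-type prediction error, analogous to a low out-of-sample R^2:
w = x + rng.normal(0.0, 1.0, n)        # exposure prediction, error variance 1

beta_true = np.polyfit(x, y, 1)[0]     # recovers ~ beta
beta_err = np.polyfit(w, y, 1)[0]      # attenuated toward ~ beta/2
```

    Here the slope estimated from the error-prone exposure is attenuated by roughly var(x) / (var(x) + var(error)) = 0.5, the classical-error attenuation factor.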

  3. Adolescent alcohol exposure and persistence of adolescent-typical phenotypes into adulthood: a mini-review

    Science.gov (United States)

    Spear, Linda Patia; Swartzwelder, H. Scott

    2014-01-01

    Alcohol use is typically initiated during adolescence, which, along with young adulthood, is a vulnerable period for the onset of high-risk drinking and alcohol abuse. Given across-species commonalities in certain fundamental neurobehavioral characteristics of adolescence, studies in laboratory animals such as the rat have proved useful to assess persisting consequences of repeated alcohol exposure. Despite limited research to date, reports of long-lasting effects of adolescent ethanol exposure are emerging, along with certain common themes. One repeated finding is that adolescent exposure to ethanol sometimes results in the persistence of adolescent-typical phenotypes into adulthood. Instances of adolescent-like persistence have been seen in terms of baseline behavioral, cognitive, electrophysiological and neuroanatomical characteristics, along with the retention of adolescent-typical sensitivities to acute ethanol challenge. These effects are generally not observed after comparable ethanol exposure in adulthood. Persistence of adolescent-typical phenotypes is not always evident, and may be related to regionally-specific ethanol influences on the interplay between CNS excitation and inhibition critical for the timing of neuroplasticity. PMID:24813805

  4. Would Outsourcing Increase or Decrease Wage Inequality? Two Models, Two Answers

    OpenAIRE

    Wenli Cheng; Dingsheng Zhang

    2005-01-01

    This paper develops two models to study the impact of outsourcing on wage inequality between skilled and unskilled labor in the developed country and the developing country. The first model assumes symmetric production technologies in both countries, and predicts that outsourcing will increase wage inequality in the developed country, but decrease wage inequality in the developing country. The second model assumes asymmetric technologies in the production of the intermediate good and predicts...

  5. Profitability of labour factor in the typical dairy farms in the world

    Directory of Open Access Journals (Sweden)

    Andrzej Parzonko

    2009-01-01

    Full Text Available The main purpose of the article was to analyse the productivity and profitability of the labour factor and to present the asset endowments of the typical dairy farms distinguished within IFCN (International Farm Comparison Network). Among the 103 typical dairy farms analysed from 34 countries, the highest net dairy farm profit was achieved by large farms from the USA, Australia and New Zealand. Those farms also generated a significantly higher profit per working hour than the potential wages that could be earned outside the farm. The highest asset value per 100 kg of produced milk characterised European farms (especially those with a low production scale).

  6. The origin of extended disc galaxies at z=2

    NARCIS (Netherlands)

    Sales, Laura V.; Navarro, Julio F.; Schaye, Joop; Dalla Vecchia, Claudio; Springel, Volker; Haas, Marcel R.; Helmi, Amina

    2009-01-01

    Galaxy formation models typically assume that the size and rotation speed of galaxy discs are largely dictated by the mass, concentration and spin of their surrounding dark matter haloes. Equally important, however, is the fraction of baryons in the halo that collect into the central galaxy, as well

  7. On the formation of dense understory layers in forests worldwide: consequences and implications for forest dynamics, biodiversity, and succession

    Science.gov (United States)

    Alejandro A. Royo; Walter P. Carson

    2006-01-01

    The mechanistic basis underpinning forest succession is the gap-phase paradigm in which overstory disturbance interacts with seedling and sapling shade tolerance to determine successional trajectories. The theory, and ensuing simulation models, typically assume that understory plants have little impact on the advance regeneration layer's composition. We challenge...

  8. A hybrid plume model for local-scale dispersion

    Energy Technology Data Exchange (ETDEWEB)

    Nikmo, J.; Tuovinen, J.P.; Kukkonen, J.; Valkama, I.

    1997-12-31

    The report describes the contribution of the Finnish Meteorological Institute to the project 'Dispersion from Strongly Buoyant Sources', under the 'Environment' programme of the European Union. The project addresses the atmospheric dispersion of gases and particles emitted from typical fires in warehouses and chemical stores. In the study only the 'passive plume' regime, in which the influence of plume buoyancy is no longer important, is addressed. The mathematical model developed and its numerical testing are discussed. The model is based on atmospheric boundary-layer scaling theory. In the vicinity of the source, Gaussian equations are used in both the horizontal and vertical directions. After a specified transition distance, gradient transfer theory is applied in the vertical direction, while the horizontal dispersion is still assumed to be Gaussian. The dispersion parameters and eddy diffusivity are modelled in a form which facilitates the use of a meteorological pre-processor. Also a new model for the vertical eddy diffusivity (K_z), which is a continuous function of height in the various atmospheric scaling regions, is presented. The model includes a treatment of the dry deposition of gases and particulate matter, but wet deposition has been neglected. A numerical solver for the atmospheric diffusion equation (ADE) has been developed. The accuracy of the numerical model was analysed by comparing the model predictions with two analytical solutions of the ADE. The numerical deviations of the model predictions from these analytic solutions were less than two per cent for the computational regime. The report gives numerical results for the vertical profiles of the eddy diffusivity and the dispersion parameters, and shows spatial concentration distributions in various atmospheric conditions. 39 refs.
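
    For the near-source regime described above, the standard Gaussian-plume form is C(x, y, z) = Q / (2π u σy σz) · exp(−y²/2σy²) · [exp(−(z−H)²/2σz²) + exp(−(z+H)²/2σz²)], where the second vertical term accounts for ground reflection. The power-law σy and σz below are invented placeholders; the report instead derives the dispersion parameters from boundary-layer scaling via a meteorological pre-processor:

```python
import numpy as np

def concentration(x, y, z, Q=1.0, u=5.0, H=20.0):
    """Gaussian plume concentration at downwind distance x [m], crosswind
    offset y [m], height z [m], for source strength Q [kg/s], wind speed
    u [m/s] and effective release height H [m]."""
    sigma_y = 0.08 * x**0.9    # assumed power-law dispersion parameters,
    sigma_z = 0.06 * x**0.85   # standing in for the pre-processor values
    return (Q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))      # direct plume
               + np.exp(-(z + H)**2 / (2.0 * sigma_z**2))))  # ground reflection
```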

  9. Memory for radio advertisements: the effect of program and typicality.

    Science.gov (United States)

    Martín-Luengo, Beatriz; Luna, Karlos; Migueles, Malen

    2013-01-01

    We examined the influence of the type of radio program on the memory for radio advertisements. We also investigated the role in memory of the typicality (high or low) of the elements of the products advertised. Participants listened to three types of programs (interesting, boring, enjoyable) with two advertisements embedded in each. After completing a filler task, the participants performed a true/false recognition test. Hits and false alarm rates were higher for the interesting and enjoyable programs than for the boring one. There were also more hits and false alarms for the high-typicality elements. The response criterion for the advertisements embedded in the boring program was stricter than for the advertisements in other types of programs. We conclude that the type of program in which an advertisement is inserted and the nature of the elements of the advertisement affect both the number of hits and false alarms and the response criterion, but not the accuracy of the memory.

  10. Daily intakes of naturally occurring radioisotopes in typical Korean foods

    International Nuclear Information System (INIS)

    Choi, Min-Seok; Lin Xiujing; Lee, Sun Ah; Kim, Wan; Kang, Hee-Dong; Doh, Sih-Hong; Kim, Do-Sung; Lee, Dong-Myung

    2008-01-01

    The concentrations of naturally occurring radioisotopes (232Th, 228Th, 230Th, 228Ra, 226Ra, and 40K) in typical Korean foods were evaluated. The daily intakes of these radioisotopes were calculated by combining the concentrations in typical Korean foods with the daily consumption rates of these foods. Daily intakes were as follows: 232Th, 0.00-0.23; 228Th, 0.00-2.04; 230Th, 0.00-0.26; 228Ra, 0.02-2.73; 226Ra, 0.01-4.37 mBq/day; and 40K, 0.01-5.71 Bq/day. The total daily intake of the naturally occurring radioisotopes measured in this study from food was 39.46 Bq/day. The total annual internal dose resulting from ingestion of radioisotopes in food was 109.83 μSv/y, and the radioisotope with the highest daily intake was 40K. These values are at the same level as those compiled in other countries.
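
    The step from daily intake to annual internal dose is intake × 365 × ingestion dose coefficient. A sketch for the dominant contributor, 40K, using the ICRP adult ingestion coefficient (treat the coefficient as indicative; the paper's 109.83 μSv/y total sums all isotopes over the full diet):

```python
# Annual committed effective dose from a chronic daily intake:
#   dose [Sv/y] = intake [Bq/day] * 365 [day/y] * DCF [Sv/Bq]
DCF_K40 = 6.2e-9           # ingestion dose coefficient for K-40, adults (ICRP)
daily_intake_K40 = 5.71    # Bq/day, upper end of the range reported above

annual_dose_uSv = daily_intake_K40 * 365.0 * DCF_K40 * 1.0e6  # microsieverts/y
```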

  11. Verbal communication skills in typical language development: a case series.

    Science.gov (United States)

    Abe, Camila Mayumi; Bretanha, Andreza Carolina; Bozza, Amanda; Ferraro, Gyovanna Junya Klinke; Lopes-Herrera, Simone Aparecida

    2013-01-01

    The aim of the current study was to investigate verbal communication skills in children with typical language development aged between 6 and 8 years. Participants were 10 children of both genders in this age range without language alterations. A 30-minute video of each child's interaction with an adult (father and/or mother) was recorded, fully transcribed, and analyzed by two trained researchers in order to determine reliability. The recordings were analyzed according to a protocol that categorizes verbal communicative abilities, including dialogic, regulatory, narrative-discursive, and non-interactive skills. The frequency of use of each category of verbal communicative ability was analyzed (in percentage) for each subject. All subjects used more dialogic and regulatory skills, followed by narrative-discursive and non-interactive skills. This suggests that children in this age range are committed to continuing dialogue, which shows that children with typical language development have more dialogic interactions during spontaneous interactions with a familiar adult.

  12. Clinical correlates of parenting stress in children with Tourette syndrome and in typically developing children.

    Science.gov (United States)

    Stewart, Stephanie B; Greene, Deanna J; Lessov-Schlaggar, Christina N; Church, Jessica A; Schlaggar, Bradley L

    2015-05-01

    To determine the impact of tic severity in children with Tourette syndrome on parenting stress, and the impact of comorbid attention-deficit hyperactivity disorder (ADHD) and obsessive-compulsive disorder (OCD) symptomatology on parenting stress, in both children with Tourette syndrome and typically developing children. Children with diagnosed Tourette syndrome (n=74) and tic-free typically developing control subjects (n=48) were enrolled in a cross-sectional study. Parenting stress was greater in the group with Tourette syndrome than in the typically developing group. Increased levels of parenting stress were related to increased ADHD symptomatology in both children with Tourette syndrome and typically developing children. Symptomatology of OCD was correlated with parenting stress in Tourette syndrome. Parenting stress was independent of tic severity in patients with Tourette syndrome. For parents of children with Tourette syndrome, parenting stress appears to be related to the child's ADHD and OCD comorbidity and not to the severity of the child's tics. Subthreshold ADHD symptomatology also appears to be related to parenting stress in parents of typically developing children. These findings demonstrate that ADHD symptomatology impacts parental stress in children both with and without a chronic tic disorder. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    agricultural produce, as well as estimates of whole body concentrations. The observed data for the quantities of interest were typically summarised in the form of a 95% confidence interval for the mean, where the underlying distribution of the observations was assumed to be log-normal. Within the VAMP project, two sets of model predictions were provided, but in this work, only the final predictions have been used. The model predictions have not been assumed to have log-normal distributions. It is of interest to note that there was a considerable reduction in the range of the final set of model predictions compared to the initial set. Predictions were provided from 11 different models and some of these have been used in this analysis. Unfortunately, no estimates of the uncertainties in the model predictions were given.
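A 95% confidence interval for the mean of log-normally distributed observations can be constructed in several ways; the sketch below uses Cox's method, one common choice. The abstract does not specify which construction was used, and the data values are invented for illustration.

```python
import math
from statistics import mean, stdev

def lognormal_mean_ci(observations, z=1.96):
    """Approximate 95% CI for the mean of log-normal data (Cox's method).
    Work on the log scale, centre on log(mean) = y_bar + s^2/2, and add a
    half-width that accounts for uncertainty in both y_bar and s^2."""
    logs = [math.log(x) for x in observations]
    n = len(logs)
    y_bar, s = mean(logs), stdev(logs)
    centre = y_bar + s**2 / 2  # log of the log-normal mean
    half = z * math.sqrt(s**2 / n + s**4 / (2 * (n - 1)))
    return math.exp(centre - half), math.exp(centre + half)

# Invented whole-body concentration measurements (arbitrary units).
lo, hi = lognormal_mean_ci([120.0, 95.0, 210.0, 150.0, 88.0, 130.0])
```

Exponentiating at the end keeps the interval on the original measurement scale and, unlike a naive interval on the raw data, it can never extend below zero.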

  14. A fuzzy mathematical model of West Java population with logistic growth model

    Science.gov (United States)

    Nurkholipah, N. S.; Amarti, Z.; Anggriani, N.; Supriatna, A. K.

    2018-03-01

    In this paper we develop a mathematical model of population growth in the West Java Province, Indonesia. The model takes the form of a logistic differential equation. We parameterize the model using several triples of data and choose the best triple, namely the one with the smallest Mean Absolute Percentage Error (MAPE). The resulting model is able to predict the historical data with high accuracy and is also able to predict the future population size. Predicting the future population is among the important inputs to preparing good management of the population. Several experiments were performed to examine the effect of impreciseness in the data. This was done by considering a fuzzy initial value for the crisp model, assuming that the model propagates the fuzziness of the independent variable to the dependent variable. We assume here a triangular fuzzy number representing the impreciseness in the data. We found that the fuzziness may disappear in the long term. Other scenarios were also investigated, such as the effect of fuzzy parameters on the crisp initial value of the population. The solution of the model is obtained numerically using the fourth-order Runge-Kutta scheme.
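The numerical scheme described above can be sketched directly: a fourth-order Runge-Kutta integrator for the logistic equation, with a triangular fuzzy initial value propagated by integrating its lower, modal, and upper points separately. The growth rate, carrying capacity, and fuzzy triple below are assumptions for illustration, not the paper's fitted West Java values.

```python
def logistic_rhs(P, r, K):
    """Logistic growth: dP/dt = r * P * (1 - P / K)."""
    return r * P * (1.0 - P / K)

def rk4(P0, r, K, t_end, dt=0.1):
    """Fourth-order Runge-Kutta integration of the logistic model."""
    P, t = P0, 0.0
    while t < t_end - 1e-12:
        k1 = logistic_rhs(P, r, K)
        k2 = logistic_rhs(P + 0.5 * dt * k1, r, K)
        k3 = logistic_rhs(P + 0.5 * dt * k2, r, K)
        k4 = logistic_rhs(P + dt * k3, r, K)
        P += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return P

# Illustrative parameters (made up, not the paper's estimates):
r, K = 0.03, 60.0              # growth rate per year; carrying capacity (millions)
fuzzy_P0 = (42.0, 43.0, 44.0)  # triangular fuzzy initial value (lower, mode, upper)
fuzzy_P50 = tuple(rk4(P0, r, K, 50.0) for P0 in fuzzy_P0)
```

Because every trajectory converges to the carrying capacity K, the spread between the three fuzzy branches shrinks over time, which is consistent with the paper's finding that the fuzziness may disappear in the long term.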

  15. Estimating genetic covariance functions assuming a parametric correlation structure for environmental effects

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2001-11-01

    Full Text Available Abstract A random regression model for the analysis of "repeated" records in animal breeding is described which combines a random regression approach for additive genetic and other random effects with the assumption of a parametric correlation structure for within-animal covariances. Both stationary and non-stationary correlation models involving a small number of parameters are considered. Heterogeneity in within-animal variances is modelled through polynomial variance functions. Estimation of the parameters describing the dispersion structure of such a model by restricted maximum likelihood via an "average information" algorithm is outlined. An application to mature weight records of beef cows is given, and results are contrasted to those from analyses fitting sets of random regression coefficients for permanent environmental effects.
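The within-animal covariance structure described above can be illustrated with one simple instance of the model class: a stationary first-order autoregressive correlation (one parameter, rho) combined with a polynomial variance function of age. The paper's actual parameterisation may differ; the ages, rho, and polynomial coefficients below are assumptions.

```python
import math

def within_animal_cov(ages, rho, poly_coeffs):
    """Within-animal covariance matrix built from
    (i) a stationary correlation model: corr(a_i, a_j) = rho ** |a_i - a_j|,
    (ii) a polynomial variance function: v(a) = sum_k c_k * a**k.
    One simple instance of the model class in the abstract; illustrative only."""
    def variance(a):
        return sum(c * a**k for k, c in enumerate(poly_coeffs))
    n = len(ages)
    cov = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            corr = rho ** abs(ages[i] - ages[j])
            cov[i][j] = corr * math.sqrt(variance(ages[i]) * variance(ages[j]))
    return cov

# Records at ages 2, 3, and 4 years; all parameter values are made up.
cov = within_animal_cov([2.0, 3.0, 4.0], rho=0.8, poly_coeffs=[100.0, 20.0])
```

The attraction of this parameterisation is parsimony: an unstructured covariance among n ages needs n(n+1)/2 parameters, whereas here a single correlation parameter plus a low-order variance polynomial covers any number of ages.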

  16. A statistical model for aggregating judgments by incorporating peer predictions

    OpenAIRE

    McCoy, John; Prelec, Drazen

    2017-01-01

    We propose a probabilistic model to aggregate the answers of respondents answering multiple-choice questions. The model does not assume that everyone has access to the same information, and so does not assume that the consensus answer is correct. Instead, it infers the most probable world state, even if only a minority vote for it. Each respondent is modeled as receiving a signal contingent on the actual world state, and as using this signal to both determine their own answer and predict the ...
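The full model is Bayesian, but its key idea, that an answer endorsed more often than respondents predicted is evidence for a world state even a minority occupies, can be sketched with the simpler "surprisingly popular" decision rule from the same authors. The question and all numbers below are illustrative.

```python
def surprisingly_popular(votes, predictions):
    """Pick the answer whose actual vote share most exceeds the vote share
    respondents predicted it would get. A simplified rule in the spirit of
    the model (which is fully Bayesian); illustrative only.

    votes:       list of chosen answers, one per respondent
    predictions: list of dicts, each mapping answer -> predicted vote share
    """
    answers = set(votes)
    n = len(votes)
    actual = {a: votes.count(a) / n for a in answers}
    predicted = {a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
                 for a in answers}
    return max(answers, key=lambda a: actual[a] - predicted[a])

# "Is Philadelphia the capital of Pennsylvania?" A majority wrongly says yes,
# but nearly everyone predicts a yes-landslide, so the minority answer "no"
# is surprisingly popular. Numbers are made up.
votes = ["yes"] * 6 + ["no"] * 4
predictions = [{"yes": 0.8, "no": 0.2}] * 10
print(surprisingly_popular(votes, predictions))  # prints "no"
```

Here the actual "no" share (0.4) exceeds its predicted share (0.2), while "yes" underperforms its prediction, so the rule selects the minority answer, exactly the behaviour the abstract describes.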

  17. Lip colour affects perceived sex typicality and attractiveness of human faces.

    Science.gov (United States)

    Stephen, Ian D; McKeegan, Angela M

    2010-01-01

    The luminance contrast between facial features and facial skin is greater in women than in men, and women's use of make-up enhances this contrast. In black-and-white photographs, increased luminance contrast enhances femininity and attractiveness in women's faces, but reduces masculinity and attractiveness in men's faces. In Caucasians, much of the contrast between the lips and facial skin is in redness. Red lips have been considered attractive in women in geographically and temporally diverse cultures, possibly because they mimic vasodilation associated with sexual arousal. Here, we investigate the effects of lip luminance and colour contrast on the attractiveness and sex typicality (masculinity/femininity) of human faces. In a Caucasian sample, we allowed participants to manipulate the colour of the lips in colour-calibrated face photographs along CIELab L* (light--dark), a* (red--green), and b* (yellow--blue) axes to enhance apparent attractiveness and sex typicality. Participants increased redness contrast to enhance femininity and attractiveness of female faces, but reduced redness contrast to enhance masculinity of men's faces. Lip blueness was reduced more in female than male faces. Increased lightness contrast enhanced the attractiveness of both sexes, and had little effect on perceptions of sex typicality. The association between lip colour contrast and attractiveness in women's faces may be attributable to its association with oxygenated blood perfusion indicating oestrogen levels, sexual arousal, and cardiac and respiratory health.
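The manipulation described above, shifting lip colour along CIELab axes relative to facial skin, reduces to simple arithmetic on the a* (red-green) channel. The skin and lip values below are invented, and the helper names are hypothetical; the abstract does not specify the interface participants used.

```python
def redness_contrast(lip_a, skin_a):
    """CIELab a* contrast between lips and facial skin
    (positive = lips redder than skin). Example values are illustrative."""
    return lip_a - skin_a

def adjust_lip_redness(lip_a, delta):
    """Shift lip a* along the red-green axis, as participants did when
    manipulating the photographs (hypothetical helper, not the study's code)."""
    return lip_a + delta

skin_a = 12.0  # made-up skin redness
lip_a = 22.0   # made-up baseline lip redness

# The reported pattern: more redness contrast feminises female faces,
# less redness contrast masculinises male faces.
feminised = adjust_lip_redness(lip_a, +6.0)
masculinised = adjust_lip_redness(lip_a, -6.0)
```

The same scheme extends to the L* (light-dark) and b* (yellow-blue) axes the participants also controlled.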

  18. Typical and Atypical Dementia Family Caregivers: Systematic and Objective Comparisons

    Science.gov (United States)

    Nichols, Linda O.; Martindale-Adams, Jennifer; Burns, Robert; Graney, Marshall J.; Zuber, Jeffrey

    2011-01-01

    This systematic, objective comparison of typical (spouse, children) and atypical (in-law, sibling, nephew/niece, grandchild) dementia family caregivers examined demographic, caregiving and clinical variables. Analysis was of 1,476 caregivers, of whom 125 were atypical, from the Resources for Enhancing Alzheimer's Caregivers Health (REACH I and II)…

  19. Typically Diverse: The Nature of Urban Agriculture in South Australia

    Directory of Open Access Journals (Sweden)

    Georgia Pollard

    2018-03-01

    Full Text Available In our visions of the future, urban agriculture has long been considered an integral part of the ‘sustainable city’. Yet urban agriculture is an incredibly diverse and variable field of study, and many practical aspects remain overlooked and understudied. This paper explores the economic sustainability of urban agriculture by focusing on the physical, practical, and economic aspects of home food gardens in South Australia. New data from the Edible Gardens project online survey is presented on a broad range of current garden setups, including a figure illustrating the statistically typical South Australian food garden. The differences between the survey data and a recent optimized garden model further highlight the gap in knowledge regarding existing home food gardens. With regard to the financial accessibility and economic sustainability of home food gardens, there is still much more work to be done. Although saving money is a top motivation, with many survey respondents believing that they do succeed in saving money, it remains to be seen whether their current gardening practices support this aspiration. Measurement of the full costs of different gardens would allow for better predictions of whether growing food can save households money, and under what circumstances.

  20. Panel Smooth Transition Regression Models

    DEFF Research Database (Denmark)

    González, Andrés; Terasvirta, Timo; Dijk, Dick van

    We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...