WorldWideScience

Sample records for source models spanning

  1. The Influence of Dietary Fat Source on Life Span in Calorie Restricted Mice.

    Science.gov (United States)

    López-Domínguez, José A; Ramsey, Jon J; Tran, Dianna; Imai, Denise M; Koehne, Amanda; Laing, Steven T; Griffey, Stephen M; Kim, Kyoungmi; Taylor, Sandra L; Hagopian, Kevork; Villalba, José M; López-Lluch, Guillermo; Navas, Plácido; McDonald, Roger B

    2015-10-01

    Calorie restriction (CR) without malnutrition extends life span in several animal models. It has been proposed that a decrease in the amount of polyunsaturated fatty acids (PUFAs), and especially n-3 fatty acids, in membrane phospholipids may contribute to life span extension with CR. Phospholipid PUFAs are sensitive to dietary fatty acid composition, and thus, the purpose of this study was to determine the influence of dietary lipids on life span in CR mice. C57BL/6J mice were assigned to four groups (a 5% CR control group and three 40% CR groups) and fed diets with soybean oil (high in n-6 PUFAs), fish oil (high in n-3 PUFAs), or lard (high in saturated and monounsaturated fatty acids) as the primary lipid source. Life span was increased (p < 0.05) in the 40% CR groups compared with the control group. Life span was also increased (p < 0.05) in the lard-fed CR group compared with the other 40% CR groups. These results indicate that dietary fat composition can influence life span in mice on CR, and suggest that a diet containing a low proportion of PUFAs and high proportion of monounsaturated and saturated fats may maximize life span in animals maintained on CR. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Modelling ecosystem service flows under uncertainty with stochastic SPAN

    Science.gov (United States)

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.
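
    The core idea, propagating probabilistic inputs through a stochastic transport model to obtain per-location output distributions, can be sketched in a few lines. The following Python fragment is a toy Monte Carlo illustration of that pattern, not the SPAN algorithms themselves; the cell count, distributions, and parameter values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D chain of cells: a service "flow" leaves a source cell and
# is attenuated on its way toward beneficiaries. Inputs are distributions,
# not point values.
n_cells, n_draws = 10, 5000
source_mean, source_sd = 100.0, 15.0     # uncertain source strength
trans_a, trans_b = 8.0, 2.0              # per-cell transmission ~ Beta(a, b)

flows = np.empty((n_draws, n_cells))
for i in range(n_draws):
    s = rng.normal(source_mean, source_sd)    # sample the source layer
    t = rng.beta(trans_a, trans_b, n_cells)   # sample the transmission layer
    flows[i] = s * np.cumprod(t)              # stochastic transport downstream

# Probabilistic outputs: a mean map plus a map of uncertainty, the feature
# the stochastic adaptation adds over a single deterministic run.
print("mean flow per cell:", flows.mean(axis=0).round(1))
print("std  flow per cell:", flows.std(axis=0).round(1))
```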

  3. The rate of source memory decline across the adult life span.

    Science.gov (United States)

    Cansino, Selene; Estrada-Manilla, Cinthya; Hernández-Ramos, Evelia; Martínez-Galindo, Joyce Graciela; Torres-Trejo, Frine; Gómez-Fernández, Tania; Ayala-Hernández, Mariana; Osorio, David; Cedillo-Tinoco, Melisa; Garcés-Flores, Lissete; Gómez-Melgarejo, Sandra; Beltrán-Palacios, Karla; Guadalupe García-Lázaro, Haydée; García-Gutiérrez, Fabiola; Cadena-Arenas, Yadira; Fernández-Apan, Luisa; Bärtschi, Andrea; Resendiz-Vera, Julieta; Rodríguez-Ortiz, María Dolores

    2013-05-01

    Previous studies have suggested that the ability to remember contextual information related to specific episodic experiences declines with advancing age; however, the exact moment in the adult life span when this deficit begins is still controversial. Source memory for spatial information was tested in a life span sample of 1,500 adults between the ages of 21 and 80. Initially, images of common objects were randomly presented in one quadrant of a screen while the participants judged whether they were natural or artificial. During the retrieval phase, these same images were mixed with new ones, and all images were displayed in the center of the screen. The participants were asked to judge whether each image was new or old and, if it was old, to indicate in which quadrant of the screen it had originally been presented. Source accuracy decreased linearly with advancing age at a rate of 0.6% per year across all decades, even after controlling for educational level; this decline was unaffected by sex. These results reveal that either spatial information becomes less efficiently bound to episodic representations over time or that the ability to retrieve this information decreases gradually throughout the adult life span.

  4. An Approach for Maintaining Models of an E-Commerce Collaboration

    NARCIS (Netherlands)

    Bodenstaff, L.; Wombacher, Andreas; Reichert, M.U.; Wieringa, Roelf J.

    To keep an overview of complex E-Commerce collaborations, several models are used to describe them. When models overlap in describing a collaboration, the overlapping information should not contradict. Models are of a different nature and are maintained by different people. Therefore, keeping model-overlap

  5. Tor1/Sch9-regulated carbon source substitution is as effective as calorie restriction in life span extension.

    Directory of Open Access Journals (Sweden)

    Min Wei

    2009-05-01

    The effect of calorie restriction (CR) on life span extension, demonstrated in organisms ranging from yeast to mice, may involve the down-regulation of pathways, including Tor, Akt, and Ras. Here, we present data suggesting that yeast Tor1 and Sch9 (a homolog of the mammalian kinases Akt and S6K) are central components of a network that controls a common set of genes implicated in a metabolic switch from the TCA cycle and respiration to glycolysis and glycerol biosynthesis. During chronological survival, mutants lacking SCH9 depleted extracellular ethanol and reduced stored lipids, but synthesized and released glycerol. Deletion of the glycerol biosynthesis genes GPD1, GPD2, or RHR2, among the most up-regulated in long-lived sch9Delta, tor1Delta, and ras2Delta mutants, was sufficient to reverse chronological life span extension in sch9Delta mutants, suggesting that glycerol production, in addition to the regulation of stress resistance systems, optimizes life span extension. Glycerol, unlike glucose or ethanol, did not adversely affect the life span extension induced by calorie restriction or starvation, suggesting that carbon source substitution may represent an alternative to calorie restriction as a strategy to delay aging.

  6. High-Energy, Multi-Octave-Spanning Mid-IR Sources via Adiabatic Difference Frequency Generation

    Science.gov (United States)

    2016-10-17

    ... adiabatic difference frequency generation (ADFG) stage, illustrated in Fig. 2. This system represents a very simple extension of a near-IR OPCPA system to octave-spanning mid-IR, requiring ... retrieved, as shown in Fig. 10. For illustration, 3 pulse shapes were selected. First, a simple linear chirp was applied to show that the pulse can be

  7. Aeroelastic stability of full-span tiltrotor aircraft model in forward flight

    Directory of Open Access Journals (Sweden)

    Zhiquan LI

    2017-12-01

    The existing full-span models of tiltrotor aircraft adopt a rigid blade model and do not consider the coupling among the elastic blade, wing and fuselage. To overcome these limitations and improve the precision of aeroelastic analysis of tiltrotor aircraft in forward flight, an aeroelastic stability analysis model of a full-span tiltrotor aircraft in forward flight is presented in this paper, accounting for the coupling among the elastic blade, wing, fuselage and other components. The analytical model is validated by comparison with calculation results and experimental data from existing references. The influence of structural parameters, such as the fuselage degrees of freedom, the relative displacement between the hub center and the gravity center, and the nacelle length, on system stability is also investigated. The results show that the fuselage degrees of freedom decrease the critical stability velocity of the tiltrotor aircraft, that variation of the structural parameters has a great influence on system stability, and that the instability mode of the system can change between anti-symmetric and symmetric wing motions of vertical and chordwise bending. Keywords: Aeroelastic stability, Forward flight, Full-span model, Modal analysis, Tiltrotor aircraft

  8. Truth-telling and Nash equilibria in minimum cost spanning tree models

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2012-01-01

    In this paper we consider the minimum cost spanning tree model. We assume that a central planner aims at implementing a minimum cost spanning tree not knowing the true link costs. The central planner sets up a game where agents announce link costs, a tree is chosen and costs are allocated according to the rules of the game. We characterize ways of allocating costs such that true announcements constitute Nash equilibria both in case of full and incomplete information. In particular, we find that the Shapley rule based on the irreducible cost matrix is consistent with truthful announcements while a series...
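
    As a concrete companion to the model, the sketch below computes a minimum cost spanning tree and the irreducible cost matrix to which the Shapley rule mentioned above is applied (the irreducible cost of a pair is, under the standard definition, the largest edge cost on the unique tree path between them). This is a generic Python illustration, not code from the paper; the example cost matrix is invented.

```python
def mst_edges(n, cost):
    """Prim's algorithm. Nodes are 0..n-1 (node 0 the source); cost is a
    symmetric matrix of announced link costs."""
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: cost[e[0]][e[1]])
        in_tree.add(j)
        edges.append((i, j))
    return edges

def irreducible_costs(n, cost):
    """c*(i, j) = maximum edge cost on the unique MST path between i and j."""
    adj = {v: [] for v in range(n)}
    for i, j in mst_edges(n, cost):
        adj[i].append((j, cost[i][j]))
        adj[j].append((i, cost[i][j]))

    def max_on_path(s, t):  # DFS on the tree; the path to t is unique
        stack = [(s, None, 0)]
        while stack:
            v, parent, best = stack.pop()
            if v == t:
                return best
            stack += [(w, v, max(best, c)) for w, c in adj[v] if w != parent]

    return [[0 if i == j else max_on_path(i, j) for j in range(n)]
            for i in range(n)]

cost = [[0, 4, 3, 5],
        [4, 0, 6, 2],
        [3, 6, 0, 7],
        [5, 2, 7, 0]]
print(irreducible_costs(4, cost))
```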

  9. Working Memory Span Development: A Time-Based Resource-Sharing Model Account

    Science.gov (United States)

    Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie

    2009-01-01

    The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…

  10. A Preemptive Link State Spanning Tree Source Routing Scheme for Opportunistic Data Forwarding in MANET

    OpenAIRE

    R. Poonkuzhali; M. Y. Sanavullah; A. Sabari

    2014-01-01

    Opportunistic Data Forwarding (ODF) has drawn much attention in mobile ad hoc networking research in recent years. The effectiveness of ODF in a MANET depends on a suitable routing protocol that provides powerful source routing services. PLSR features source routing, loop freedom and small routing overhead. The update messages in PLSR are integrated into a tree structure, and routing updates need not be time-stamped, which reduces the routing overhead.
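
    The mechanics behind such a scheme, building a spanning tree from topology updates and then reading a loop-free source route off the tree, can be illustrated generically. This BFS-based Python sketch is a stand-in for the idea, not the PLSR protocol itself; the topology is invented.

```python
from collections import deque

def build_tree(adj, root):
    """Breadth-first spanning tree of the known topology, rooted at the
    source node; parent pointers encode the whole tree compactly."""
    parent, queue = {root: None}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def source_route(parent, dst):
    """Walk parent pointers back to the root; reversing gives a loop-free
    source route the sender can embed in each packet."""
    hops = []
    while dst is not None:
        hops.append(dst)
        dst = parent[dst]
    return hops[::-1]

topology = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"],
            "C": ["A", "B", "D"], "D": ["C"]}
tree = build_tree(topology, "S")
print(source_route(tree, "D"))  # -> ['S', 'A', 'C', 'D']
```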

  11. A Comparative Assessment of Aerodynamic Models for Buffeting and Flutter of Long-Span Bridges

    Directory of Open Access Journals (Sweden)

    Igor Kavrakov

    2017-12-01

    Wind-induced vibrations commonly represent the leading criterion in the design of long-span bridges. The aerodynamic forces in bridge aerodynamics are mainly based on the quasi-steady and linear unsteady theory. This paper aims to investigate different formulations of self-excited and buffeting forces in the time domain by comparing the dynamic response of a multi-span cable-stayed bridge during the critical erection condition. The bridge is selected to represent a typical reference object with a bluff concrete box girder for large river crossings. The models are viewed from a perspective of model complexity, comparing the influence of the aerodynamic properties implied in the aerodynamic models, such as aerodynamic damping and stiffness, fluid memory in the buffeting and self-excited forces, aerodynamic nonlinearity, and aerodynamic coupling on the bridge response. The selected models are studied for a wind-speed range that is typical for the construction stage for two levels of turbulence intensity. Furthermore, a simplified method for the computation of buffeting forces including the aerodynamic admittance is presented, in which rational approximation is avoided. The critical flutter velocities are also compared for the selected models under laminar flow. Keywords: Buffeting, Flutter, Long-span bridges, Bridge aerodynamics, Bridge aeroelasticity, Erection stage

  12. Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy

    Science.gov (United States)

    Hall, Matthew L.; Bavelier, Daphne

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory--perception, encoding, and recall--in this effect. The present study…

  13. Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy

    OpenAIRE

    Hall, Matthew L.

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory – perception, encoding, and recall – in this effect. The present study factorially manipulates whether American Sign Language (ASL) or English was used for perception, memory encoding, and recall in hearing ASL-English b...

  14. Short-term memory stages in sign vs. speech: The source of the serial span discrepancy

    OpenAIRE

    Hall, Matthew L.; Bavelier, Daphné

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory – perception, encoding, and recall – in this effect. The present study factorially manipulates whether American Sign Language (ASL) or English is used for perception, memory encoding, and recall in hearing ASL-English bi...

  15. Correspondence between spanning trees and the Ising model on a square lattice

    Science.gov (United States)

    Viswanathan, G. M.

    2017-06-01

    An important problem in statistical physics concerns the fascinating connections between partition functions of lattice models studied in equilibrium statistical mechanics on the one hand and graph theoretical enumeration problems on the other hand. We investigate the nature of the relationship between the number of spanning trees and the partition function of the Ising model on the square lattice. The spanning tree generating function T(z) gives the spanning tree constant when evaluated at z = 1, while giving the lattice Green function when differentiated. It is known that for the infinite square lattice the partition function Z(K) of the Ising model evaluated at the critical temperature K = Kc is related to T(1). Here we show that this idea in fact generalizes to all real temperatures. We prove that [Z(K) sech 2K]^2 = k exp[T(k)], where k = 2 tanh(2K) sech(2K). The identical Mahler measure connects the two seemingly disparate quantities T(z) and Z(K). In turn, the Mahler measure is determined by the random walk structure function. Finally, we show that the above correspondence does not generalize in a straightforward manner to nonplanar lattices.
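
    The enumeration side of this correspondence is easy to experiment with numerically. The Python sketch below uses Kirchhoff's matrix-tree theorem (a standard tool, not the paper's derivation) to count spanning trees of finite square grids; the per-site log count approaches the square-lattice spanning tree constant 4G/π ≈ 1.1662 as the grid grows.

```python
import numpy as np

def grid_laplacian(m, n):
    """Graph Laplacian of an m-by-n square grid."""
    N = m * n
    L = np.zeros((N, N))
    idx = lambda r, c: r * n + c
    for r in range(m):
        for c in range(n):
            for dr, dc in ((1, 0), (0, 1)):  # right and down neighbors
                r2, c2 = r + dr, c + dc
                if r2 < m and c2 < n:
                    a, b = idx(r, c), idx(r2, c2)
                    L[a, a] += 1; L[b, b] += 1
                    L[a, b] -= 1; L[b, a] -= 1
    return L

for size in (4, 8, 16):
    L = grid_laplacian(size, size)
    # Matrix-tree theorem: #spanning trees = det of the Laplacian with one
    # row and column removed; work with log-determinants for stability.
    sign, logdet = np.linalg.slogdet(L[1:, 1:])
    print(size, logdet / size**2)  # tends toward 4G/pi = 1.16624...
```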

  16. A Statistical Model for Generating a Population of Unclassified Objects and Radiation Signatures Spanning Nuclear Threats

    International Nuclear Information System (INIS)

    Nelson, K.; Sokkappa, P.

    2008-01-01

    This report describes an approach for generating a simulated population of plausible nuclear threat radiation signatures spanning a range of variability that could be encountered by radiation detection systems. In this approach, we develop a statistical model for generating random instances of smuggled nuclear material. The model is based on physics principles and bounding cases rather than on intelligence information or actual threat device designs. For this initial stage of work, we focus on random models using fissile material and do not address scenarios using non-fissile materials. The model has several uses. It may be used as a component in a radiation detection system performance simulation to generate threat samples for injection studies. It may also be used to generate a threat population to be used for training classification algorithms. In addition, we intend to use this model to generate an unclassified 'benchmark' threat population that can be openly shared with other organizations, including vendors, for use in radiation detection systems performance studies and algorithm development and evaluation activities. We assume that a quantity of fissile material is being smuggled into the country for final assembly and that shielding may have been placed around the fissile material. In terms of radiation signature, a nuclear weapon is basically a quantity of fissile material surrounded by various layers of shielding. Thus, our model of smuggled material is expected to span the space of potential nuclear weapon signatures as well. For computational efficiency, we use a generic 1-dimensional spherical model consisting of a fissile material core surrounded by various layers of shielding. The shielding layers and their configuration are defined such that the model can represent the potential range of attenuation and scattering that might occur. The materials in each layer and the associated parameters are selected from probability distributions that span the
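
    The sampling structure described, a spherical core with randomly drawn shielding layers, follows a familiar generative pattern, sketched below in Python. Every material name, range, and distribution here is a made-up placeholder; the report's actual parameter choices are not public in this record.

```python
import random

SHIELD_MATERIALS = ["polyethylene", "steel", "lead", "tungsten"]  # placeholders

def random_instance(rng):
    """Draw one 1-D spherical configuration: a fissile core plus a random
    number of shielding layers with randomly drawn thicknesses."""
    core = {"material": rng.choice(["HEU", "Pu"]),
            "mass_kg": rng.uniform(1.0, 25.0)}
    layers = [{"material": rng.choice(SHIELD_MATERIALS),
               "thickness_cm": rng.uniform(0.5, 5.0)}
              for _ in range(rng.randint(0, 3))]
    return {"core": core, "shielding": layers}

rng = random.Random(42)
population = [random_instance(rng) for _ in range(1000)]  # simulated population
print(population[0])
```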

  17. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  18. A chip-level modeling approach for rail span collapse and survivability analyses

    International Nuclear Information System (INIS)

    Marvis, D.G.; Alexander, D.R.; Dinger, G.L.

    1989-01-01

    A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level and interactive graphical postprocessing provides a rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k C MOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout

  19. Effects of low dose rate irradiation on life span prolongation of human premature-aging syndrome model mice

    International Nuclear Information System (INIS)

    Nomura, Takaharu

    2006-01-01

    We previously showed that the life span of Type II diabetes model mice was prolonged by life-long low-dose-rate irradiation. We also found that antioxidant functions in various tissues of some strains of mice were enhanced after low-dose/low-dose-rate irradiation. The prolongation of life span might depend on a certain level of damage by reactive oxygen species, and we attributed the prolongation effect to the enhancement of antioxidant activities after irradiation. Here we investigated whether the enhancement of antioxidant activities after low-dose-rate irradiation has an effect on life span prolongation. Four-week-old female human premature-aging syndrome model mice, kl/kl (klotho) mice, whose life span is about 65 days, were irradiated with gamma rays at 0.35, 0.70 or 1.2 mGy/hr. The 0.70 mGy/hr irradiation had a remarkable effect on the prolongation of life span: some mice in this group survived for about 100 days or more. Antioxidant activities in the irradiated groups were enhanced by low-dose-rate irradiation, although no clear dose-rate dependence was observed. These results suggest that the antioxidant activities in this model mouse were enhanced by the low-dose-rate irradiation, which may make it possible to prolong the life span of this mouse. (author)

  20. Boundary Spanning

    DEFF Research Database (Denmark)

    Zølner, Mette

    The paper explores how locals span boundaries between corporate and local levels. The aim is to better comprehend the potentialities and challenges when MNCs draw on locals' culture-specific knowledge. The study is based on an in-depth, interpretive case study of boundary spanning by local actors in ... approach with pattern matching is a way to shed light on the tacit local knowledge that organizational actors cannot articulate and that an exclusively inductive research is not likely to unveil.

  1. Solar radiation transmissivity of a single-span greenhouse through measurements on scale models

    International Nuclear Information System (INIS)

    Papadakis, G.; Manolakos, D.; Kyritsis, S.

    1998-01-01

    The solar transmissivity of a single-span greenhouse has been investigated experimentally using a scale model 40 cm wide and 80 cm long. The solar transmissivity was measured at 48 positions on the "ground" surface of the scale model using 48 small silicon solar cells. The greenhouse model was positioned horizontally on a specially made goniometric mechanism. In this way, the greenhouse azimuth could be changed so that typical days of the year could be simulated using different combinations of greenhouse azimuth and the position of the sun in the sky. The measured solar transmissivity distribution at the "ground" surface and the average greenhouse solar transmissivity are presented and analysed for characteristic days of the year, for winter and summer, at a latitude of 37°58′ N (Athens, Greece). It is shown that at this latitude during winter, the E–W orientation is preferable to the N–S one. The side walls, and especially the East and West ones in the E–W orientation, considerably reduce the greenhouse transmissivity in areas close to the walls for long periods of the day, when the angle of incidence of the solar rays on these walls is large. (author)

  2. The concentric model of human working memory: A validation study using complex span and updating tasks

    Directory of Open Access Journals (Sweden)

    Velichkovsky B. B.

    2017-09-01

    Background. Working memory (WM) seems to be central to most forms of high-level cognition. This fact is fueling the growing interest in studying its structure and functional organization. The influential "concentric model" (Oberauer, 2002) suggests that WM contains a processing component and two storage components with different capacity limitations and sensitivity to interference. There is, to date, only limited support for the concentric model in the research literature, and it is limited to a number of specially designed tasks. Objective. In the present paper, we attempted to validate the concentric model by testing its major predictions using complex span and updating tasks in a number of experimental paradigms. Method. The model predictions were tested with the help of a review of data obtained primarily in our own experiments in several research domains, including Sternberg's additive factors method; the factor structure of WM; serial position effects in WM; and WM performance in a sample with episodic long-term memory deficits. Results. Predictions generated by the concentric model were shown to hold in all these domains. In addition, several new properties of WM were identified. In particular, we recently found that WM indeed contains a processing component which functions independently of storage components. In turn, the latter were found to form a storage hierarchy which balances fast access to selected items with the storing of large amounts of potentially relevant information. Processing and storage in WM were found to be dependent on shared cognitive resources which are dynamically allocated between WM components according to actual task requirements. The implications of these findings for the theory of WM are discussed. Conclusion. The concentric model was shown to be valid with respect to standard WM tasks. The concentric model offers promising research perspectives for the study of higher-order cognition, including underlying

  3. Photovoltaic sources modeling

    CERN Document Server

    Petrone, Giovanni; Spagnuolo, Giovanni

    2016-01-01

    This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. By providing a thorough comparison among the models, it gives engineers all the elements needed to choose the right PV array model for specific applications or environmental conditions, matched with the model of the electronic circuit used to maximize the PV power production.

  4. Use of linear model analysis techniques in the evaluation of radiation effects on the life span of the beagle

    International Nuclear Information System (INIS)

    Angleton, G.M.; Lee, A.C.; Benjamin, S.A.

    1986-01-01

    The dependency of the beagle-dog life span on level of and age at exposure to 60Co gamma radiation was analyzed by several techniques; one of these methods was linear model analysis. Beagles of both sexes were given single, bilateral exposures at 8, 28, or 55 days postcoitus (dpc) or at 2, 70, or 365 days postpartum (dpp). Dogs exposed at 8, 28, or 55 dpc or at 2 dpp received 0, 20, or 100 R, whereas those exposed at 70 or 365 dpp received 0 or 100 R. Beagles were designated initially either as sacrifice or as life-span animals. All deaths of life-span study animals were classified as spontaneous; hence, for this group, the mean age of death was a quantitative response that can be analyzed by linear model analysis techniques. Such analyses for each age group were performed, taking into account differences due to sex, linear and quadratic dependency on dose, and interaction between sex and dose. At this time most of the animals have reached 11 years of age. No significant effects of radiation on mean life span have been detected. 6 refs., 3 figs., 3 tabs
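
    A linear model of the kind described, mean age at death regressed on sex, linear and quadratic dose terms, and a sex-by-dose interaction, can be set up directly as a design matrix. The Python sketch below uses invented numbers purely to show the model structure; it is not the study's data or code.

```python
import numpy as np

# Invented records (not study data): sex coded 0/1, dose in roentgen,
# age at death in days.
sex  = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1])
dose = np.array([0, 20, 100, 0, 20, 100, 0, 0, 100, 20])
age  = np.array([4380, 4300, 4260, 4500, 4480, 4390,
                 4410, 4550, 4200, 4460])

# Columns: intercept, sex, linear dose, quadratic dose, sex-by-dose interaction.
X = np.column_stack([np.ones_like(dose), sex, dose, dose**2, sex * dose])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)
print(beta.round(4))  # fitted coefficients for each model term
```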

  5. Half-Watt average power femtosecond source spanning 3-8 µm based on subharmonic generation in GaAs

    Science.gov (United States)

    Smolski, Viktor; Vasilyev, Sergey; Moskalev, Igor; Mirov, Mike; Ru, Qitian; Muraviev, Andrey; Schunemann, Peter; Mirov, Sergey; Gapontsev, Valentin; Vodopyanov, Konstantin

    2018-06-01

    Frequency combs with a wide instantaneous spectral span covering the 3-20 µm molecular fingerprint region are highly desirable for broadband and high-resolution frequency comb spectroscopy, trace molecular detection, and remote sensing. We demonstrate a novel approach for generating high-average-power middle-infrared (MIR) output suitable for producing frequency combs with an instantaneous spectral coverage close to 1.5 octaves. Our method is based on utilizing a highly efficient and compact Kerr-lens mode-locked Cr2+:ZnS laser, operating at a 2.35-µm central wavelength with 6-W average power, 77-fs pulse duration, and a high 0.9-GHz repetition rate, to pump a degenerate (subharmonic) optical parametric oscillator (OPO) based on a quasi-phase-matched GaAs crystal. Such a subharmonic OPO is a nearly ideal frequency converter capable of extending the benefits of frequency combs based on well-established mode-locked pump lasers to the MIR region through rigorous, phase- and frequency-locked down-conversion. We report a 0.5-W output in the form of an ultra-broadband spectrum spanning 3-8 µm measured at the 50-dB level.

  6. Time domain models for damping-controlled fluidelastic instability forces in multi-span tubes with loose supports

    International Nuclear Information System (INIS)

    Hassan, M.A.; Rogers, R.J.; Gerber, A.G.

    2009-01-01

    This paper presents simulations of a loosely supported multi-span tube subjected to turbulence and fluidelastic instability forces. Several time-domain fluid force models simulating the damping-controlled fluidelastic instability mechanism in tube arrays are presented. These models, which include the negative damping model based on the Connors equation, fluid force coefficient-based models (Chen; Tanaka and Takahara), and two semi-analytical models (Price and Paidoussis; Lever and Weaver), were implemented in an in-house finite element code. Time-domain modeling challenges for each of these theories are discussed. The implemented models were validated against available experimental data. The linear simulations showed that the Connors-equation-based model exhibits the most conservative prediction of the critical flow velocity when the recommended design values for the Connors equation are used. The models were then utilized to simulate the nonlinear response of a three-span cantilever tube in a square lattice bar support subjected to air crossflow. The tube was subjected to a single-phase flow passing over one of the tube's spans. For each of these models the flow velocity and the support clearance were varied. Special attention was paid to the tube/support interaction parameters that affect wear, such as impact forces, contact ratio, and normal work rate. As the prediction of the linear threshold varies depending on the utilized model, the nonlinear response also differs. The investigated models exhibit similar response characteristics for the impact force, tip lift response, and work rate. Simulation results show that the Connors-based model underestimates the response and the tube/support interaction parameters for the loose support case. (author)
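
    For reference, the Connors relation underlying the negative damping model gives a critical pitch velocity U_c = K f D sqrt(m δ / (ρ D²)). A minimal Python helper is sketched below; the numerical values are hypothetical steam-generator-like inputs, not the paper's cases.

```python
import math

def connors_critical_velocity(K, f_n, D, m, delta, rho):
    """Critical pitch velocity at the fluidelastic instability threshold per
    the Connors equation: U_c / (f_n * D) = K * sqrt(m * delta / (rho * D**2)).
    Units: f_n [Hz], D [m], m [kg/m], rho [kg/m^3]; delta is log decrement."""
    return K * f_n * D * math.sqrt(m * delta / (rho * D ** 2))

# Hypothetical values (SI units); K = 3.0 is a commonly cited design value.
print(connors_critical_velocity(K=3.0, f_n=40.0, D=0.016, m=0.5,
                                delta=0.03, rho=1000.0))
```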

  7. Blocked edges on Eulerian maps and mobiles: application to spanning trees, hard particles and the Ising model

    International Nuclear Information System (INIS)

    Bouttier, J; Francesco, P Di; Guitter, E

    2007-01-01

    We introduce Eulerian maps with blocked edges as a general way to implement statistical matter models on random maps by a modification of intrinsic distances. We show how to code these dressed maps by means of mobiles, i.e. decorated trees with labelled vertices, leading to a closed system of recursion relations for their generating functions. We discuss particular solvable cases in detail, as well as various applications of our method to several statistical systems such as spanning trees on quadrangulations, mutually excluding particles on Eulerian triangulations or the Ising model on quadrangulations

  8. Experimental modeling of flow-induced vibration of multi-span U-tubes in a CANDU steam generator

    International Nuclear Information System (INIS)

    Mohany, A.; Feenstra, P.; Janzen, V.P.; Richard, R.

    2009-01-01

    Flow-induced vibration of the tubes in a nuclear steam generator is a concern for designers who are trying to increase the life span of these units. The dominant excitation mechanisms are fluidelastic instability and random turbulence excitation. The outermost U-bend region of the tubes is of greatest concern because the flow is almost perpendicular to the tube axis and the unsupported span is relatively long. The support system in this region must be well designed in order to minimize fretting wear of the tubes at the support locations. Much of the previous testing was conducted on straight single-span or cantilevered tubes in cross-flow. However, the dynamic response of steam generator multi-span U-tubes with clearance supports is expected to be different. Accurate modeling of the tube dynamics is important to properly simulate the dynamic interaction of the tube and supports. This paper describes a test program that was developed to measure the dynamic response of a bundle of steam generator U-tubes with Anti-Vibration Bar (AVB) supports, subjected to Freon two-phase cross-flow. The tube bundle has similar geometrical conditions to those expected for future CANDU steam generators. Future steam generators will be larger than previous CANDU steam generators, nearly twice the heat transfer area, with significant changes in process conditions in the U-bend region, such as increased steam quality and a broader range of flow velocities. This test program was initiated at AECL to demonstrate that the tube support design for future CANDU steam generators will meet the stringent requirements associated with a 60 year design life. The main objective of the tests is to address the issue of in-plane and out-of-plane fluidelastic instability and random turbulent excitation of a U-tube bundle with Anti-Vibration Bar (AVB) supports. Details of the test rig, measurement techniques and preliminary instrumentation results are described in the paper. (author)

  9. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  10. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
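
    The general shape of such an estimation, converting measured average strains into stresses via Hooke's law and locating the extremum of a fitted strain profile, can be sketched as follows. This Python fragment is a simplified stand-in assuming pure bending and a quadratic strain profile per span; it is not the paper's estimation model, and the gauge positions and readings are invented.

```python
import numpy as np

E = 205e9  # Young's modulus of steel, Pa (assumed)

def max_stress_from_strains(x, eps):
    """Fit a quadratic strain profile through gauge readings (x in m, eps in
    strain) and return the peak bending stress over the span via sigma = E*eps."""
    a, b, c = np.polyfit(x, eps, 2)
    candidates = list(x)
    if a != 0:
        x_vertex = -b / (2 * a)            # extremum of the fitted parabola
        if min(x) <= x_vertex <= max(x):
            candidates.append(x_vertex)
    eps_max = max(abs(np.polyval((a, b, c), xc)) for xc in candidates)
    return E * eps_max

# Three average strain readings along a 6 m span (made-up numbers).
print(max_stress_from_strains([0.0, 3.0, 6.0], [40e-6, 120e-6, 35e-6]) / 1e6, "MPa")
```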

  11. Bifactor Modeling of the Positive and Negative Syndrome Scale: Generalized Psychosis Spans Schizoaffective, Bipolar, and Schizophrenia Diagnoses.

    Science.gov (United States)

    Anderson, Ariana E; Marder, Stephen; Reise, Steven P; Savitz, Adam; Salvadore, Giacomo; Fu, Dong Jing; Li, Qingqin; Turkoz, Ibrahim; Han, Carol; Bilder, Robert M

    2018-02-06

    Common genetic variation spans schizophrenia, schizoaffective and bipolar disorders, but historically, these syndromes have been distinguished categorically. A symptom dimension shared across these syndromes, if such a general factor exists, might provide a clearer target for understanding and treating mental illnesses that share core biological bases. We tested the hypothesis that a bifactor model of the Positive and Negative Syndrome Scale (PANSS), containing 1 general factor and 5 specific factors (positive, negative, disorganized, excited, anxiety), explains the cross-diagnostic structure of symptoms better than the traditional 5-factor model, and examined the extent to which a general factor reflects the overall severity of symptoms spanning diagnoses in 5094 total patients with a diagnosis of schizophrenia, schizoaffective, or bipolar disorder. The bifactor model provided superior fit across diagnoses, and was closer to the "true" model, compared to the traditional 5-factor model (Vuong test; P < .001). The general factor reflected overall symptom severity across schizophrenia, schizoaffective, and bipolar disorder. © The Author(s) 2018. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com

  12. Statistical modeling of biomedical corpora: mining the Caenorhabditis Genetic Center Bibliography for genes related to life span

    Directory of Open Access Journals (Sweden)

    Jordan MI

    2006-05-01

    Background: The statistical modeling of biomedical corpora could yield integrated, coarse-to-fine views of biological phenomena that complement discoveries made from analysis of molecular sequence and profiling data. Here, the potential of such modeling is demonstrated by examining the 5,225 free-text items in the Caenorhabditis Genetic Center (CGC) Bibliography using techniques from statistical information retrieval. Items in the CGC biomedical text corpus were modeled using the Latent Dirichlet Allocation (LDA) model. LDA is a hierarchical Bayesian model which represents a document as a random mixture over latent topics; each topic is characterized by a distribution over words. Results: An LDA model estimated from CGC items had better predictive performance than two standard models (unigram and mixture of unigrams) trained using the same data. To illustrate the practical utility of LDA models of biomedical corpora, a trained CGC LDA model was used for a retrospective study of nematode genes known to be associated with life span modification. Corpus-, document-, and word-level LDA parameters were combined with terms from the Gene Ontology to enhance the explanatory value of the CGC LDA model, and to suggest additional candidates for age-related genes. A novel, pairwise document similarity measure based on the posterior distribution on the topic simplex was formulated and used to search the CGC database for "homologs" of a "query" document discussing the life span-modifying clk-2 gene. Inspection of these document homologs enabled and facilitated the production of hypotheses about the function and role of clk-2. Conclusion: Like other graphical models for genetic, genomic and other types of biological data, LDA provides a method for extracting unanticipated insights and generating predictions amenable to subsequent experimental validation.
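
    The LDA workflow described, fit topics to a corpus and then compare documents on the topic simplex, is straightforward to reproduce in miniature. The Python sketch below uses scikit-learn and a tiny invented stand-in corpus; the paper's own implementation and its posterior-based similarity measure differ (Jensen-Shannon distance is substituted here for simplicity).

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

# Toy stand-in for CGC bibliography items (invented snippets).
docs = [
    "clk-2 mutants show extended life span and slow development",
    "daf-2 insulin signaling regulates longevity in C. elegans",
    "unc-54 myosin mutants are paralyzed with normal life span",
    "age-1 PI3K mutants live longer under dietary restriction",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
theta = lda.transform(X)  # per-document distribution on the topic simplex

# Pairwise document similarity on the topic simplex, used to rank
# "homologs" of a query document (here, the clk-2 item).
query = 0
sims = [1 - jensenshannon(theta[query], theta[d]) for d in range(len(docs))]
print(np.round(sims, 3))
```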

  13. EarthCube - Earth System Bridge: Spanning Scientific Communities with Interoperable Modeling Frameworks

    Science.gov (United States)

    Peckham, S. D.; DeLuca, C.; Gochis, D. J.; Arrigo, J.; Kelbert, A.; Choi, E.; Dunlap, R.

    2014-12-01

    In order to better understand and predict environmental hazards of weather/climate, ecology and deep earth processes, geoscientists develop and use physics-based computational models. These models are used widely both in academic and federal communities. Because of the large effort required to develop and test models, there is widespread interest in component-based modeling, which promotes model reuse and simplified coupling to tackle problems that often cross discipline boundaries. In component-based modeling, the goal is to make relatively small changes to models that make it easy to reuse them as "plug-and-play" components. Sophisticated modeling frameworks exist to rapidly couple these components to create new composite models. They allow component models to exchange variables while accommodating different programming languages, computational grids, time-stepping schemes, variable names and units. Modeling frameworks have arisen in many modeling communities. CSDMS (Community Surface Dynamics Modeling System) serves the academic earth surface process dynamics community, while ESMF (Earth System Modeling Framework) serves many federal Earth system modeling projects. Others exist in both the academic and federal domains and each satisfies design criteria that are determined by the community they serve. While they may use different interface standards or semantic mediation strategies, they share fundamental similarities. The purpose of the Earth System Bridge project is to develop mechanisms for interoperability between modeling frameworks, such as the ability to share a model or service component. This project has three main goals: (1) Develop a Framework Description Language (ES-FDL) that allows modeling frameworks to be described in a standard way so that their differences and similarities can be assessed. (2) Demonstrate that if a model is augmented with a framework-agnostic Basic Model Interface (BMI), then simple, universal adapters can go from BMI to a
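
    The Basic Model Interface mentioned here is a small set of control and query functions a model exposes so a framework can drive it without knowing its internals. A minimal Python sketch of the idea follows; the function names mirror the CSDMS BMI convention, but the toy diffusion model and its variable name are invented for illustration.

```python
import numpy as np

class BmiHeat:
    """Toy 1-D heat diffusion model exposing BMI-style control functions."""

    def initialize(self, config=None):
        self._time, self._dt, self._alpha = 0.0, 1.0, 0.25
        self._temp = np.zeros(21)
        self._temp[10] = 100.0               # initial hot spot

    def update(self):                         # advance one explicit time step
        t = self._temp
        t[1:-1] += self._alpha * (t[2:] - 2.0 * t[1:-1] + t[:-2])
        self._time += self._dt

    def get_current_time(self):
        return self._time

    def get_value(self, name):
        if name == "plate_surface__temperature":  # CSDMS-style variable name
            return self._temp.copy()
        raise KeyError(name)

    def finalize(self):
        self._temp = None

# Any framework (or adapter) can now run the model through the interface.
model = BmiHeat()
model.initialize()
while model.get_current_time() < 5.0:
    model.update()
print(model.get_value("plate_surface__temperature")[8:13].round(2))
model.finalize()
```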

  14. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites. This in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real-time information is regarded as most attractive.
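
    The regret-minimisation strategy referenced (RRM) has a compact functional form: the regret of alternative i sums ln(1 + exp(β_m(x_jm − x_im))) over competing alternatives j and attributes m, and choice probabilities follow a logit on negated regrets. A small Python illustration with invented attributes and coefficients:

```python
import numpy as np

def rrm_probabilities(X, beta):
    """Random Regret Minimisation choice probabilities.
    X: (alternatives x attributes) matrix; beta: attribute coefficients.
    R_i = sum_{j != i} sum_m ln(1 + exp(beta_m * (x_jm - x_im)))."""
    n = X.shape[0]
    R = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    exp_neg_R = np.exp(-R)               # logit on negated regrets
    return exp_neg_R / exp_neg_R.sum()

# Hypothetical website alternatives: [search_time_min, has_realtime_info]
X = np.array([[5.0, 1.0],
              [3.0, 0.0],
              [8.0, 1.0]])
beta = np.array([-0.4, 1.2])             # invented coefficients
print(rrm_probabilities(X, beta).round(3))
```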

  15. MetRxn: a knowledgebase of metabolites and reactions spanning metabolic models and databases

    Directory of Open Access Journals (Sweden)

    Kumar Akhil

    2012-01-01

    Background: Increasingly, metabolite and reaction information is organized in the form of genome-scale metabolic reconstructions that describe the reaction stoichiometry, directionality, and gene-to-protein-to-reaction associations. A key bottleneck in the pace of reconstruction of new, high-quality metabolic models is the inability to directly make use of metabolite/reaction information from biological databases or other models due to incompatibilities in content representation (i.e., metabolites with multiple names across databases and models), stoichiometric errors such as elemental or charge imbalances, and incomplete atomistic detail (e.g., use of generic R-groups or non-explicit specification of stereo-specificity). Description: MetRxn is a knowledgebase that includes standardized metabolite and reaction descriptions by integrating information from BRENDA, KEGG, MetaCyc, Reactome.org and 44 metabolic models into a single unified data set. All metabolite entries have matched synonyms, resolved protonation states, and are linked to unique structures. All reaction entries are elementally and charge balanced. This is accomplished through the use of a workflow of lexicographic, phonetic, and structural comparison algorithms. MetRxn allows for the download of standardized versions of existing genome-scale metabolic models and the use of metabolic information for the rapid reconstruction of new ones. Conclusions: The standardization in description allows for the direct comparison of the metabolite and reaction content between metabolic models and databases and the exhaustive prospecting of pathways for biotechnological production. This ever-growing dataset currently consists of over 76,000 metabolites participating in more than 72,000 reactions (including unresolved entries). MetRxn is hosted on a web-based platform that uses relational database models (MySQL).
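
    The elemental-balance guarantee is the easiest of these consistency checks to illustrate. The Python sketch below is a generic stand-in for such a test, not MetRxn's actual workflow, and handles only simple formulas without charges or R-groups.

```python
import re
from collections import Counter

def parse_formula(formula):
    """Count atoms in a simple formula like 'C6H12O6' (no parentheses/charges)."""
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(n or "1")
    return counts

def is_balanced(lhs, rhs):
    """Check elemental balance of a reaction given (formula, coefficient)
    pairs on each side."""
    def totals(side):
        t = Counter()
        for formula, coeff in side:
            for element, n in parse_formula(formula).items():
                t[element] += coeff * n
        return t
    return totals(lhs) == totals(rhs)

# Glucose + 6 O2 -> 6 CO2 + 6 H2O
print(is_balanced([("C6H12O6", 1), ("O2", 6)],
                  [("CO2", 6), ("H2O", 6)]))  # True
```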

  16. Development of the Circulation Control Flow Scheme Used in the NTF Semi-Span FAST-MAC Model

    Science.gov (United States)

    Jones, Gregory S.; Milholen, William E., II; Chan, David T.; Allan, Brian G.; Goodliff, Scott L.; Melton, Latunia P.; Anders, Scott G.; Carter, Melissa B.; Capone, Francis J.

    2013-01-01

    The application of a circulation control system for high Reynolds numbers was experimentally validated with the Fundamental Aerodynamic Subsonic Transonic Modular Active Control semi-span model in the NASA Langley National Transonic Facility. This model utilized four independent flow paths to modify the lift and thrust performance of a representative advanced transport type of wing. The design of the internal flow paths highlights the challenges associated with high Reynolds number testing in a cryogenic pressurized wind tunnel. Weight flow boundaries for the air delivery system were identified at mildly cryogenic conditions ranging from 0.1 to 10 lbm/sec. Results from the test verified system performance and identified solutions associated with the weight-flow metering system that are linked to internal perforated plates used to achieve flow uniformity at the jet exit.

  17. Three-dimensional dose-response models of competing risks and natural life span

    International Nuclear Information System (INIS)

    Raabe, O.G.

    1987-01-01

    Three-dimensional dose-rate/time/response surfaces for chronic exposure to carcinogens, toxicants, and ionizing radiation dramatically clarify the separate and interactive roles of competing risks. The three dimensions are average dose rate, exposure time, and risk. An illustration with computer graphics shows the contributions with the passage of time of the competing risks of death from radiation pneumonitis/fibrosis, lung cancer, and natural aging consequent to the inhalation of plutonium-239 dioxide by beagles. These relationships are further evaluated by mathematical stripping with three-dimensional illustrations that graphically show the resultant separate contribution of each fatal effect. Radiation pneumonitis predominates at high dose rates and lung cancer at intermediate dose rates. Low dose rates result in spontaneous deaths from natural aging, yielding a type of practical threshold for lung cancer induction. Risk assessment is benefited by the insights that become apparent with these three-dimensional models. The improved conceptualization afforded by them contributes to the planning and evaluation of epidemiological analyses and experimental studies involving chronic exposure to toxicants

  18. Finite element model updating of multi-span steel-arch-steel-girder bridges based on ambient vibrations

    Science.gov (United States)

    Hou, Tsung-Chin; Gao, Wei-Yuan; Chang, Chia-Sheng; Zhu, Guan-Rong; Su, Yu-Min

    2017-04-01

    The three-span steel-arch-steel-girder Jiaxian Bridge was newly constructed in 2010 to replace the former one that had been destroyed by Typhoon Sinlaku (2008, Taiwan). It was designed and built to continue the domestic service requirement, as well as to improve the tourism business of the Kaohsiung city government, Taiwan. This study aimed at establishing the baseline model of Jiaxian Bridge for hazardous scenario simulation such as typhoons, floods and earthquakes. The need for these precautionary measures is attributable to the inherent vulnerability of the site: near-fault and river-crossing conditions. The uncalibrated baseline bridge model was built with structural finite elements in accordance with the blueprints. Ambient vibration measurements were performed repeatedly to acquire the elastic dynamic characteristics of the bridge structure. Two frequency-domain system identification algorithms were employed to extract the measured operational modal parameters. Modal shapes, frequencies, and modal assurance criteria (MAC) were configured as the fitting targets so as to calibrate/update the structural parameters of the baseline model. It has been recognized that different types of structural parameters contribute distinguishably to the fitting targets, as this study has similarly explored. For steel-arch-steel-girder bridges, in particular this case, joint rigidity of the steel components was found to be dominant, while material properties and section geometries were relatively minor. The updated model was capable of providing more rational elastic responses of the bridge superstructure under normal service conditions as well as hazardous scenarios, and can be used to manage the health condition of the bridge structure.
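
    Of the fitting targets named, the modal assurance criterion is the most compact to state: MAC(φa, φe) = |φaᵀ φe|² / ((φaᵀ φa)(φeᵀ φe)), a normalized correlation between an analytical and an experimentally identified mode shape. A small Python illustration with invented mode shapes:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between two mode shape vectors;
    values near 1 indicate well-correlated modes."""
    numerator = np.abs(phi_a @ phi_e) ** 2
    return numerator / ((phi_a @ phi_a) * (phi_e @ phi_e))

phi_fem = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.00])  # model shape (made up)
phi_amb = np.array([0.0, 0.28, 0.62, 0.79, 0.97, 1.00])  # identified shape (made up)
print(round(mac(phi_fem, phi_amb), 4))
```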

  19. Jet-Boundary and Plan-Form Corrections for Partial-Span Models with Reflection Plane, End Plate, or No End Plate in a Closed Circular Wind Tunnel

    Science.gov (United States)

    1946-06-01

    ... complete-span models. Such models are used to best advantage to determine the aerodynamic characteristics of wings, flaps, lateral-control devices, and ...

  20. Physical modeling of river spanning rock structures: Evaluating interstitial flow, local hydraulics, downstream scour development, and structure stability

    Science.gov (United States)

    Collins, K.L.; Thornton, C.I.; Mefford, B.; Holmquist-Johnson, C. L.

    2009-01-01

    Rock weir and ramp structures uniquely serve a necessary role in river management: to meet water deliveries in an ecologically sound manner. Uses include functioning as low head diversion dams, permitting fish passage, creating habitat diversity, and stabilizing stream banks and profiles. Existing information on design and performance of in-stream rock structures does not provide the guidance necessary to implement repeatable and sustainable construction and retrofit techniques. As widespread use of rock structures increases, the need for reliable design methods with a broad range of applicability at individual sites grows as well. Rigorous laboratory testing programs were implemented at the U.S. Bureau of Reclamation (Reclamation) and at Colorado State University (CSU) as part of a multifaceted research project focused on expanding the current knowledge base and developing design methods to improve the success rate of river spanning rock structures in meeting project goals. Physical modeling at Reclamation is being used to measure, predict, and reduce interstitial flow through rock ramps. CSU is using physical testing to quantify and predict scour development downstream of rock weirs and its impact on the stability of rock structures. © 2009 ASCE.

  1. Assessing Model Characterization of Single Source ...

    Science.gov (United States)

    Aircraft measurements made downwind from specific coal fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion by distance from the source compared with ambient based estimates. The model was less consistent in capturing downwind ambient based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but often was lower than ambient based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources which makes direct comparison with model source contribution challenging. Model source attribution results suggest contribution to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source.

  2. Span of control matters.

    Science.gov (United States)

    Cathcart, Deb; Jeska, Susan; Karnas, Joan; Miller, Sue E; Pechacek, Judy; Rheault, Lolita

    2004-09-01

    Prompted by manager concerns about span of control, a large, integrated health system set out to determine if span of control really mattered. Was there something to it, or was it just an excuse for poor performance? A team of middle managers studied the problem and ultimately demonstrated a strong relationship between span of control and employee engagement. Consequently, it was decided to add 4 management positions to note the effect. One year later, positive changes were observed in employee engagement scores in all 4 areas. This study suggests careful review of manager spans of control to address the untoward effects of large spans of control on employee engagement.

  3. On the size of monotone span programs

    NARCIS (Netherlands)

    Nikov, V.S.; Nikova, S.I.; Preneel, B.; Blundo, C.; Cimato, S.

    2005-01-01

    Span programs provide a linear algebraic model of computation. Monotone span programs (MSP) correspond to linear secret sharing schemes. This paper studies the properties of monotone span programs related to their size. Using the results of van Dijk (connecting codes and MSPs) and a construction for
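
    Since monotone span programs correspond to linear secret sharing schemes, the flavor of the objects whose size is being studied can be shown with the most familiar LSSS, Shamir's threshold scheme. This is a generic textbook sketch in Python, not one of the paper's constructions.

```python
import random

P = 2**127 - 1  # prime modulus; arithmetic is over the field GF(P)
rng = random.Random(0)

def share(secret, k, n):
    """Shamir (k, n) threshold sharing: evaluate a random degree-(k-1)
    polynomial with constant term `secret` at points x = 1..n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, k=3, n=5)
print(reconstruct(shares[:3]), reconstruct(shares[2:]))  # any 3 shares -> 42
```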

  4. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
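
    For orientation, earthquake rates in such source models are commonly anchored to the Gutenberg-Richter magnitude-frequency relation (the NSHM itself combines several branch-weighted rate models, so this is only the basic building block):

        $\log_{10} N(\geq M) = a - bM$

    where $N(\geq M)$ is the annual rate of earthquakes at or above magnitude $M$, $a$ sets the overall productivity, and $b$ (typically near 1) controls the relative rate of large versus small events.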

  5. Learning models for multi-source integration

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, S.; Knoblock, C.A.; Minton, S. [Univ. of Southern California/ISI, Marina del Rey, CA (United States)

    1996-12-31

    Because of the growing number of information sources available through the internet there are many cases in which information needed to solve a problem or answer a question is spread across several information sources. For example, when given two sources, one about comic books and the other about super heroes, you might want to ask the question "Is Spiderman a Marvel Super Hero?" This query accesses both sources; therefore, it is necessary to have information about the relationships of the data within each source and between sources to properly access and integrate the data retrieved. The SIMS information broker captures this type of information in the form of a model. All the information sources map into the model providing the user a single interface to multiple sources.

  6. Photovoltaic sources modeling and emulation

    CERN Document Server

    Piazza, Maria Carmela Di

    2012-01-01

    This book offers an extensive introduction to the modeling of photovoltaic generators and their emulation by means of power electronic converters, and it will aid in understanding and improving the design and setup of new PV plants.
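
    A typical example of the kind of photovoltaic generator model such a book covers (a standard single-diode formulation, assumed here rather than quoted from the text) is:

        $I = I_{ph} - I_0\left[\exp\!\left(\frac{V + I R_s}{n V_T}\right) - 1\right] - \frac{V + I R_s}{R_{sh}}$

    where $I_{ph}$ is the photocurrent, $I_0$ the diode saturation current, $n$ the ideality factor, $V_T$ the thermal voltage, and $R_s$, $R_{sh}$ the series and shunt resistances; emulation then amounts to making a power converter track this implicit I-V curve.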

  7. Decentralized Pricing in Minimum Cost Spanning Trees

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Moulin, Hervé; Østerdal, Lars Peter

    In the minimum cost spanning tree model we consider decentralized pricing rules, i.e. rules that cover at least the efficient cost while the price charged to each user only depends upon his own connection costs. We define a canonical pricing rule and provide two axiomatic characterizations. First, the canonical pricing rule is the smallest among those that improve upon the Stand Alone bound, and are either superadditive or piece-wise linear in connection costs. Our second, direct characterization relies on two simple properties highlighting the special role of the source cost.
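
    As context for the efficient-cost benchmark that such pricing rules must jointly cover, here is a minimal sketch (not from the paper; the graph and source node 0 are a hypothetical example) of computing the minimum spanning tree cost with Kruskal's algorithm:

        # Kruskal's algorithm: the "efficient cost" that user prices must jointly cover.
        def mst_cost(n, edges):
            """n nodes labeled 0..n-1 (0 = source); edges = [(cost, u, v), ...]."""
            parent = list(range(n))

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]  # path halving
                    x = parent[x]
                return x

            total = 0
            for cost, u, v in sorted(edges):
                ru, rv = find(u), find(v)
                if ru != rv:                       # edge joins two components
                    parent[ru] = rv
                    total += cost
            return total

        # Hypothetical example: 3 users (1-3) and a source (0).
        print(mst_cost(4, [(4, 0, 1), (3, 0, 2), (5, 0, 3), (1, 1, 2), (2, 2, 3)]))  # -> 6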

  8. A recursive method for calculating the total number of spanning trees and its applications in self-similar small-world scale-free network models

    Science.gov (United States)

    Ma, Fei; Su, Jing; Yao, Bing

    2018-05-01

    The problem of determining and calculating the number of spanning trees of any finite graph (model) is a great challenge, and has been studied in various fields, such as discrete applied mathematics, theoretical computer science, physics and chemistry. In this paper, firstly, since many real-life systems and artificial networks are built from all kinds of functions of and combinations among simpler and smaller elements (components), we discuss some helpful network operations, including link operations and merge operations, for designing more realistic and complicated network models. Secondly, we present a method for computing the total number of spanning trees. As an accessible example, we apply this method to spaces of trees and cycles respectively, and our results suggest that it is indeed well suited to such models. In order to reflect wider practical applications and potential theoretical significance, we study the enumeration method in some existing scale-free network models. On the other hand, we set up a class of new models displaying the scale-free feature, that is to say, following a power-law degree distribution P(k) ~ k^(-γ), where γ is the degree exponent. Based on detailed calculation, the degree exponent γ of our deterministic scale-free models satisfies γ > 3. In the rest of our discussion, we not only calculate analytically the solutions for average path length, which indicates that our models have the small-world property prevalent in many complex systems, but also derive the number of spanning trees by means of the recursive method described in this paper, which shows that our method is convenient for studying these models.
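
    The paper's recursive method is specific to its self-similar models; as a general-purpose baseline (not the authors' method), the number of spanning trees of any finite connected graph follows from Kirchhoff's matrix-tree theorem as any cofactor of the graph Laplacian:

        import numpy as np

        def count_spanning_trees(adj):
            # adj: symmetric 0/1 adjacency matrix of a connected graph
            a = np.asarray(adj, dtype=float)
            lap = np.diag(a.sum(axis=1)) - a          # graph Laplacian L = D - A
            minor = lap[1:, 1:]                       # delete one row and column
            return int(round(np.linalg.det(minor)))  # any cofactor of L counts spanning trees

        # Example: the 4-cycle C4 has exactly 4 spanning trees.
        c4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
        print(count_spanning_trees(c4))  # -> 4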

  9. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  10. An analytic uranium sources model

    International Nuclear Information System (INIS)

    Singer, C.E.

    2001-01-01

    This document presents a method for estimating uranium resources as a continuous function of extraction costs and describing the uncertainty in the resulting fit. The estimated functions provide convenient extrapolations of currently available data on uranium extraction cost and can be used to predict the effect of resource depletion on future uranium supply costs. As such, they are a useful input for economic models of the nuclear energy sector. The method described here pays careful attention to minimizing built-in biases in the fitting procedure and defines ways to describe the uncertainty in the resulting fits in order to render the procedure and its results useful to the widest possible variety of potential users. (author)

  11. Life Span Developmental Approach

    OpenAIRE

    Ali Eryilmaz

    2011-01-01

    The Life Span Developmental Approach examines the development of individuals from birth to death. The life span developmental approach is a multi-disciplinary approach related to disciplines like psychology, psychiatry, sociology, anthropology and geriatrics, reflecting the fact that development is not completed in adulthood but continues throughout the life course. Development is a complex process that also encompasses dying and death. This approach carefully investigates the development of...

  12. Lessons in the Design and Characterization Testing of the Semi-Span Super-Sonic Transport (S4T) Wind-Tunnel Model

    Science.gov (United States)

    2012-01-01

    This paper focuses on some of the more challenging design processes and characterization tests of the Semi-Span Super-Sonic Transport (S4T)-Active Controls Testbed (ACT). The model was successfully tested in four entries in the National Aeronautics and Space Administration Langley Transonic Dynamics Tunnel to satisfy the goals and objectives of the Fundamental Aeronautics Program Supersonic Project Aero-Propulso-Servo-Elastic effort. Due to the complexity of the S4T-ACT, only a small sample of the technical challenges for designing and characterizing the model will be presented. Specifically, the challenges encountered in designing the model include scaling the Technology Concept Airplane to model scale, designing the model fuselage, aileron actuator, and engine pylons. Characterization tests included full model ground vibration tests, wing stiffness measurements, geometry measurements, proof load testing, and measurement of fuselage static and dynamic properties.

  13. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. Understandings of both the physics and the mathematical formulation of these sources are essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.
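
    As a concrete illustration of the simplified heat-source models this section surveys (an assumed textbook example, not a formula taken from the report), a Gaussian surface flux for arc welding can be written as:

        $q(r) = \frac{\eta P}{2\pi\sigma_q^2} \exp\!\left(-\frac{r^2}{2\sigma_q^2}\right)$

    where $\eta P$ is the absorbed arc power, $\sigma_q$ sets the width of the distribution, and $r$ is the radial distance from the arc axis; integrating over the plane recovers the total input power $\eta P$. Detailed models replace this with volumetric distributions such as double-ellipsoid sources.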

  14. Balmorel open source energy system model

    DEFF Research Database (Denmark)

    Wiese, Frauke; Bramstoft, Rasmus; Koduvere, Hardi

    2018-01-01

    As the world progresses towards a cleaner energy future with more variable renewable energy sources, energy system models are required to deal with new challenges. This article describes design, development and applications of the open source energy system model Balmorel, which is a result of a long and fruitful cooperation between public and private institutions within energy system research and analysis. The purpose of the article is to explain the modelling approach, to highlight strengths and challenges of the chosen approach, to create awareness about the possible applications of Balmorel as well as to inspire new model developments and encourage new users to join the community. Some of the key strengths of the model are the flexible handling of the time and space dimensions and the combination of operation and investment optimisation. Its open source character enables diverse...

  15. Santa Ana Winds of Southern California: Their Climatology and Variability Spanning 6.5 Decades from Regional Dynamical Modelling

    Science.gov (United States)

    Guzman-Morales, J.; Gershunov, A.

    2015-12-01

    Santa Ana Winds (SAWs) are an integral feature of the regional climate of the Southern California/Northern Baja California region. In spite of their tremendous episodic impacts on the health, economy and mood of the region, the climate-scale behavior of SAWs is poorly understood. In the present work, we identify SAWs in a mesoscale dynamical downscaling of a global reanalysis product and construct an hourly SAW catalogue spanning 65 years. We describe the long-term SAW climatology at relevant time-space resolutions, i.e., we develop local and regional SAW indices and analyse their variability on hourly, daily, annual, and multi-decadal timescales. Local and regional SAW indices are validated with available anemometer observations. Characteristic behaviors are revealed, e.g. the SAW intensity-duration relationship. At interdecadal time scales, we find that seasonal SAW activity is sensitive to prominent large-scale low-frequency modes of climate variability rooted in the tropical and north Pacific ocean-atmosphere system that are also known to affect the hydroclimate of this region. Lastly, we do not find any long-term trend in SAW frequency and intensity as previously reported. Instead, we identify a significant long-term trend in SAW behavior whereby the contribution of extreme SAW events to total seasonal SAW activity has been increasing at the expense of moderate events. These findings motivate further investigation of SAW evolution in a future climate and its impact on wildfires.

  16. The Influence On Factors In Attitudes Toward Acceptance Of The Information System Using Technology Acceptance Model TAM Case Study SPAN System In Indonesia

    Directory of Open Access Journals (Sweden)

    Donny Maha Putra

    2015-08-01

    Theoretically and practically, the Technology Acceptance Model (TAM) is considered the most appropriate model for explaining how users accept a system. This study aimed to analyze the factors that influence attitudes towards the acceptance of the Sistem Perbendaharaan Anggaran Negara (SPAN) using the TAM approach. The research aims to determine user attitudes during the transition from the legacy system to the new system, a process that created adaptation conflicts for many users. On this basis, a theoretical model was proposed and its hypotheses were tested using the Structural Equation Model (SEM), with LISREL as the analysis tool. The research was conducted in all offices of the DG of Treasury of the Ministry of Finance, with 210 respondents chosen at random to represent each office. The results of this study support 4 of the 8 hypotheses, namely the relationships between (a) negative affect and result demonstrability, (b) computer self-efficacy and output quality, (c) computer self-efficacy and perceived ease of use, and (d) perceived ease of use and perceived usefulness. Overall, the results indicate that the SPAN system as applied in the Ministry of Finance of Indonesia is acceptable to its users.

  17. Faster universal modeling for two source classes

    NARCIS (Netherlands)

    Nowbakht, A.; Willems, F.M.J.; Macq, B.; Quisquater, J.-J.

    2002-01-01

    The Universal Modeling algorithms proposed in [2] for two general classes of finite-context sources are reviewed. The above methods were constructed by viewing a model structure as a partition of the context space and realizing that a partition can be reached through successive splits. Here we start

  18. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    The paper presents a system-level design methodology, called ForSyDe. ForSyDe is available under the open-source approach, which allows small and medium enterprises (SMEs) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system-level modeling of a simple industrial use case, and we...

  19. Probabilistic forward model for electroencephalography source analysis

    International Nuclear Information System (INIS)

    Plis, Sergey M; George, John S; Jun, Sung C; Ranken, Doug M; Volegov, Petr L; Schmidt, David M

    2007-01-01

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates
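
    To illustrate the general idea of propagating conductivity uncertainty into the forward field (a toy sketch under strong assumptions, not the paper's formalism: a single dipole in an infinite homogeneous conductor rather than a realistic head model):

        import numpy as np

        rng = np.random.default_rng(0)

        def dipole_potential(p, r_dip, r_obs, sigma):
            """Potential of a current dipole p at r_dip in an infinite
            homogeneous medium of conductivity sigma (point observation)."""
            d = r_obs - r_dip
            dist = np.linalg.norm(d)
            return (p @ d) / (4.0 * np.pi * sigma * dist**3)

        p = np.array([0.0, 0.0, 1e-8])      # dipole moment, A*m (hypothetical)
        r_dip = np.array([0.0, 0.0, 0.07])  # source location, m
        r_obs = np.array([0.0, 0.0, 0.09])  # electrode location, m

        # Propagate a lognormal uncertainty in conductivity into the forward field.
        sigmas = rng.lognormal(mean=np.log(0.33), sigma=0.2, size=10000)  # S/m
        v = np.array([dipole_potential(p, r_dip, r_obs, s) for s in sigmas])
        print(v.mean(), v.std())  # spread of predicted potentials due to conductivity uncertainty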

  20. A model for superluminal radio sources

    International Nuclear Information System (INIS)

    Milgrom, M.; Bahcall, J.N.

    1977-01-01

    A geometrical model for superluminal radio sources is described. Six predictions that can be tested by observations are summarized. The results are in agreement with all the available observations. In this model, the Hubble constant is the only numerical parameter that is important in interpreting the observed rates of change of angular separations for small redshifts. The available observations imply that H0 is less than 55 km/s/Mpc if the model is correct. (author)
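
    For context, the standard relativistic kinematics behind apparent superluminal motion (given for orientation; the paper's specific geometrical model adds further structure) is:

        $\beta_{\mathrm{app}} = \frac{\beta\sin\theta}{1 - \beta\cos\theta}$

    where $\beta c$ is the source speed and $\theta$ its angle to the line of sight; $\beta_{\mathrm{app}}$ exceeds 1 for $\beta$ near 1 and small $\theta$, peaking at $\beta\gamma$ when $\cos\theta = \beta$.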

  1. Life Span Developmental Approach

    Directory of Open Access Journals (Sweden)

    Ali Eryilmaz

    2011-03-01

    The Life Span Developmental Approach examines the development of individuals from birth to death. The life span developmental approach is a multi-disciplinary approach related to disciplines like psychology, psychiatry, sociology, anthropology and geriatrics, reflecting the fact that development is not completed in adulthood but continues throughout the life course. Development is a complex process that also encompasses dying and death. This approach carefully investigates the development of individuals with respect to developmental stages. This developmental approach suggests that scientific disciplines should not explain developmental facts only with age changes. Along with aging, cognitive, biological, and socioemotional development throughout life should also be considered to provide a reasonable and acceptable context, guideposts, and reasonable expectations for the person. The life span developmental approach deals with three important subjects: nature versus nurture, continuity versus discontinuity, and change versus stability. Researchers using the life span developmental approach gather and produce knowledge on these three most important domains of individual development with their unique scientific methodology.

  2. Air quality dispersion models from energy sources

    International Nuclear Information System (INIS)

    Lazarevska, Ana

    1996-01-01

    Along with the continuing development of new air quality models that cover more complex problems, the Clean Air Act, legislated by the US Congress, encouraged consistency and standardization of air quality model applications. As a result, the Guidelines on Air Quality Models were published, which are regularly reviewed by the Office of Air Quality Planning and Standards, EPA. These guidelines provide a basis for estimating the air quality concentrations used in assessing control strategies as well as defining emission limits. This paper presents a review and analysis of the recent versions of the models: Simple Terrain Stationary Source Model; Complex Terrain Dispersion Model; Ozone, Carbon Monoxide and Nitrogen Dioxide Models; Long Range Transport Model; Other Phenomenon Models: Fugitive Dust/Fugitive Emissions, Particulate Matter, Lead, Air Pathway Analyses - Air Toxics as well as Hazardous Waste. 8 refs., 4 tabs., 2 ills
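
    The classical building block behind the steady-state stationary-source models listed above is the Gaussian plume equation (quoted as the textbook form, not as the exact formulation of any one EPA model):

        $C(x,y,z) = \frac{Q}{2\pi u \sigma_y \sigma_z} \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right) \left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right) + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]$

    where $Q$ is the emission rate, $u$ the wind speed, $H$ the effective stack height, and $\sigma_y(x)$, $\sigma_z(x)$ stability-dependent dispersion coefficients; the second exponential accounts for reflection at the ground.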

  3. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  4. The Life Span Dwelling

    OpenAIRE

    Hans-Peter Hebensperger-Hüther; Gabriele Franger-Huhle

    2014-01-01

    The paper presents the findings from a survey of 10 different experimental housing projects in Bavaria. In 2005 students of architecture and students of social work at the University of Applied Science in Coburg approached the topic of “Life Span Dwelling” using interdisciplinary research methods. The scope of the research ranges from urban planning concepts to common spaces in the different neighborhoods, documenting user satisfaction with the individual unit and feasibility of rooms offered...

  5. Span efficiency in hawkmoths.

    Science.gov (United States)

    Henningsson, Per; Bomphrey, Richard J

    2013-07-06

    Flight in animals is the result of aerodynamic forces generated as flight muscles drive the wings through air. Aerial performance is therefore limited by the efficiency with which momentum is imparted to the air, a property that can be measured using modern techniques. We measured the induced flow fields around six hawkmoth species flying tethered in a wind tunnel to assess span efficiency, ei, and from these measurements, determined the morphological and kinematic characters that predict efficient flight. The species were selected to represent a range in wingspan from 40 to 110 mm (2.75 times) and in mass from 0.2 to 1.5 g (7.5 times) but they were similar in their overall shape and their ecology. From high spatio-temporal resolution quantitative wake images, we extracted time-resolved downwash distributions behind the hawkmoths, calculating instantaneous values of ei throughout the wingbeat cycle as well as multi-wingbeat averages. Span efficiency correlated positively with normalized lift and negatively with advance ratio. Average span efficiencies for the moths ranged from 0.31 to 0.60 showing that the standard generic value of 0.83 used in previous studies of animal flight is not a suitable approximation of aerodynamic performance in insects.
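
    One common operational definition of span efficiency from a measured downwash distribution $w(y)$ over span $b$ (stated for orientation; the paper's exact estimator may differ in detail) is:

        $e_i = \frac{\left(\int_{-b/2}^{b/2} w(y)\,dy\right)^{2}}{b\int_{-b/2}^{b/2} w(y)^{2}\,dy}$

    which equals 1 for the aerodynamically ideal uniform downwash and falls below 1 as the distribution becomes less even, by the Cauchy-Schwarz inequality.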

  6. Social cognitive model of career self-management: toward a unifying view of adaptive career behavior across the life span.

    Science.gov (United States)

    Lent, Robert W; Brown, Steven D

    2013-10-01

    Social cognitive career theory (SCCT) currently consists of 4 overlapping, segmental models aimed at understanding educational and occupational interest development, choice-making, performance and persistence, and satisfaction/well-being. To this point, the theory has emphasized content aspects of career behavior, for instance, prediction of the types of activities, school subjects, or career fields that form the basis for people's educational/vocational interests and choice paths. However, SCCT may also lend itself to study of many process aspects of career behavior, including such issues as how people manage normative tasks and cope with the myriad challenges involved in career preparation, entry, adjustment, and change, regardless of the specific educational and occupational fields they inhabit. Such a process focus can augment and considerably expand the range of the dependent variables for which SCCT was initially designed. Building on SCCT's existing models, we present a social cognitive model of career self-management and offer examples of the adaptive, process behaviors to which it can be applied (e.g., career decision making/exploration, job searching, career advancement, negotiation of work transitions and multiple roles).

  7. Using Uncertainty Quantification to Guide Development and Improvements of a Regional-Scale Model of the Coastal Lowlands Aquifer System Spanning Texas, Louisiana, Mississippi, Alabama and Florida

    Science.gov (United States)

    Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.

    2017-12-01

    Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.
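
    First-Order Second-Moment (FOSM) analysis, mentioned above, propagates parameter covariance through a linearized model; a minimal sketch (with a hypothetical toy Jacobian, not the CLAS model) is:

        import numpy as np

        def fosm_forecast_variance(jac, param_cov):
            """FOSM propagation: for a forecast s = f(p) linearized as J,
            Var(s) ~= J C J^T."""
            return jac @ param_cov @ jac.T

        # Hypothetical: 3 uncertain parameters (e.g., log-K zones), 2 forecasts.
        J = np.array([[0.8, -0.2, 0.1],
                      [0.3,  0.5, -0.4]])   # sensitivities d(forecast)/d(param)
        C = np.diag([0.25, 0.10, 0.40])     # prior parameter variances
        print(np.sqrt(np.diag(fosm_forecast_variance(J, C))))  # forecast std devs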

  8. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  9. Open source integrated modeling environment Delta Shell

    Science.gov (United States)

    Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.

    2012-04-01

    In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remain challenging tasks. The integrated modelling environment Delta Shell simplifies these tasks. The software components of Delta Shell are easy to reuse separately from each other or as part of an integrated environment that can run in a command-line or a graphical user interface mode. Most components of Delta Shell are developed using the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models, from the end-user and developer perspectives. The first example shows coupling of rainfall-runoff, river flow and run-time control models. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is also available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.

  10. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  11. Dual boundary spanning

    DEFF Research Database (Denmark)

    Li-Ying, Jason

    2016-01-01

    The extant literature falls short in understanding the openness of innovation and the different pathways along which internal and external knowledge resources can be combined. This study proposes a unique typology for outside-in innovations based on two distinct ways of boundary spanning: whether an innovation idea is created internally or externally and whether an innovation process relies on external knowledge resources. This yields four possible types of innovation, which represent the nuanced variation of outside-in innovations. Using historical data from Canada for 1945...

  12. Synergistic interactions between Drosophila orthologues of genes spanned by de novo human CNVs support multiple-hit models of autism.

    Science.gov (United States)

    Grice, Stuart J; Liu, Ji-Long; Webber, Caleb

    2015-03-01

    Autism spectrum disorders (ASDs) are highly heritable and characterised by deficits in social interaction and communication, as well as restricted and repetitive behaviours. Although a number of highly penetrant ASD gene variants have been identified, there is growing evidence to support a causal role for combinatorial effects arising from the contributions of multiple loci. By examining synaptic and circadian neurological phenotypes resulting from the dosage variants of unique human:fly orthologues in Drosophila, we observe numerous synergistic interactions between pairs of informatically-identified candidate genes whose orthologues are jointly affected by large de novo copy number variants (CNVs). These CNVs were found in the genomes of individuals with autism, including a patient carrying a 22q11.2 deletion. We first demonstrate that dosage alterations of the unique Drosophila orthologues of candidate genes from de novo CNVs that harbour only a single candidate gene display neurological defects similar to those previously reported in Drosophila models of ASD-associated variants. We then considered pairwise dosage changes within the set of orthologues of candidate genes that were affected by the same single human de novo CNV. For three of four CNVs with complete orthologous relationships, we observed significant synergistic effects following the simultaneous dosage change of gene pairs drawn from a single CNV. The phenotypic variation observed at the Drosophila synapse that results from these interacting genetic variants supports a concordant phenotypic outcome across all interacting gene pairs following the direction of human gene copy number change. We observe both specificity and transitivity between interactors, both within and between CNV candidate gene sets, supporting shared and distinct genetic aetiologies. We then show that different interactions affect divergent synaptic processes, demonstrating distinct molecular aetiologies. Our study illustrates

  13. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.
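
    To make the idea of a Markov source concrete, here is a toy finite-state source (hypothetical states and probabilities; far simpler than the paper's model of the Adobe Sonata symbol set):

        import random

        # Each state lists (next_state, emitted_symbol) options with weights.
        TRANSITIONS = {
            "start": ([("note", "pitch"), ("rest", "rest")], [0.7, 0.3]),
            "note":  ([("note", "pitch"), ("rest", "rest"), ("end", None)], [0.5, 0.2, 0.3]),
            "rest":  ([("note", "pitch"), ("end", None)], [0.8, 0.2]),
        }

        def emit(state="start"):
            out = []
            while state != "end":
                options, weights = TRANSITIONS[state]
                state, symbol = random.choices(options, weights=weights)[0]
                if symbol is not None:
                    out.append(symbol)
            return out

        random.seed(0)
        print(emit())  # a random message string, e.g. ['pitch', 'pitch', 'rest', ...]

    Decoding inverts this generative view: given an image, one searches for the message string whose synthesis best explains the observations.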

  14. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies on renewable energy attract more and more attention. The present paper presents different mathematical models related to different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems working in geographical and meteorological conditions specific to the central part of the Transylvania region is also presented. The conclusions based on the validation of such models are also shown.
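
    A typical building block for the wind part of such models (a standard form assumed here, not quoted from the paper) is the turbine power equation:

        $P = \frac{1}{2}\rho A v^{3} C_p(\lambda, \beta)$

    where $\rho$ is air density, $A$ the rotor swept area, $v$ the wind speed, and $C_p$ the power coefficient as a function of tip-speed ratio $\lambda$ and blade pitch $\beta$, bounded by the Betz limit $C_p \leq 16/27 \approx 0.593$.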

  15. Wind tunnel investigation of a USB-STOL transport semi-span model. 2; CAD sekkei ni yoru USB-STOL ki hansai mokei no fudo shiken. 2

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, H; Okuyama, M; Fujieda, H; Fujita, T; Iwasaki, A [National Aerospace Laboratory, Tokyo (Japan)

    1994-11-01

    The Quiet Short Take-Off and Landing (QSTOL) Experimental Aircraft "ASKA" has been researched and developed by the National Aerospace Laboratory (NAL). The "ASKA" was based upon the airframe of the home-produced C-1 tactical transport, which was modified into the Upper Surface Blowing (USB) powered high-lift STOL aircraft. The wing configuration, however, was not changed. Therefore, this Experimental Aircraft does not always have the optimum configuration of a USB-type aircraft. So the authors tried to improve the aerodynamic characteristics of the STOL Aircraft. This paper describes the investigations which have been conducted to improve the aerodynamic characteristics of a subsonic jet transport semi-span model with an Upper Surface Blown Flap system which has been newly designed using the NAL STOL-CAD program. The model had an 8.2° swept wing of aspect ratio 10.0 and four turbofan engines with short USB nozzles. The tests were conducted in the NAL 2 m × 2 m Gust Wind Tunnel with closed section, and results were obtained for several flap and slat deflections at jet momentum coefficients from 0 to 1.85. Compared with the aerodynamic characteristics of the "ASKA" model, we determined that the airframe weight can be reduced and the aerodynamic characteristics can be improved significantly. 14 refs., 44 figs.

  16. Analytical and Experimental Evaluation of Digital Control Systems for the Semi-Span Super-Sonic Transport (S4T) Wind Tunnel Model

    Science.gov (United States)

    Wieseman, Carol D.; Christhilf, David; Perry, Boyd, III

    2012-01-01

    An important objective of the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model program was the demonstration of Flutter Suppression (FS), Gust Load Alleviation (GLA), and Ride Quality Enhancement (RQE). It was critical to evaluate the stability and robustness of these control laws analytically before testing them and experimentally while testing them to ensure the safety of the model and the wind tunnel. MATLAB-based software was applied to evaluate the performance of closed-loop systems in terms of stability and robustness. Existing software tools were extended to use analytical representations of the S4T and the control laws to analyze and evaluate the control laws prior to testing. Lessons were learned about the complex wind-tunnel model and experimental testing. The open-loop flutter boundary was determined from the closed-loop systems. A MATLAB/Simulink simulation developed under the program is available for future work to improve the CPE process. This paper is one of a series that comprise a special session, which summarizes the S4T wind-tunnel program.

  17. Characteristics of Control Laws Tested on the Semi-Span Super-Sonic Transport (S4T) Wind-Tunnel Model

    Science.gov (United States)

    Christhilf, David M.; Moulin, Boris; Ritz, Erich; Chen, P. C.; Roughen, Kevin M.; Perry, Boyd

    2012-01-01

    The Semi-Span Supersonic Transport (S4T) is an aeroelastically scaled wind-tunnel model built to test active controls concepts for large flexible supersonic aircraft in the transonic flight regime. It is one of several models constructed in the 1990s as part of the High Speed Research (HSR) Program. Control laws were developed for the S4T by M4 Engineering, Inc. and by Zona Technologies, Inc. under NASA Research Announcement (NRA) contracts. The model was tested in the NASA-Langley Transonic Dynamics Tunnel (TDT) four times from 2007 to 2010. The first two tests were primarily for plant identification. The third entry was used for testing control laws for Ride Quality Enhancement, Gust Load Alleviation, and Flutter Suppression (FS). Whereas the third entry only tested FS subcritically, the fourth test demonstrated closed-loop operation above the open-loop flutter boundary. The results of the third entry are reported elsewhere. This paper reports on flutter suppression results from the fourth wind-tunnel test. Flutter suppression is seen as a way to provide stability margins while flying at transonic flight conditions without penalizing the primary supersonic cruise design condition. An account is given of how Controller Performance Evaluation (CPE) singular value plots were interpreted with regard to progressing open- or closed-loop to higher dynamic pressures during testing.

  18. Modeling a neutron rich nuclei source

    Energy Technology Data Exchange (ETDEWEB)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J. [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Mirea, M. [Institute of Physics and Nuclear Engineering, Tandem Lab., Bucharest (Romania)

    2000-07-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (author)

  19. Modeling a neutron rich nuclei source

    International Nuclear Information System (INIS)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J.; Mirea, M.

    2000-01-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (authors)

  20. Data analysis and source modelling for LISA

    International Nuclear Information System (INIS)

    Shang, Yu

    2014-01-01

    Gravitational waves (GWs) are one of the most important predictions of general relativity. Besides indirect evidence for the existence of GWs, there are already several ground-based detectors (such as LIGO and GEO) and a planned future space mission (LISA) that aim to detect GWs directly. GWs carry a large amount of information about their sources; extracting this information can reveal the physical properties of the sources and even open a new window for understanding the Universe. Hence, GW data analysis will be a challenging task in the search for GWs. In this thesis, I present two works on data analysis for LISA. In the first work, we introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the Mock LISA Data Challenge. We have found all five sources present in the data and recovered the coalescence time, chirp mass, mass ratio and sky location with reasonable accuracy. As for the orbital angular momentum and two spins of the black holes, we have found a large number of widely separated modes in the parameter space with similar maximum likelihood values. The performance of this method is comparable, if not superior, to already existing algorithms. In the second work, we introduce a new phenomenological waveform model for extreme-mass-ratio inspiral (EMRI) systems. This waveform consists of a set of harmonics with constant amplitude and slowly evolving phase, which we decompose in a Taylor series. We use these phenomenological templates to detect the signal in the simulated data and then, assuming a particular EMRI model, estimate the physical parameters of the binary with high precision. The results show that our phenomenological waveform is well suited to the data analysis of EMRI signals.

  1. Visualizing Flutter Mechanism as Traveling Wave Through Animation of Simulation Results for the Semi-Span Super-Sonic Transport Wind-Tunnel Model

    Science.gov (United States)

    Christhilf, David M.

    2014-01-01

    It has long been recognized that frequency and phasing of structural modes in the presence of airflow play a fundamental role in the occurrence of flutter. Animation of simulation results for the long, slender Semi-Span Super-Sonic Transport (S4T) wind-tunnel model demonstrates that, for the case of mass-ballasted nacelles, the flutter mode can be described as a traveling wave propagating downstream. Such a characterization provides certain insights, such as (1) describing the means by which energy is transferred from the airflow to the structure, (2) identifying airspeed as an upper limit for speed of wave propagation, (3) providing an interpretation for a companion mode that coalesces in frequency with the flutter mode but becomes very well damped, (4) providing an explanation for bursts of response to uniform turbulence, and (5) providing an explanation for loss of low frequency (lead) phase margin with increases in dynamic pressure (at constant Mach number) for feedback systems that use sensors located upstream from active control surfaces. Results from simulation animation, simplified modeling, and wind-tunnel testing are presented for comparison. The simulation animation was generated using double time-integration in Simulink of vertical accelerometer signals distributed over wing and fuselage, along with time histories for actuated control surfaces. Crossing points for a zero-elevation reference plane were tracked along a network of lines connecting the accelerometer locations. Accelerometer signals were used in preference to modal displacement state variables in anticipation that the technique could be used to animate motion of the actual wind-tunnel model using data acquired during testing. Double integration of wind-tunnel accelerometer signals introduced severe drift even with removal of both position and rate biases such that the technique does not currently work. Using wind-tunnel data to drive a Kalman filter based upon fitting coefficients to
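
    The drift problem mentioned above is easy to reproduce: even a tiny residual bias integrates once into a ramp in rate and twice into a quadratic in position. A minimal sketch (synthetic data, not S4T signals):

        import numpy as np

        # Synthetic illustration: double integration turns a tiny accelerometer
        # bias into a quadratically growing position error.
        dt = 0.001
        t = np.arange(0.0, 10.0, dt)
        true_accel = np.cos(2 * np.pi * 2.0 * t)  # 2 Hz motion: bounded displacement
        measured = true_accel + 1e-3              # 0.001 m/s^2 residual bias

        vel = np.cumsum(measured) * dt            # first integration
        pos = np.cumsum(vel) * dt                 # second integration

        # The motion itself stays bounded, but the bias alone contributes
        # 0.5 * b * t^2, i.e. about 5 cm of spurious displacement after 10 s.
        print(pos[-1], 0.5 * 1e-3 * t[-1]**2)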

  2. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  3. Towards a Unified Source-Propagation Model of Cosmic Rays

    Science.gov (United States)

    Taylor, M.; Molla, M.

    2010-07-01

    It is well known that the cosmic ray energy spectrum is multifractal: analysis of cosmic ray fluxes as a function of energy reveals a first "knee" slightly below 10^16 eV, a second knee slightly below 10^18 eV and an "ankle" close to 10^19 eV. The behaviour of the highest energy cosmic rays around and above the ankle is still a mystery and precludes the development of a unified source-propagation model of cosmic rays from their source origin to Earth. A variety of acceleration and propagation mechanisms have been proposed to explain different parts of the spectrum, the most famous of course being Fermi acceleration in magnetised turbulent plasmas (Fermi 1949). Many others have been proposed for energies at and below the first knee (Peters & Cimento (1961); Lagage & Cesarsky (1983); Drury et al. (1984); Wdowczyk & Wolfendale (1984); Ptuskin et al. (1993); Dova et al. (0000); Horandel et al. (2002); Axford (1991)) as well as at higher energies between the first knee and the ankle (Nagano & Watson (2000); Bhattacharjee & Sigl (2000); Malkov & Drury (2001)). The recent fit of most of the cosmic ray spectrum up to the ankle using non-extensive statistical mechanics (NESM) (Tsallis et al. (2003)) provides what may be the strongest evidence for a source-propagation system deviating significantly from Boltzmann statistics. As Tsallis has shown (Tsallis et al. (2003)), the knees appear as crossovers between two fractal-like thermal regimes. In this work, we have developed a generalisation of the second order NESM model (Tsallis et al. (2003)) to higher orders and we have fit the complete spectrum including the ankle with third order NESM. We find that, towards the GZK limit, a new mechanism comes into play. Surprisingly, it also presents as a modulation akin to the solar modulation of cosmic rays observed in our own local neighbourhood. We propose that this modulation occurs at the source and is possibly due to processes in the shell of the originating supernova. We...
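
    The NESM fits referenced here are built on the Tsallis q-exponential (the standard form is quoted for orientation; the paper's exact parameterization may differ):

        $\Phi(E) \propto \left[1 + (q-1)\frac{E}{E_0}\right]^{-\frac{1}{q-1}}$

    which recovers the Boltzmann factor $e^{-E/E_0}$ as $q \to 1$ and gives a power-law tail of index $1/(q-1)$ for $q > 1$, so that crossovers between regimes with different $q$ appear as the knees of the spectrum.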

  4. Integrated source-risk model for radon: A definition study

    International Nuclear Information System (INIS)

    Laheij, G.M.H.; Aldenkamp, F.J.; Stoop, P.

    1993-10-01

    The purpose of a source-risk model is to support policy making on radon mitigation by comparing the effects of various policy options and to enable optimization of countermeasures applied to different parts of the source-risk chain. There are several advantages to developing and using a source-risk model: risk calculations are standardized; the effects of measures applied to different parts of the source-risk chain can be better compared because interactions are included; and sensitivity analyses can be used to determine the most important parameters within the total source-risk chain. After an inventory of processes and sources to be included in the source-risk chain, the models presently available in the Netherlands are investigated. The models were screened for completeness, validation and operational status. The investigation made clear that, by choosing for each part of the source-risk chain the most convenient model, a source-risk chain model for radon may be realized. However, the calculation of dose from the radon concentrations and the status of the validation of most models should be improved. Calculations with the proposed source-risk model will, at the moment, give estimates with a large uncertainty. For further development of the source-risk model, an interaction between the source-risk model and experimental research is recommended. Organisational forms of the source-risk model are discussed. A source-risk model in which only simple models are included is also recommended. The other models are operated and administrated by the model owners. The model owners execute their models for a combination of input parameters. The output of the models is stored in a database which will be used for calculations with the source-risk model. 5 figs., 15 tabs., 7 appendices, 14 refs

  5. Long Span Bridges in Scandinavia

    DEFF Research Database (Denmark)

    Gimsing, Niels Jørgen

    1998-01-01

    The first Scandinavian bridge with a span of more than 500 m was the Lillebælt Suspension Bridge, opened to traffic in 1970. At the end of the 20th century the longest span of any European bridge is found in the Storebælt East Bridge, with a main span of 1624 m. Also the third longest span in Europe is found in Scandinavia - the 1210 m span of the Höga Kusten Bridge in Sweden. The Kvarnsund Bridge in Norway was at its completion in 1991 the longest cable-stayed bridge in the world, and its span of 530 m is still the longest for cable-stayed bridges in concrete. The Øresund Bridge with its span of 490...

  6. An open source business model for malaria.

    Science.gov (United States)

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assess research articles, patents, clinical trials and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach be taken by making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria

  7. An open source business model for malaria.

    Directory of Open Access Journals (Sweden)

    Christine Årdal

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assess research articles, patents, clinical trials and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach be taken by making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S. President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related

  8. About Block Dynamic Model of Earthquake Source.

    Science.gov (United States)

    Gusev, G. A.; Gufeld, I. L.

    One may note the absence of progress in earthquake prediction research. Short-term prediction (on a diurnal scale, with the localisation also predicted) has practical meaning. The failure is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in the faults. Geological and geophysical monitoring provides the basis for treating the geological medium as an open, dissipative block system with limit energy saturation. Variations of the volume stressed state close to critical states are associated with the interaction of the inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, which is more pronounced in the faults. In the background state, small blocks of the fault medium produce the sliding of great blocks in the faults, but under considerable variations of the ascending gas streams the formation of bound chains of small blocks is possible, so that a bound state of great blocks may result (an earthquake source). Using these notions, we recently proposed a dynamical earthquake source model based on a generalized chain of non-linear coupled oscillators of the Fermi-Pasta-Ulam (FPU) type. The generalization concerns the chain's inhomogeneity and the different external actions imitating physical processes in the real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered, which permitted study of the FPU recurrence (return to the initial state). Probabilistic properties of the quasi-periodic movement were found. The problem of chain decay due to non-linearity and external perturbations was posed, and the thresholds and the dependence of the lifetime of the chain were studied. Great fluctuations of the lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strong dissipation case, when the oscillation movements are suppressed, specific effects are discovered. For noise action and constantly arising

  9. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of an ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under the different output service schemes.
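
    To make the setting concrete, here is a minimal discrete-time sketch of such a multiplexer (an illustration under assumed parameters only; the paper's actual output service schemes are not reproduced): N Bernoulli sources feed a single queue that serves one cell per slot.

        import random

        def simulate(n_sources=8, p=0.1, slots=100_000, seed=1):
            """Discrete-time ATM multiplexer: each source emits one cell per
            slot with probability p; the output line serves one cell per slot."""
            rng = random.Random(seed)
            queue = 0
            backlog = 0
            for _ in range(slots):
                queue += sum(rng.random() < p for _ in range(n_sources))
                if queue > 0:              # serve at most one cell per slot
                    queue -= 1
                backlog += queue
            return backlog / slots         # time-averaged queue length

        print(simulate())                  # offered load = n_sources * p = 0.8

    Replacing the single fixed-rate server with other disciplines is where the paper's different output service schemes would enter.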

  10. Heat source model for welding process

    International Nuclear Information System (INIS)

    Doan, D.D.

    2006-10-01

    One of the major industrial stakes of welding simulation is the control of the mechanical effects of the process (residual stress, distortions, fatigue strength...). These effects depend directly on the temperature evolutions imposed during the welding process. To model this thermal loading, an original method is proposed instead of the usual approaches such as the equivalent heat source approach or the multi-physical approach. This method is based on the estimation of the weld pool shape, together with the heat flux crossing the liquid/solid interface, from experimental data measured in the solid part. Its originality consists in solving an inverse Stefan problem specific to the welding process, and it is shown how to estimate the parameters of the weld pool shape. To solve the heat transfer problem, the liquid/solid interface is modeled by a Bezier curve (2-D) or a Bezier surface (3-D). This approach is well adapted to the wide diversity of weld pool shapes met in the majority of current welding processes (TIG, MIG-MAG, laser, FE, hybrid). The number of parameters to be estimated is small: from 2 to 5 in 2-D and from 7 to 16 in 3-D, according to the cases considered. A sensitivity study specifies the location of the sensors, their number and the set of measurements required for a good estimate. The application of the method to test results of TIG welding on thin stainless steel sheets, in fully and partially penetrating configurations, shows that only one measurement point is enough to estimate the various weld pool shapes in 2-D, and two points in 3-D, whether the penetration is full or not. In the last part of the work, a methodology is developed for the transient analysis. It is based on Duvaut's transformation, which overcomes the discontinuity at the liquid metal interface and therefore gives a continuous variable over the whole spatial domain. Moreover, it allows working on a fixed mesh grid, and the new inverse problem is equivalent to identifying a source

  11. WildSpan: mining structured motifs from protein sequences

    Directory of Open Access Journals (Sweden)

    Chen Chien-Yu

    2011-03-01

    Full Text Available Abstract Background Automatic extraction of motifs from biological sequences is an important research problem in the study of molecular biology. For proteins, it is desirable to discover sequence motifs containing a large number of wildcard symbols, as the residues associated with functional sites are usually widely separated in sequences. Discovering such patterns is time-consuming because abundant combinations exist when long gaps (a gap consists of one or more successive wildcards) are considered. Mining algorithms often employ constraints to narrow down the search space in order to increase efficiency. However, improper constraint models might degrade the sensitivity and specificity of the motifs discovered by computational methods. We previously proposed a new constraint model to handle large wildcard regions for discovering functional motifs of proteins. The patterns that satisfy the proposed constraint model are called W-patterns. A W-pattern is a structured motif that groups motif symbols into pattern blocks interleaved with large irregular gaps. Considering large gaps reflects the fact that functional residues are not always from a single region of protein sequences, and restricting motif symbols into clusters corresponds to the observation that short motifs are frequently present within protein families. To efficiently discover W-patterns for large-scale sequence annotation and function prediction, this paper first formally introduces the problem to solve and proposes an algorithm named WildSpan (sequential pattern mining across large wildcard regions) that incorporates several pruning strategies to greatly reduce the mining cost. Results WildSpan is shown to efficiently find W-patterns containing conserved residues that are far separated in sequences. We conducted experiments with two mining strategies, protein-based and family-based mining, to evaluate the usefulness of W-patterns and the performance of WildSpan. The protein-based mining mode

  12. Modelling of H.264 MPEG2 TS Traffic Source

    Directory of Open Access Journals (Sweden)

    Stanislav Klucik

    2013-01-01

    Full Text Available This paper deals with IPTV traffic source modelling. Traffic sources are used for simulation, emulation and real-network testing. The model is derived from recorded traffic traces that are analysed and statistically processed. As the results show, when used in a simulated network the proposed model produces network traffic parameters very similar to those of the recorded traffic source.
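
    As a hedged illustration of this kind of trace-derived source model (the GOP pattern and lognormal parameters below are invented placeholders, not the paper's fitted values), frame sizes can be resampled per frame type from distributions fitted to a recorded trace:

        import random

        GOP = "IBBPBBPBBPBB"                    # hypothetical GOP pattern
        # (mu, sigma) of the log frame size per frame type; in practice these
        # would be fitted to a recorded H.264 MPEG2-TS trace
        LOGNORM = {"I": (10.5, 0.3), "P": (9.2, 0.4), "B": (8.4, 0.5)}

        def frame_sizes(n_frames, fps=25, seed=0):
            """Yield (time, type, size) for a synthetic H.264 frame stream."""
            rng = random.Random(seed)
            for i in range(n_frames):
                ftype = GOP[i % len(GOP)]
                mu, sigma = LOGNORM[ftype]
                yield i / fps, ftype, int(rng.lognormvariate(mu, sigma))

        for t, ftype, size in frame_sizes(12):
            print(f"{t:5.2f}s  {ftype}  {size:6d} B")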

  13. Age Differences in Memory Span

    Science.gov (United States)

    Chi, Michelene T. H.

    1977-01-01

    Three experiments were conducted to determine processes underlying age differences in the level of recall in a memory-span task. Five-year-olds recalled fewer items than adults in memory-span tasks involving both familiar and unfamiliar faces, even though the use of rehearsal and recoding strategies was minimized for adults. (MS)

  14. Computerized dosimetry of I-125 sources model 6711

    International Nuclear Information System (INIS)

    Isturiz, J.

    2001-01-01

    It deals with: the physical presentation of the sources; radiation protection; the mathematical model of the I-125 source model 6711; the data considered for the calculation program; experimental verification of the dose distribution; exposure rate and apparent activity; techniques for the use of the I-125 sources; and the planning calculation systems [es]

  15. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  16. Evaluating the efficiency of shortcut span protection

    DEFF Research Database (Denmark)

    Ruepp, Sarah Renée; Dittmann, Lars; Berger, Michael Stübert

    2010-01-01

    This paper presents a comparison of various recovery methods in terms of capacity efficiency, with the underlying aim of reducing control plane load. In particular, a method in which recovery requests are bundled towards the destination (Shortcut Span Protection) is evaluated and compared against traditional recovery methods. The optimization model is presented, and our simulation results show that Shortcut Span Protection uses more capacity than the unbundled related methods, but this is compensated by easier control and management of the recovery actions.

  17. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    Science.gov (United States)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10⁶ m³/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  18. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
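
    The covariance-based generator can be illustrated with a minimal 1-D sketch (an assumption for illustration: an exponential autocorrelation with a single correlation length, not the paper's full cross-correlated multi-parameter model): a correlated slip realization is drawn by Cholesky-factoring the target covariance.

        import numpy as np

        def correlated_slip(n=256, dx=0.1, corr_len=2.0, mean=1.0, std=0.5, seed=0):
            """Sample a 1-D slip profile with target 1-point stats (mean, std)
            and 2-point stats given by an exponential autocorrelation."""
            x = np.arange(n) * dx
            # target covariance built from the autocorrelation model
            C = std**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
            L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # jitter for stability
            slip = mean + L @ np.random.default_rng(seed).standard_normal(n)
            return np.clip(slip, 0.0, None)                # slip is non-negative

        print(correlated_slip()[:5])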

  19. Computational model of Amersham I-125 source model 6711 and Prosper Pd-103 source model MED3633 using MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear

    2011-07-01

    Brachytherapy is used in cancer treatment at short distances through the use of small encapsulated sources of ionizing radiation. In such treatment, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNP code was used to model and simulate the I-125 Amersham Health source model 6711 and the Pd-103 Prospera source model MED3633 in order to obtain the dosimetric parameter dose rate constant (Λ). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in treatment planning for prostate LDR brachytherapy. This study was based on the American Association of Physicists in Medicine (AAPM) recommendations produced by its Task Group 43. The results obtained were 0.941 and 0.65 for the dose rate constants of the I-125 and Pd-103 sources, respectively. They present good agreement with literature values based on different Monte Carlo codes. (author)

  20. Studies and modeling of cold neutron sources

    International Nuclear Information System (INIS)

    Campioni, G.

    2004-11-01

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis was carried out along the three following axes. First, the gathering of the specific information forming the material of this work. This set of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, an uncoupling module, integrated in the official version of Tripoli-4, can perform Monte Carlo parametric studies with a saving factor in CPU time reaching 50. A coupling module simulating neutron guides has also been developed and implemented in the Monte Carlo code McStas. Thirdly, a complete study for the validation of the installed calculation chain. These studies focus on three cold sources currently in operation: SP1 of the Orphee reactor and two other sources (SFH and SFV) of the HFR at the Laue-Langevin Institute. These studies give examples of problems and methods for the design of future cold sources

  1. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model gives the best accuracy in performing the source reconstructions, and that 3D MCG data allow finding smaller differences between the different source models.

  2. Developmental Regulation across the Life Span: Toward a New Synthesis

    Science.gov (United States)

    Haase, Claudia M.; Heckhausen, Jutta; Wrosch, Carsten

    2013-01-01

    How can individuals regulate their own development to live happy, healthy, and productive lives? Major theories of developmental regulation across the life span have been proposed (e.g., dual-process model of assimilation and accommodation; motivational theory of life-span development; model of selection, optimization, and compensation), but they…

  3. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage

  4. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...

  5. Thermodynamics and life span estimation

    International Nuclear Information System (INIS)

    Kuddusi, Lütfullah

    2015-01-01

    In this study, the life span of people living in the seven regions of Turkey is estimated by applying the first and second laws of thermodynamics to the human body. People living in different regions of Turkey have different food habits. The first and second laws of thermodynamics are used to calculate the entropy generation rate per unit mass of a human due to these food habits. The lifetime entropy generation per unit mass of a human had previously been found statistically. The two quantities, lifetime entropy generation and entropy generation rate, enable one to determine the life span of people living in the seven regions of Turkey with their different food habits. In order to estimate the life span, statistics from the Turkish Statistical Institute regarding the food habits of the people living in the seven regions of Turkey are used. The life spans of people living in the Central Anatolia and Eastern Anatolia regions are the longest and shortest, respectively. Overall, the following inequality regarding the life span of people living in the seven regions of Turkey is found: Eastern Anatolia < Southeast Anatolia < Black Sea < Mediterranean < Marmara < Aegean < Central Anatolia. - Highlights: • The first and second laws of thermodynamics are applied to the human body. • The entropy generation of a human due to his food habits is determined. • The life span of Turks is estimated by using the entropy generation method. • Food habits of a human have an effect on his life span
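
    The arithmetic behind the estimate is simple: if S_life is the statistically determined lifetime entropy generation per unit mass and S_rate the diet-dependent entropy generation rate per unit mass and year, the estimated life span is S_life / S_rate. A toy calculation with invented placeholder numbers (not the paper's values):

        # hypothetical numbers, for illustration only (not from the paper)
        lifetime_entropy = 11_400.0   # kJ/(kg K) generated over a lifetime
        entropy_rate = 150.0          # kJ/(kg K) generated per year, diet-dependent

        life_span_years = lifetime_entropy / entropy_rate
        print(f"estimated life span: {life_span_years:.1f} years")   # 76.0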

  6. Transonic control effectiveness for full and partial span elevon configurations on a 0.0165 scale model space shuttle orbiter tested in the LaRC 8-foot transonic wind tunnel (LA48)

    Science.gov (United States)

    1977-01-01

    A transonic pressure tunnel test is reported on an early version of the space shuttle orbiter (designated 089B-139) 0.0165 scale model to systematically determine both longitudinal and lateral control effectiveness associated with various combinations of inboard, outboard, and full span wing trailing edge controls. The test was conducted over a Mach number range from 0.6 to 1.08 at angles of attack from -2 deg to 23 deg at 0 deg sideslip.

  7. Earthquake source model using strong motion displacement

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  8. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  9. Data Sources Available for Modeling Environmental Exposures in Older Adults

    Science.gov (United States)

    This report, “Data Sources Available for Modeling Environmental Exposures in Older Adults,” focuses on information sources and data available for modeling environmental exposures in the older U.S. population, defined here to be people 60 years and older, with an emphasis on those...

  10. Quasistatic modelling of the coaxial slow source

    International Nuclear Information System (INIS)

    Hahn, K.D.; Pietrzyk, Z.A.; Vlases, G.C.

    1986-01-01

    A new 1-D Lagrangian MHD numerical code in flux coordinates has been developed for the Coaxial Slow Source (CSS) geometry. It utilizes the quasistatic approximation, so that the plasma evolves as a succession of equilibria. The P = P(ψ) equilibrium constraint, along with the assumption of infinitely fast axial temperature relaxation on closed field lines, is incorporated. An axially elongated, rectangular plasma is assumed. The axial length is adjusted by the global average condition, or assumed to be fixed. In this paper, predictions obtained with the code and a limited comparison with experimental data are presented

  11. Dmp53, basket and drICE gene knockdown and polyphenol gallic acid increase life span and locomotor activity in a Drosophila Parkinson's disease model

    Directory of Open Access Journals (Sweden)

    Hector Flavio Ortega-Arellano

    2013-01-01

    Full Text Available Understanding the mechanism(s) by which dopaminergic (DAergic) neurons are eroded in Parkinson's disease (PD) is critical for effective therapeutic strategies. By using the binary tyrosine hydroxylase (TH)-Gal4/UAS-X RNAi Drosophila melanogaster system, we report that Dmp53, basket and drICE gene knockdown in dopaminergic neurons prolongs life span (p < 0.05; log-rank test) and locomotor activity (p < 0.05; χ² test) in D. melanogaster lines chronically exposed to 1 mM paraquat (PQ), an oxidative stress (OS) generator, compared to untreated transgenic fly lines. Likewise, knockdown flies displayed higher climbing performance than control flies. Remarkably, gallic acid (GA) significantly protected DAergic neurons and ameliorated life span and climbing abilities in knockdown fly lines treated with PQ compared to flies treated with PQ only. Therefore, silencing specific gene(s) involved in neuronal death might constitute an excellent tool to study the response of DAergic neurons to OS stimuli. We propose that a therapy with antioxidants and selectively "switching off" death genes in DAergic neurons could provide a means for pre-clinical PD individuals to significantly ameliorate their disease condition.

  12. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum neighbor weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a bigger chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
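
    A minimal sketch of the neighbor-weighted re-weighting idea (assuming a generic FOCUSS-style weighted minimum-norm update; the exact CMOSS weight definition may differ in detail):

        import numpy as np

        def cmoss_like(A, b, neighbors, n_iter=20, eps=1e-8):
            """Iteratively re-weighted sparse solve of A x = b.
            neighbors[i] lists the grid points adjacent to point i; the weight
            of point i uses the previous solution at i AND its neighbors
            (here: their maximum absolute amplitude)."""
            m, n = A.shape
            x = np.linalg.pinv(A) @ b                      # initial minimum-norm
            for _ in range(n_iter):
                w = np.array([max(abs(x[j]) for j in [i] + neighbors[i])
                              for i in range(n)]) + eps
                W = np.diag(w)
                x = W @ np.linalg.pinv(A @ W) @ b          # weighted min-norm step
            return x

        # tiny toy problem: 3 sensors, 5 candidate sources on a line
        rng = np.random.default_rng(0)
        A = rng.standard_normal((3, 5))
        x_true = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
        b = A @ x_true
        nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
        print(np.round(cmoss_like(A, b, nbrs), 3))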

  13. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  14. Pressure distribution data from tests of 2.29 M (7.5 feet) span EET high-lift transport aircraft model in the Ames 12-foot pressure tunnel

    Science.gov (United States)

    Kjelgaard, S. O.; Morgan, H. L., Jr.

    1983-01-01

    A high-lift transport aircraft model equipped with full-span leading-edge slat and part-span double-slotted trailing-edge flap was tested in the Ames 12-ft pressure tunnel to determine the low-speed performance characteristics of a representative high-aspect-ratio supercritical wing. These tests were performed in support of the Energy Efficient Transport (EET) program which is one element of the Aircraft Energy Efficiency (ACEE) project. Static longitudinal forces and moments and chordwise pressure distributions at three spanwise stations were measured for cruise, climb, two take-off flap, and two landing flap wing configurations. The tabulated and plotted pressure distribution data is presented without analysis or discussion.

  15. Nuisance Source Population Modeling for Radiation Detection System Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P; Lange, D; Nelson, K; Wheeler, R

    2009-10-05

    A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial

  16. Modeling Group Interactions via Open Data Sources

    Science.gov (United States)

    2011-08-30

    data. State-of-the-art search engines are designed for general query-specific search and are not suitable for finding disconnected online groups. The ... groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ

  17. Nitrogen component in nonpoint source pollution models

    Science.gov (United States)

    Pollutants entering a water body can be very destructive to the health of that system. Best Management Practices (BMPs) and/or conservation practices are used to reduce these pollutants, but understanding the most effective practices is very difficult. Watershed models are an effective tool to aid...

  18. Application of source-receptor models to determine source areas of biological components (pollen and butterflies)

    OpenAIRE

    M. Alarcón; M. Àvila; J. Belmonte; C. Stefanescu; R. Izquierdo

    2010-01-01

    Source-receptor models allow the establishment of relationships between a receptor point (the sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories and, together with other techniques, make it possible to interpret transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...

  19. A Distributed Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...

  20. A distributed spanning tree algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge

    1988-01-01

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as comm...
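
    The flavor of such an algorithm can be conveyed by a toy synchronous simulation (an illustration only; the paper's asynchronous message-passing protocol is more involved): every processor repeatedly adopts the smallest identity it has heard of, taking the neighbor it heard it from as its parent.

        def spanning_tree(adj):
            """Toy simulation: adj maps node id -> neighbor ids. Each node
            tracks the smallest root id seen; the parent pointers of all
            non-root nodes form a spanning tree rooted at the minimum id."""
            root = {v: v for v in adj}       # best (smallest) root id seen so far
            parent = {v: None for v in adj}
            changed = True
            while changed:                   # repeated sweeps stand in for rounds
                changed = False
                for v in adj:
                    for u in adj[v]:
                        if root[u] < root[v]:      # better root heard from u
                            root[v] = root[u]
                            parent[v] = u
                            changed = True
            return parent

        adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
        print(spanning_tree(adj))  # e.g. {1: None, 2: 1, 3: 1, 4: 2}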

  1. The optimum spanning catenary cable

    Science.gov (United States)

    Wang, C. Y.

    2015-03-01

    A heavy cable spans two points in space. There exists an optimum cable length such that the maximum tension is minimized. If the two end points are at the same level, the optimum length is 1.258 times the distance between the ends. The optimum lengths for end points of different heights are also found.
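
    The stated optimum is easy to check numerically. The sketch below (our own verification, not the paper's derivation) parameterizes the level-span catenary y = a cosh(x/a) by its parameter a, computes the cable length L = 2a sinh(d/2a) and the maximum support tension T = w a cosh(d/2a), and scans for the length-to-span ratio minimizing T:

        import math

        def optimum_length_ratio(d=1.0, w=1.0, n=200_000):
            """Scan the catenary parameter a for a cable of weight w per unit
            length spanning a horizontal distance d between level supports."""
            best = (float("inf"), None)
            for i in range(1, n):
                a = 0.05 + 3.0 * i / n                 # catenary parameter
                L = 2 * a * math.sinh(d / (2 * a))     # cable length
                T = w * a * math.cosh(d / (2 * a))     # max tension (at supports)
                if T < best[0]:
                    best = (T, L / d)
            return best

        T_min, ratio = optimum_length_ratio()
        print(f"optimum L/d = {ratio:.3f}")  # ~1.258, matching the abstract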

  2. A Motivational Theory of Life-Span Development

    Science.gov (United States)

    Heckhausen, Jutta; Wrosch, Carsten; Schulz, Richard

    2010-01-01

    This article had four goals. First, the authors identified a set of general challenges and questions that a life-span theory of development should address. Second, they presented a comprehensive account of their Motivational Theory of Life-Span Development. They integrated the model of optimization in primary and secondary control and the…

  3. The SPAN cookbook: A practical guide to accessing SPAN

    Science.gov (United States)

    Mason, Stephanie; Tencati, Ronald D.; Stern, David M.; Capps, Kimberly D.; Dorman, Gary; Peters, David J.

    1990-01-01

    This is a manual for remote users who wish to send electronic mail messages from the Space Physics Analysis Network (SPAN) to scientific colleagues on other computer networks and vice versa. In several instances more than one gateway has been included for the same network. Users are provided with an introduction to each network listed with helpful details about accessing the system and mail syntax examples. Also included is information on file transfers, remote logins, and help telephone numbers.

  4. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)
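
    Schematically (generic notation assumed here, not quoted from the paper), such a mixture prior for the source configuration x can be written as

        \pi(x) \;=\; \alpha\,\pi_{\mathrm{focal}}(x) \;+\; (1-\alpha)\,\pi_{\mathrm{dist}}(x),
        \qquad 0 < \alpha < 1,

    where \pi_{\mathrm{focal}} is a sparsity-favoring density, \pi_{\mathrm{dist}} models the spatially distributed clutter, and \alpha encodes the prior belief that the measured field originates from a focal source.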

  5. Constraints on equivalent elastic source models from near-source data

    International Nuclear Information System (INIS)

    Stump, B.

    1993-01-01

    A phenomenologically based seismic source model is important in quantifying the important physical processes that affect the observed seismic radiation in the linear-elastic regime. Representations such as these were used to assess yield effects on seismic waves under a Threshold Test Ban Treaty and to help transfer seismic coupling experience from one test site to another. These same characterizations, in a non-proliferation environment, find applications in understanding the generation of the different types of body and surface waves from nuclear explosions, single chemical explosions, arrays of chemical explosions used in mining, rock bursts and earthquakes. Seismologists typically begin with an equivalent elastic representation of the source which, when convolved with the propagation path effects, produces a seismogram. The Representation Theorem replaces the true source with an equivalent set of body forces, boundary conditions or initial conditions. An extension of this representation shows the equivalence of the body forces, boundary conditions and initial conditions and replaces the source with a set of force moments, the first-degree moment tensor for a point source representation. The difficulty with this formulation, which can completely describe the observed waveforms when the propagation path effects are known, is in the physical interpretation of the actual physical processes acting in the source volume. Observational data from within the source region, where processes are often nonlinear, linked to numerical models of the important physical processes in this region, are critical to a unique physical understanding of the equivalent elastic source function
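
    For the point-source case mentioned above, the standard textbook form (not quoted from the abstract) writes the n-th displacement component as a convolution of the moment tensor with the gradient of the elastodynamic Green's function:

        u_n(\mathbf{x}, t) \;=\; M_{pq}(t) \ast
        \frac{\partial G_{np}}{\partial \xi_q}(\mathbf{x}, t; \boldsymbol{\xi}),

    where M_{pq} is the first-degree moment tensor of the equivalent source at \boldsymbol{\xi} and \ast denotes time convolution.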

  6. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at an observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model comprise the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. The performance of the proposed model is evaluated for the two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration
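
    The least-squares objective described above can be written schematically (generic notation assumed here, not taken from the abstract) as

        \min_{\mathbf{x}_s,\, t_0}\; J(\mathbf{x}_s, t_0) \;=\;
        \sum_{i=1}^{N_{\mathrm{wells}}} \sum_{k=1}^{N_t}
        \left[ c^{\mathrm{obs}}(\mathbf{x}_i, t_k)
             - c^{\mathrm{sim}}(\mathbf{x}_i, t_k;\, \mathbf{x}_s, t_0) \right]^2,

    where \mathbf{x}_s is the candidate source location, t_0 its release period, and the trained ANN supplies the lag time needed to align the observed and simulated concentration histories before J is evaluated.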

  7. Data Sources for NetZero Ft Carson Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — Table of values used to parameterize and evaluate the Ft Carson NetZero integrated Model with published reference sources for each value. This dataset is associated...

  8. Near-Source Modeling Updates: Building Downwash & Near-Road

    Science.gov (United States)

    The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...

  9. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate-control algorithms by means of a video source model is suggested. The proposed method significantly improves algorithm testing over a big test set.

  10. Spanning Tree Based Attribute Clustering

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Jorge, Cordero Hernandez

    2009-01-01

    Attribute clustering has previously been employed to detect statistical dependence between subsets of variables. We propose a novel attribute clustering algorithm motivated by research on complex networks, called the Star Discovery algorithm. The algorithm partitions and indirectly discards inconsistent edges from a maximum spanning tree by starting from appropriate initial modes, thereby generating stable clusters. It discovers sound clusters through simple graph operations and achieves significant computational savings. We compare the Star Discovery algorithm against earlier attribute clustering

  11. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

    Earthquake source spectra have been used to characterize earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra of many earthquakes, especially small ones, at once and compare them with each other. A standard model for the source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, the two regimes being separated by a corner frequency. The corner frequency has often been converted to stress drop under the assumption of a circular crack model. However, recent studies claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016], thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the area studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectra we now focus on is the falloff rate at high frequencies, which affects the seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at short distances from the sources, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovering and confirming the deviation from the standard omega-square model, updating the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
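
    For reference, the single-corner omega-square model discussed here gives a displacement amplitude spectrum of the form (standard notation):

        \Omega(f) \;=\; \frac{\Omega_0}{1 + (f/f_c)^2},

    flat at the level \Omega_0 below the corner frequency f_c and falling off as f^{-2} above it; under a circular crack model the corner frequency maps to stress drop via \Delta\sigma \propto M_0 (f_c/\beta)^3. The deviations reported in the cited studies amount to an additional corner frequency, i.e. a departure from this single-corner form.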

  12. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine is investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response has been investigated for a range of source sizes. These investigations, using an MCNP based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  13. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since an abrupt source introduction yields intensive numerical variations over the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposing demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The optimized source models proposed here are realized and tested within our own FDTD simulation environment.
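
    A common realization of the 'slow introduction' idea is a sinusoid under an exponential turn-on ramp; the sketch below is generic (the paper's optimized shaping functions are not reproduced):

        import math

        def ramped_source(t, freq, tau):
            """Sinusoidal hard source softened by an exponential turn-on ramp:
            the ramp suppresses the broadband transient that an abrupt start
            would otherwise inject into the FDTD grid."""
            ramp = 1.0 - math.exp(-t / tau)        # rises smoothly from 0 to 1
            return ramp * math.sin(2.0 * math.pi * freq * t)

        dt, freq = 1e-12, 10e9                     # 1 ps step, 10 GHz source
        tau = 20.0 / freq                          # ramp over ~20 carrier periods
        for n in range(0, 100_000, 20_000):
            print(n, f"{ramped_source(n * dt, freq, tau):+.4f}")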

  14. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  15. The Unfolding of Value Sources During Online Business Model Transformation

    Directory of Open Access Journals (Sweden)

    Nadja Hoßbach

    2016-12-01

    Full Text Available Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the 'digital future' and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data was triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ's online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.

  16. Modeling water demand when households have multiple sources of water

    Science.gov (United States)

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.

  17. Full-Span Tiltrotor Aeroacoustic Model (TRAM) Overview and 40- by 80-Foot Wind Tunnel Test. [conducted in the 40- by 80-Foot Wind Tunnel at Ames Research Center

    Science.gov (United States)

    McCluer, Megan S.; Johnson, Jeffrey L.; Rutkowski, Michael (Technical Monitor)

    2001-01-01

    Most helicopter data trends cannot be extrapolated to tiltrotors because blade geometry and aerodynamic behavior, as well as rotor and fuselage interactions, are significantly different for tiltrotors. A tiltrotor model has been developed to investigate the aeromechanics of tiltrotors, to develop a comprehensive database for validating tiltrotor analyses, and to provide a research platform for supporting future tiltrotor designs. The Full-Span Tiltrotor Aeroacoustic Model (FS TRAM) is a dual-rotor, powered aircraft model with extensive instrumentation for measurement of structural and aerodynamic loads. This paper will present the Full-Span TRAM test capabilities and the first set of data obtained during a 40- by 80-Foot Wind Tunnel test conducted in late 2000 at NASA Ames Research Center. The Full-Span TRAM is a quarter-scale representation of the V-22 Osprey aircraft, and a heavily instrumented NASA and U.S. Army wind tunnel test stand. Rotor structural loads are monitored and recorded for safety-of-flight and for information on blade loads and dynamics. Left and right rotor balance and fuselage balance loads are monitored for safety-of-flight and for measurement of vehicle and rotor aerodynamic performance. Static pressure taps on the left wing are used to determine rotor/wing interactional effects and rotor blade dynamic pressures measure blade airloads. All of these measurement capabilities make the FS TRAM test stand a unique and valuable asset for validation of computational codes and to aid in future tiltrotor designs. The Full-Span TRAM was tested in the NASA Ames Research Center 40- by 80-Foot Wind Tunnel from October through December 2000. Rotor and vehicle performance measurements were acquired in addition to wing pressures, rotor acoustics, and Laser Light Sheet (LLS) flow visualization data. Hover, forward flight, and airframe (rotors off) aerodynamic runs were performed. Helicopter-mode data were acquired during angle of attack and thrust sweeps for

  18. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  19. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  20. MCNP model for the many KE-Basin radiation sources

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1997-01-01

    This document presents a model for the location and strength of radiation sources in the accessible areas of KE-Basin which agrees well with data taken on a regular grid in September of 1996. This modelling work was requested to support dose rate reduction efforts in KE-Basin. Anticipated fuel removal activities require lower dose rates to minimize annual dose to workers. With this model, the effects of component cleanup or removal can be estimated in advance to evaluate their effectiveness. In addition, the sources contributing most to the radiation fields in a given location can be identified and dealt with

  1. Open source data assimilation framework for hydrological modeling

    Science.gov (United States)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions and model results. The basic principle is to incorporate measurement information into a model with the aim of improving model results by error minimization. Great strides have been made in assimilating traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated into hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (in time and space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break DA down into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models capable of all these tasks exists: OpenMI. OpenMI is an open-source standard interface already adopted by key hydrological model providers. It defines a universal approach for interacting with hydrological models during simulation to exchange data at runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough that models can interact even if coded in different languages, represent
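
    The building-block interaction described above can be sketched as a model-wrapper loop (hypothetical method names chosen for illustration; the real OpenDA/OpenMI interfaces differ in detail):

        import numpy as np

        class ModelInstance:
            """Hypothetical wrapper exposing the four operations a DA framework
            needs: create, propagate, get/set state, and finalize."""
            def __init__(self, x0):
                self.x = np.asarray(x0, dtype=float)
            def propagate(self, dt):
                self.x = 0.95 * self.x + dt        # stand-in for the real model step
            def get_values(self):
                return self.x.copy()
            def set_values(self, x):
                self.x = np.asarray(x, dtype=float)
            def finish(self):
                pass                               # release files/handles here

        def run_ensemble_da(n_members, n_steps, observe):
            """Sketch of an ensemble loop: propagate members, nudge toward data."""
            members = [ModelInstance(np.random.randn(3)) for _ in range(n_members)]
            for k in range(n_steps):
                y = observe(k)                     # external observation source
                for m in members:
                    m.propagate(dt=1.0)
                    x = m.get_values()
                    m.set_values(x + 0.5 * (y - x))   # crude nudging update
            for m in members:
                m.finish()
            return np.mean([m.get_values() for m in members], axis=0)

        print(run_ensemble_da(4, 10, observe=lambda k: np.ones(3)))

    The point of the design is that the assimilation loop only ever touches the wrapper's four operations, so any model exposing them can be assimilated regardless of its implementation language.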

  2. Linking and Cutting Spanning Trees

    Directory of Open Access Journals (Sweden)

    Luís M. S. Russo

    2018-04-01

    We consider the problem of uniformly generating a spanning tree for an undirected connected graph. This process is useful for computing statistics, namely for phylogenetic trees. We describe a Markov chain for producing these trees. For cycle graphs, we prove that this approach significantly outperforms existing algorithms. For general graphs, experimental results show that the chain converges quickly. This yields an efficient algorithm due to the use of proper fast data structures. To obtain the mixing time of the chain we describe a coupling, which we analyze for cycle graphs and simulate for other graphs.
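    The paper's link-cut tree machinery is not reproduced here. As a hedged sketch of the general idea (a Markov chain whose states are spanning trees, stepped by swapping a random non-tree edge into the cycle it closes, which has the uniform distribution as its stationary distribution), consider this textbook edge-swap chain rather than the authors' data-structure-accelerated version:

        import random
        import networkx as nx

        def edge_swap_step(G, tree_edges, rng=random):
            # tree_edges: set of frozensets forming a spanning tree of G.
            e = rng.choice([frozenset(edge) for edge in G.edges()])
            if e in tree_edges:
                return tree_edges                      # lazy step keeps the chain aperiodic
            T = nx.Graph([tuple(edge) for edge in tree_edges])
            u, v = tuple(e)
            cycle = nx.shortest_path(T, u, v)          # adding e would close this cycle
            cut = rng.choice(list(zip(cycle, cycle[1:])))
            return (tree_edges - {frozenset(cut)}) | {e}

        G = nx.cycle_graph(8)                          # cycle graphs are analyzed in the paper
        tree = {frozenset(edge) for edge in nx.minimum_spanning_tree(G).edges()}
        for _ in range(10_000):
            tree = edge_swap_step(G, tree)             # long run: approximately uniform tree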

  3. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    Science.gov (United States)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada Test Site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general

  4. A recalculation of the dose-effect-relationship of the ''life span study'' of Hiroshima and Nagasaki with the ''single-hit model''

    International Nuclear Information System (INIS)

    Kottbauer, M.M.; Fleck, C.M.; Schoellnberger, H.

    1996-01-01

    The basis of this new model is the multistage process of carcinogenesis. The Single-Hit Model is a further development of the Armitage-Doll Model [1] for the special case of a short exposure. It simultaneously provides the age-dependent mortality rate (incidence rate) of spontaneous and radiation-induced solid tumors and dose-effect relationships at any age after exposure. The model results in a biologically based dose-effect relationship, which is similar to the Relative-Risk Model suggested by ICRP 60 [2]. The present model is able to describe the increased mortality rate of the bomb survivors more accurately than the Relative-Risk Model. (orig.)

  5. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  6. Open Sourcing Social Change: Inside the Constellation Model

    OpenAIRE

    Tonya Surman; Mark Surman

    2008-01-01

    The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a ...

  7. White Dwarf Model Atmospheres: Synthetic Spectra for Super Soft Sources

    OpenAIRE

    Rauch, Thomas

    2011-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and super soft sources.

  8. White Dwarf Model Atmospheres: Synthetic Spectra for Supersoft Sources

    Science.gov (United States)

    Rauch, Thomas

    2013-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and supersoft sources.

  9. Extended nonnegative tensor factorisation models for musical sound source separation.

    Science.gov (United States)

    FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.
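    The shift-invariant tensor models of the paper are not available in standard libraries. As a minimal stand-in that illustrates the factorisation view of separation, the sketch below runs plain nonnegative matrix factorisation on a magnitude spectrogram and resynthesises one masked spectrogram per component; it is illustrative only and omits the harmonicity constraints and source-filter extension that are the paper's contribution:

        import numpy as np
        from sklearn.decomposition import NMF

        def separate_sources(spectrogram, n_sources):
            # Factorise a magnitude spectrogram V ~ W @ H and rebuild one
            # soft-masked spectrogram per component.
            model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
            W = model.fit_transform(spectrogram)        # (freq, components)
            H = model.components_                       # (components, time)
            V_hat = W @ H + 1e-12
            sources = []
            for k in range(n_sources):
                mask = np.outer(W[:, k], H[k]) / V_hat  # Wiener-style soft mask
                sources.append(mask * spectrogram)
            return sources

        rng = np.random.default_rng(1)
        V = np.abs(rng.normal(size=(257, 200)))         # stand-in magnitude spectrogram
        parts = separate_sources(V, n_sources=3)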

  10. Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Directory of Open Access Journals (Sweden)

    Derry FitzGerald

    2008-01-01

    Full Text Available Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  11. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    Science.gov (United States)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

    Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing the nonstationary EEG signals include finding neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess the EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r = -0.390 with alertness models and r = 0.449 with drowsiness models) and the opposite correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained with an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an average area under the curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states and can be applied to
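    The published MDI is not part of public toolkits, so the following sketch substitutes a simple proxy: fit ICA on a baseline (alert) window, then score new windows by how far their unmixed sources deviate from the identity covariance the ICA model assumes. The function model_deviation and all data are hypothetical, and a recent scikit-learn (>= 1.1, for the whiten argument) is assumed:

        import numpy as np
        from sklearn.decomposition import FastICA

        def model_deviation(W, window):
            # Proxy for the paper's MDI: distance of the unmixed-source covariance
            # from the identity that the fitted ICA model assumes.
            S = window @ W.T                      # unmix: (samples, components)
            C = np.cov(S, rowvar=False)
            return np.linalg.norm(C - np.eye(C.shape[0]), "fro")

        rng = np.random.default_rng(2)
        alert = rng.laplace(size=(5000, 8))                              # stand-in "alert" EEG
        drowsy = rng.laplace(size=(5000, 8)) @ rng.normal(size=(8, 8))   # different mixing

        ica = FastICA(n_components=8, whiten="unit-variance", random_state=0)
        ica.fit(alert)
        W = ica.components_                        # learned unmixing matrix

        print(model_deviation(W, alert[:1000]))    # small: data fit the alert model
        print(model_deviation(W, drowsy[:1000]))   # larger: the state has drifted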

  12. Time-dependent source model of the Lusi mud volcano

    Science.gov (United States)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5 km depth as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of the Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in the eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.

  13. Detailed free span assessment for Mexilhao flow lines

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Antonio; Franco, Luciano; Eigbe, Uwa; BomfimSilva, Carlos [INTECSEA, Houston, TX (United States); Escudero, Carlos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    The subsea gas production system of Mexilhao Field SPS-35, Santos Basin, offshore Brazil, is composed basically of two rigid 12.75-inch production flow lines, approximately 21 km long, installed on a fairly rough seabed. During the basic design, the free span assessment was performed considering the maximum allowable free span length determined by the response model proposed by DNV-RP-F105. This approach resulted in a large number of predicted free spans requiring correction, leading to a higher capital cost for the project. For this reason, a detailed free span VIV fatigue assessment was proposed, considering multi-span and multi-mode effects and also the post-lay survey data. The assessment followed the DNV-RP-F105 recommendations for multi-span and multi-mode effects, using Finite Element Analysis to determine the natural frequencies, mode shapes and corresponding stresses associated with the mode shapes. The assessment was performed in three stages, the first during the detailed design as part of the bottom roughness analysis using the expected residual pipelay tension. The second stage was performed after pipelay, considering the post-lay survey data, where the actual requirements for span correction were determined. Actual pipelay tension was used and seabed soil stiffness adjusted in the model to match the as-laid pipeline profile obtained from the survey data. The first and second stage assessments are seamlessly automated to speed up the evaluation process and allow for quick response in the field, which was important to keep the construction vessel time minimized. The third stage was performed once the corrections of the spans were made, and its purpose was to confirm that the new pipeline configuration along the supported spans had sufficient fatigue life for the temporary and operational phases. For the assessment of all three stages, the probability of occurrence and directionality of the near bottom current was considered to improve prediction of the

  14. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries increasingly suffer from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.

  15. Retrieving global aerosol sources from satellites using inverse modeling

    Directory of Open Access Journals (Sweden)

    O. Dubovik

    2008-01-01

    Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.

    The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.

    Successful global aerosol emission retrievals at 2°×2.5° resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, producing valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources, and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful
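    GOCART and its adjoint are far beyond a snippet, but for a linear forward (transport) operator the multi-term least-squares idea reduces to a stacked, Tikhonov-style problem: fit the observations while penalizing departure from a priori emissions. A toy sketch, with all dimensions invented and the prior set to zero to echo the abstract's "zero aerosol emission" initialization:

        import numpy as np

        def multiterm_inversion(K, y, alpha, x_prior):
            # Solve min ||K x - y||^2 + alpha ||x - x_prior||^2 for source strengths x.
            n = K.shape[1]
            A = np.vstack([K, np.sqrt(alpha) * np.eye(n)])
            b = np.concatenate([y, np.sqrt(alpha) * x_prior])
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x

        rng = np.random.default_rng(3)
        K = rng.random((40, 10))            # toy transport: 10 source cells -> 40 observations
        x_true = rng.random(10)
        y = K @ x_true + rng.normal(0, 0.01, 40)
        x_hat = multiterm_inversion(K, y, alpha=0.1, x_prior=np.zeros(10))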

  16. Low-level radioactive waste performance assessments: Source term modeling

    International Nuclear Information System (INIS)

    Icenhour, A.S.; Godbee, H.W.; Miller, L.F.

    1995-01-01

    Low-level radioactive wastes (LLW) generated by government and commercial operations need to be isolated from the environment for at least 300 to 500 yr. Most existing sites for the storage or disposal of LLW employ the shallow-land burial approach. However, the U.S. Department of Energy currently emphasizes the use of engineered systems (e.g., packaging, concrete and metal barriers, and water collection systems). Future commercial LLW disposal sites may include such systems to mitigate radionuclide transport through the biosphere. Performance assessments must be conducted for LLW disposal facilities. These studies include comprehensive evaluations of radionuclide migration from the waste package, through the vadose zone, and within the water table. Atmospheric transport mechanisms are also studied. Figure 1 illustrates the performance assessment process. Estimates of the release of radionuclides from the waste packages (i.e., source terms) are used for subsequent hydrogeologic calculations required by a performance assessment. Computer models are typically used to describe the complex interactions of water with LLW and to determine the transport of radionuclides. Several commonly used computer programs for evaluating source terms include GWSCREEN, BLT (Breach-Leach-Transport), DUST (Disposal Unit Source Term), BARRIER (Ref. 5), as well as SOURCE1 and SOURCE2 (which are used in this study). The SOURCE1 and SOURCE2 codes were prepared by Rogers and Associates Engineering Corporation for the Oak Ridge National Laboratory (ORNL). SOURCE1 is designed for tumulus-type facilities, and SOURCE2 is tailored for silo, well-in-silo, and trench-type disposal facilities. This paper focuses on the source term for ORNL disposal facilities, and it describes improved computational methods for determining radionuclide transport from waste packages.

  17. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from object to detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
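    The authors' system matrix includes the measured source model; as a hedged, generic illustration of model-based least-squares reconstruction, the sketch below stands in a toy periodic convolution with a random mask for the forward operator and solves the damped least-squares problem with a matrix-free solver (all shapes and the mask are invented):

        import numpy as np
        from scipy.ndimage import convolve
        from scipy.sparse.linalg import LinearOperator, lsqr

        shape = (64, 64)
        mask = (np.random.default_rng(4).random((7, 7)) > 0.5).astype(float)  # toy coded mask

        def forward(x):       # blur the object with the mask (periodic boundaries)
            return convolve(x.reshape(shape), mask, mode="wrap").ravel()

        def adjoint(y):       # adjoint of periodic convolution: flipped-kernel convolution
            return convolve(y.reshape(shape), mask[::-1, ::-1], mode="wrap").ravel()

        n = shape[0] * shape[1]
        A = LinearOperator((n, n), matvec=forward, rmatvec=adjoint)

        truth = np.zeros(shape); truth[30:34, 30:34] = 1.0          # toy object
        data = forward(truth.ravel()) + np.random.default_rng(5).normal(0, 0.01, n)
        recon = lsqr(A, data, damp=0.1)[0].reshape(shape)           # damped least squares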

  18. Topographic filtering simulation model for sediment source apportionment

    Science.gov (United States)

    Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin

    2018-05-01

    We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of the sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, locations that contribute 90% of the sediment loading are identified, and those locations that appear in this set in most of the 10,000 model runs are identified as the sources most likely to contribute most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
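    A hedged miniature of that conditioning loop: sample the two transfer-function parameters repeatedly, keep only parameter sets whose total delivered load matches the observed load, and count how often each field falls in the top 90% of delivery. Everything here (field counts, parameter ranges, the exponential form of the transfer functions, the 10% acceptance window) is invented for illustration:

        import numpy as np

        rng = np.random.default_rng(6)
        n_fields, n_runs = 200, 10_000
        erosion = rng.lognormal(0.0, 1.0, n_fields)      # annual soil erosion per field
        dist_hill = rng.uniform(10, 500, n_fields)       # distance to nearest channel (m)
        dist_chan = rng.uniform(100, 5000, n_fields)     # distance down the network (m)
        observed_load = 0.25 * erosion.sum()             # hypothetical gauged load

        hits, kept = np.zeros(n_fields), 0
        for _ in range(n_runs):
            k1 = rng.uniform(1e-4, 1e-2)                 # hillslope decay parameter
            k2 = rng.uniform(1e-5, 1e-3)                 # channel decay parameter
            sdr = np.exp(-k1 * dist_hill) * np.exp(-k2 * dist_chan)  # two transfer functions
            load = erosion * sdr
            if abs(load.sum() - observed_load) / observed_load > 0.1:
                continue                                 # reject runs that miss the gauge
            kept += 1
            order = np.argsort(load)[::-1]
            top = order[np.cumsum(load[order]) <= 0.9 * load.sum()]
            hits[top] += 1

        likely = np.argsort(hits)[::-1][:20]             # fields flagged most often
        print(kept, likely[:5])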

  19. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  20. Open source Modeling and optimization tools for Planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  1. Determinantal spanning forests on planar graphs

    OpenAIRE

    Kenyon, Richard

    2017-01-01

    We generalize the uniform spanning tree to construct a family of determinantal measures on essential spanning forests on periodic planar graphs in which every component tree is bi-infinite. Like the uniform spanning tree, these measures arise naturally from the Laplacian on the graph. More generally these results hold for the "massive" Laplacian determinant which counts rooted spanning forests with weight $M$ per finite component. These measures typically have a form of conformal invariance, ...

  2. The Growth of open source: A look at how companies are utilizing open source software in their business models

    OpenAIRE

    Feare, David

    2009-01-01

    This paper examines how open source software is being incorporated into the business models of companies in the software industry. The goal is to answer the question of whether the open source model can help sustain economic growth. While some companies are able to maintain a "pure" open source approach with their business model, the reality is that most companies are relying on proprietary add-on value in order to generate revenue because open source itself is simply not big business. Ultima...

  3. The Problem of Predecessors on Spanning Trees

    Directory of Open Access Journals (Sweden)

    V. S. Poghosyan

    2011-01-01

    We consider the equiprobable distribution of spanning trees on the square lattice. All bonds of each tree can be oriented uniquely with respect to an arbitrarily chosen site called the root. The problem of predecessors is to find the probability that a path along the oriented bonds passes sequentially through fixed sites i and j. The conformal field theory for the Potts model predicts the fractal dimension of the path to be 5/4. Using this result, we show that the probability in the predecessors problem for two sites separated by a large distance r decreases as P(r) ∼ r^(-3/4). If sites i and j are nearest neighbors on the square lattice, the probability P(1) = 5/16 can be found from the analytical theory developed for the sandpile model. The known equivalence between the loop-erased random walk (LERW) and the directed path on the spanning tree states that P(1) is the probability for the LERW started at i to reach the neighboring site j. By analogy with the self-avoiding walk, P(1) can be called the return probability. Extensive Monte Carlo simulations confirm the theoretical predictions.
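    The nearest-neighbour value P(1) = 5/16 can be checked numerically with Wilson's algorithm, which builds a uniform spanning tree from loop-erased random walks (the same LERW connection the abstract cites). The sketch below is a rough finite-lattice check with an arbitrarily placed root, so some finite-size and boundary bias is expected; it averages over the four neighbours of a central site to reduce directional bias:

        import random
        import networkx as nx

        def wilson_ust(G, root, rng):
            # Wilson's algorithm: returns parent pointers (the oriented bonds)
            # of a uniform spanning tree rooted at `root`.
            parent = {root: None}
            for start in G:
                if start in parent:
                    continue
                path = [start]
                while path[-1] not in parent:            # random walk until the tree is hit
                    path.append(rng.choice(list(G[path[-1]])))
                walk, seen = [], {}
                for node in path:                        # loop-erase the walk
                    if node in seen:
                        del walk[seen[node] + 1:]
                        seen = {n: k for k, n in enumerate(walk)}
                    else:
                        seen[node] = len(walk)
                        walk.append(node)
                for a, b in zip(walk, walk[1:]):         # graft the erased walk onto the tree
                    parent[a] = b
            return parent

        rng = random.Random(0)
        G = nx.grid_2d_graph(41, 41)
        root, i = (0, 0), (20, 20)
        neighbors = list(G[i])
        trials, hits = 1000, 0
        for _ in range(trials):
            parent = wilson_ust(G, root, rng)
            on_path, node = set(), parent[i]
            while node is not None:                      # the oriented path from i to the root
                on_path.add(node)
                node = parent[node]
            hits += sum(n in on_path for n in neighbors)
        print(hits / (4 * trials))                       # should approach 5/16 = 0.3125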

  4. Mitigating Spreadsheet Model Risk with Python Open Source Infrastructure

    OpenAIRE

    Beavers, Oliver

    2018-01-01

    Across an aggregation of EuSpRIG presentation papers, two maxims hold true: spreadsheet models are akin to software, yet spreadsheet developers are not software engineers. As such, the lack of traditional software engineering tools and protocols invites a higher rate of error in the end result. This paper lays the groundwork for spreadsheet modelling professionals to develop reproducible audit tools using freely available, open source packages built with the Python programming language, enablin...

  5. OSeMOSYS: The Open Source Energy Modeling System

    International Nuclear Information System (INIS)

    Howells, Mark; Rogner, Holger; Strachan, Neil; Heaps, Charles; Huntington, Hillard; Kypreos, Socrates; Hughes, Alison; Silveira, Semida; DeCarolis, Joe; Bazillian, Morgan; Roehrl, Alexander

    2011-01-01

    This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, algebraic formulation, and implementation, in terms of its full source code, as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. Other existing energy system models do not have this emphasis on compactness and openness, which makes the barrier to entry for new users much higher, as well as making the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts including adding functionality to the LEAP model. - Highlights: OSeMOSYS is a new free and open source energy systems model. It is written in a simple, open, flexible and transparent manner to support teaching. OSeMOSYS is based on free software and optimizes using a free solver. The model replicates the results of many popular tools, such as MARKAL. A link between OSeMOSYS and LEAP has been developed.
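    OSeMOSYS itself is formulated in GNU MathProg; as a toy of the underlying formulation (least-cost capacity expansion and dispatch as a linear program), the following sketch uses scipy. The technologies, costs, and the 35% wind availability factor are all invented:

        import numpy as np
        from scipy.optimize import linprog

        # Decision variables: [cap_wind, cap_gas, gen_wind, gen_gas]
        # Minimize capital cost of capacity plus running cost of generation.
        cost = np.array([70.0, 40.0, 0.0, 30.0])

        # Generation cannot exceed available capacity (wind derated to 35%).
        A_ub = np.array([
            [-0.35, 0.0, 1.0, 0.0],   # gen_wind <= 0.35 * cap_wind
            [0.0,  -1.0, 0.0, 1.0],   # gen_gas  <= cap_gas
        ])
        b_ub = np.zeros(2)

        # Demand of 100 units must be met exactly.
        A_eq = np.array([[0.0, 0.0, 1.0, 1.0]])
        b_eq = np.array([100.0])

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
        print(res.x)   # least-cost capacities and dispatch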

  6. MODEL OF A PERSONWALKING AS A STRUCTURE BORNE SOUND SOURCE

    DEFF Research Database (Denmark)

    Lievens, Matthias; Brunskog, Jonas

    2007-01-01

    has to be considered and the contact history must be integrated in the model. This is complicated by the fact that nonlinearities occur at different stages in the system, either on the source or receiver side. Not only lightweight structures but also soft floor coverings would benefit from an accurate...

  7. Modeling Noise Sources and Propagation in External Gear Pumps

    Directory of Open Access Journals (Sweden)

    Sangbeom Woo

    2017-07-01

    As a key component in power transfer, positive displacement machines often represent the major source of noise in hydraulic systems. Thus, investigation into the sources of noise and discovering strategies to reduce noise is a key part of improving the performance of current hydraulic systems, as well as applying fluid power systems to a wider range of applications. The present work aims at developing modeling techniques on the topic of noise generation caused by external gear pumps for high pressure applications, which can be useful and effective in investigating the interaction between noise sources and radiated noise and establishing the design guide for a quiet pump. In particular, this study classifies the internal noise sources into four types of effective load functions and, in the proposed model, these load functions are applied to the corresponding areas of the pump case in a realistic way. Vibration and sound radiation can then be predicted using a combined finite element and boundary element vibro-acoustic model. The radiated sound power and sound pressure for the different operating conditions are presented as the main outcomes of the acoustic model. The noise prediction was validated through comparison with the experimentally measured sound power levels.

  8. Modeling of an autonomous microgrid for renewable energy sources integration

    DEFF Research Database (Denmark)

    Serban, I.; Teodorescu, Remus; Guerrero, Josep M.

    2009-01-01

    The frequency stability analysis in an autonomous microgrid (MG) with renewable energy sources (RES) is a continuously studied issue. This paper presents an original method for modeling an autonomous MG with a battery energy storage system (BESS) and a wind power plant (WPP), with the purpose...

  9. Got yoga?: A longitudinal analysis of thematic content and models' appearance-related attributes in advertisements spanning four decades of Yoga Journal.

    Science.gov (United States)

    Vinoski, Erin; Webb, Jennifer B; Warren-Findlow, Jan; Brewer, Kirstyn A; Kiffmeyer, Katheryn A

    2017-06-01

    Yoga has become an increasingly common health practice among U.S. adults over the past decade. With this growth in popularity, yoga-related print media have been criticized for shifting away from yoga's traditional philosophies and promoting a thin, lean ideal physique representing the "yoga body." The purpose of this study was to (a) analyze the presence and content of advertisements over the 40-year publication history of Yoga Journal magazine and (b) explore female advertisement models' socio-demographic and appearance-related attributes over time. Results suggested that Yoga Journal now contains significantly more advertisements for food, nutritional supplements, and apparel and fewer advertisements for meditation and nutritional practices than in its early years of publication. Models were more frequently rated as White and in their 20s and 30s in recent years of publication. Trends in model body size matched shifts in culturally dominant body ideals over time. Implications and future research directions are considered. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    Science.gov (United States)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments were conducted on evaporated fuel, which served as surrogates for real-world combustion emissions. Diesel formed the most SOA, followed by conventional jet fuel / jet fuel derived from natural gas, gasoline and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested along with existing models using SOA data from smog chamber experiments conducted using evaporated fuel (this work: gasoline, Fischer-Tropsch fuels, jet fuel, diesel) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations. Further, a volatility

  11. Source modelling in seismic risk analysis for nuclear power plants

    International Nuclear Information System (INIS)

    Yucemen, M.S.

    1978-12-01

    The proposed probabilistic procedure provides a consistent method for the modelling, analysis and updating of uncertainties that are involved in the seismic risk analysis for nuclear power plants. The potential earthquake activity zones are idealized as point, line or area sources. For these seismic source types, expressions to evaluate their contribution to seismic risk are derived, considering all the possible site-source configurations. The seismic risk at a site is found to depend not only on the inherent randomness of the earthquake occurrences with respect to magnitude, time and space, but also on the uncertainties associated with the predicted values of the seismic and geometric parameters, as well as the uncertainty in the attenuation model. The uncertainty due to the attenuation equation is incorporated into the analysis through the use of random correction factors. The influence of the uncertainty resulting from the insufficient information on the seismic parameters and source geometry is introduced into the analysis by computing a mean risk curve averaged over the various alternative assumptions on the parameters and source geometry. Seismic risk analysis is carried out for the city of Denizli, which is located in the seismically most active zone of Turkey. The second analysis is for Akkuyu.
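    For a single point source, the risk contribution reduces to integrating the probability of exceedance, given by the attenuation model with its random scatter, over the magnitude distribution. A hedged sketch with a truncated Gutenberg-Richter magnitude law and an entirely invented log-linear attenuation relation:

        import numpy as np
        from scipy.stats import norm

        def annual_exceedance(a_target, dist_km, rate, b=1.0, m_min=4.0, m_max=7.5):
            # Truncated-exponential (Gutenberg-Richter) magnitude density
            m = np.linspace(m_min, m_max, 400)
            beta = b * np.log(10.0)
            pdf = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
            # Toy attenuation: ln a = -1.0 + 1.2 m - 1.1 ln(R + 10), sigma = 0.6
            ln_a_med = -1.0 + 1.2 * m - 1.1 * np.log(dist_km + 10.0)
            p_exc = norm.sf((np.log(a_target) - ln_a_med) / 0.6)   # attenuation scatter
            lam = rate * np.sum(p_exc * pdf) * (m[1] - m[0])       # exceedance rate (1/yr)
            return 1.0 - np.exp(-lam)                              # Poisson occurrences

        print(annual_exceedance(a_target=0.2, dist_km=30.0, rate=0.5))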

  12. Race of source effects in the elaboration likelihood model.

    Science.gov (United States)

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.

  13. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
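    Mixed ICA/PCA itself is not in standard toolkits, but the paper's practical conclusion (prefer cross-validation to the Akaike Information Criterion for selecting the model order) is easy to demonstrate with scikit-learn, whose PCA.score returns the average log-likelihood of a probabilistic PCA fit. The sketch below cross-validates that likelihood on Fisher's iris data as a stand-in for selecting the dimension of the retained subspace:

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score

        X = load_iris().data
        X = X - X.mean(axis=0)

        scores = []
        for k in range(1, X.shape[1] + 1):
            # PCA.score is the average log-likelihood under probabilistic PCA,
            # so cross-validating it selects the model order.
            cv = cross_val_score(PCA(n_components=k), X, cv=5)
            scores.append(cv.mean())

        best = int(np.argmax(scores)) + 1
        print(best, scores)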

  14. Absorptivity Measurements and Heat Source Modeling to Simulate Laser Cladding

    Science.gov (United States)

    Wirth, Florian; Eisenbarth, Daniel; Wegener, Konrad

    The laser cladding process gains importance, as it not only allows the application of surface coatings but also the additive manufacturing of three-dimensional parts. In both cases, process simulation can contribute to process optimization. Heat source modeling is one of the main issues for an accurate model and simulation of the laser cladding process. While the laser beam intensity distribution is readily known, the other two main effects on the process's heat input are non-trivial: namely, the absorptivity of the applied materials and the attenuation of the beam by the powder stream. Therefore, calorimetry measurements were carried out. The measurement method and the measurement results for laser cladding of Stellite 6 on structural steel S 235 and for the processing of Inconel 625 are presented, both using a CO2 laser and a high power diode laser (HPDL). Additionally, a heat source model is deduced.
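    The "readily known" beam intensity is commonly taken as Gaussian; folding in a measured absorptivity and a powder attenuation factor then gives the absorbed surface heat flux. A sketch with invented numbers, not the authors' calibrated model:

        import numpy as np

        def surface_heat_flux(x, y, power_w, beam_radius_m, absorptivity, attenuation):
            # Absorbed surface heat flux q(x, y) in W/m^2 for a Gaussian beam.
            # absorptivity: fraction absorbed by the melt pool (e.g. from calorimetry)
            # attenuation:  fraction of the beam lost to the powder stream
            peak = 2.0 * power_w / (np.pi * beam_radius_m**2)
            gauss = np.exp(-2.0 * (x**2 + y**2) / beam_radius_m**2)
            return absorptivity * (1.0 - attenuation) * peak * gauss

        x = y = np.linspace(-3e-3, 3e-3, 101)
        X, Y = np.meshgrid(x, y)
        q = surface_heat_flux(X, Y, power_w=2000.0, beam_radius_m=1.5e-3,
                              absorptivity=0.35, attenuation=0.1)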

  15. Diffusion theory model for optimization calculations of cold neutron sources

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Cold neutron sources are becoming increasingly important and common experimental facilities made available at many research reactors around the world due to the high utility of cold neutrons in scattering experiments. The authors describe a simple two-group diffusion model of an infinite-slab LD2 cold source. The simplicity of the model makes it possible to obtain an analytical solution from which one can deduce the reason for the optimum thickness based solely on diffusion-type phenomena. Also, a second, more sophisticated model is described and its results are compared to a deterministic transport calculation. The good (particularly qualitative) agreement between the results suggests that diffusion theory methods can be used in parametric and optimization studies to avoid the generally more expensive transport calculations.
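    A hedged numerical counterpart to the analytical slab model: two-group diffusion with a uniform fast source slowing down into the cold group, discretized by finite differences with zero-flux boundaries. All group constants are invented; the point is only that scanning the slab thickness exposes the thickness dependence the abstract describes:

        import numpy as np

        def two_group_slab(thickness_cm, n=200, D1=1.2, D2=0.9, sig_r=0.02, sig_a=0.01, S=1.0):
            # -D1 phi1'' + sig_r phi1 = S            (fast group, uniform source)
            # -D2 phi2'' + sig_a phi2 = sig_r phi1   (cold group, fed by slowing down)
            h = thickness_cm / (n + 1)
            lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                   + np.diag(np.ones(n - 1), -1)) / h**2   # Dirichlet Laplacian
            phi1 = np.linalg.solve(-D1 * lap + sig_r * np.eye(n), S * np.ones(n))
            phi2 = np.linalg.solve(-D2 * lap + sig_a * np.eye(n), sig_r * phi1)
            return phi2.mean()          # crude proxy for cold-neutron production

        for t in (5, 10, 20, 40):
            print(t, two_group_slab(t))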

  16. Residential radon in Finland: sources, variation, modelling and dose comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Arvela, H

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.).

  17. Residential radon in Finland: sources, variation, modelling and dose comparisons

    International Nuclear Information System (INIS)

    Arvela, H.

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.)

  18. Dynamic modeling of the advanced neutron source reactor

    International Nuclear Information System (INIS)

    March-Leuba, J.; Ibn-Khayat, M.

    1990-01-01

    The purpose of this paper is to provide a summary description and some applications of a computer model that has been developed to simulate the dynamic behavior of the advanced neutron source (ANS) reactor. The ANS dynamic model is coded in the advanced continuous simulation language (ACSL), and it represents the reactor core, vessel, primary cooling system, and secondary cooling systems. The use of a simple dynamic model in the early stages of the reactor design has proven very valuable, not only in the development of the control and plant protection system but also in the design of components, such as pumps and heat exchangers, that are usually sized based on steady-state calculations.
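    ACSL is a commercial continuous-simulation language; the same style of model translates directly to a general-purpose ODE integrator. As the simplest hedged stand-in for the core block of such a plant model, here is one-delayed-group point kinetics in Python (all constants illustrative, not ANS design data):

        import numpy as np
        from scipy.integrate import solve_ivp

        BETA, LAMBDA, GEN_TIME = 0.0065, 0.08, 4e-5    # illustrative kinetics constants

        def point_kinetics(t, y, rho):
            n, c = y                                   # neutron density, precursor conc.
            dn = (rho(t) - BETA) / GEN_TIME * n + LAMBDA * c
            dc = BETA / GEN_TIME * n - LAMBDA * c
            return [dn, dc]

        rho = lambda t: 0.001 if t > 1.0 else 0.0      # small reactivity step at t = 1 s
        y0 = [1.0, BETA / (GEN_TIME * LAMBDA)]         # steady-state initial condition
        sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, args=(rho,), max_step=0.01)
        print(sol.y[0, -1])                            # relative power after the transient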

  19. Mapping the developmental constraints on working memory span performance.

    Science.gov (United States)

    Bayliss, Donna M; Jarrold, Christopher; Baddeley, Alan D; Gunn, Deborah M; Leigh, Eleanor

    2005-07-01

    This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related variance in complex span. Results showed that developmental improvements in complex span were driven by 2 age-related but separable factors: 1 associated with general speed of processing and 1 associated with storage ability. In addition, there was an age-related contribution shared between working memory, processing speed, and storage ability that was important for higher level cognition. These results pose a challenge for models of complex span performance that emphasize the importance of processing speed alone.

  20. Extended attention span training system

    Science.gov (United States)

    Pope, Alan T.; Bogart, Edward H.

    1991-01-01

    Attention Deficit Disorder (ADD) is a behavioral disorder characterized by the inability to sustain attention long enough to perform activities such as schoolwork or organized play. Treatments for this disorder include medication and brainwave biofeedback training. Brainwave biofeedback training systems feed back information to the trainee showing him how well he is producing the brainwave pattern that indicates attention. The Extended Attention Span Training (EAST) system takes the concept a step further by making a video game more difficult as the player's brainwaves indicate that attention is waning. The trainee can succeed at the game only by maintaining an adequate level of attention. The EAST system is a modification of a biocybernetic system that is currently being used to assess the extent to which automated flight management systems maintain pilot engagement. This biocybernetic system is a product of a program aimed at developing methods to evaluate automated flight deck designs for compatibility with human capabilities. The EAST technology can make a contribution in the fields of medical neuropsychology and neurology, where the emphasis is on cautious, conservative treatment of youngsters with attention disorders.

  1. Phonological similarity effect in complex span task.

    Science.gov (United States)

    Camos, Valérie; Mora, Gérôme; Barrouillet, Pierre

    2013-01-01

    The aim of our study was to test the hypothesis that two systems are involved in verbal working memory; one is specifically dedicated to the maintenance of phonological representations through verbal rehearsal while the other would maintain multimodal representations through attentional refreshing. This theoretical framework predicts that phonologically related phenomena such as the phonological similarity effect (PSE) should occur when the domain-specific system is involved in maintenance, but should disappear when concurrent articulation hinders its use. Impeding maintenance in the domain-general system by a concurrent attentional demand should impair recall performance without affecting PSE. In three experiments, we manipulated the concurrent articulation and the attentional demand induced by the processing component of complex span tasks in which participants had to maintain lists of either similar or dissimilar words. Confirming our predictions, PSE affected recall performance in complex span tasks. Although both the attentional demand and the articulatory requirement of the concurrent task impaired recall, only the induction of an articulatory suppression during maintenance made the PSE disappear. These results suggest a duality in the systems devoted to verbal maintenance in the short term, constraining models of working memory.

  2. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Important features of the electron cyclotron resonance ion source (ECRIS) operation are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in ECRIS plasma. It is shown that a gas dynamical ion confinement mechanism is sufficient to provide ion production rates in ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to the experiment for a few sources. Changes in the simulated extracted ion currents are obtained with varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  3. Mathematical modelling of electricity market with renewable energy sources

    International Nuclear Information System (INIS)

    Marchenko, O.V.

    2007-01-01

    The paper addresses the electricity market with conventional energy sources on fossil fuel and non-conventional renewable energy sources (RESs) with stochastic operating conditions. A mathematical model of long-run equilibrium in the market (accounting for the development of generation capacities) is constructed. The problem of determining the optimal parameters providing the maximum social criterion of efficiency is also formulated. The calculations performed have shown that an adequate choice of price cap, environmental tax, subsidies to RESs and consumption tax makes it possible to take into account external effects (environmental damage) and to create incentives for investors to construct conventional and renewable energy sources in an optimal (from the society's viewpoint) mix. (author)

  4. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    Directory of Open Access Journals (Sweden)

    T. U. R. Khan

    2016-06-01

    The geospatial industry is forecasted to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in the political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, and ASPRS, and by software vendors, e.g., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US American oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and influenced essentially the framework of certification. In addition to the theoretical analysis of existing resources the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and

  5. a Framework for AN Open Source Geospatial Certification Model

    Science.gov (United States)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecasted to have enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in the political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, and ASPRS, and by software vendors, e.g., Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, i.e., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US American oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and influenced essentially the framework of certification. In addition to the theoretical analysis of existing resources the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and evaluated with 105

  6. Modeling a Hypothetical 170Tm Source for Brachytherapy Applications

    International Nuclear Information System (INIS)

    Enger, Shirin A.; D'Amours, Michel; Beaulieu, Luc

    2011-01-01

    Purpose: To perform absorbed dose calculations based on Monte Carlo simulations for a hypothetical ¹⁷⁰Tm source and to investigate the influence of encapsulating material on the energy spectrum of the emitted electrons and photons. Methods: The GEANT4 Monte Carlo code, version 9.2 patch 2, was used to simulate the decay process of ¹⁷⁰Tm and to calculate the absorbed dose distribution using the GEANT4 Penelope physics models. A hypothetical ¹⁷⁰Tm source based on the Flexisource brachytherapy design, with the active core set as a pure thulium cylinder (length 3.5 mm, diameter 0.6 mm) and different cylindrical source encapsulations (length 5 mm, thickness 0.125 mm) constructed of titanium, stainless steel, gold, or platinum, was simulated. The radial dose function for the line source approximation was calculated following the TG-43U1 formalism for the stainless-steel encapsulation. Results: For the titanium and stainless-steel encapsulations, 94% of the total bremsstrahlung is produced inside the core, 4.8 and 5.5% in the titanium and stainless-steel capsules, respectively, and less than 1% in water. For the gold capsule, 85% is produced inside the core, 14.2% inside the gold capsule, and a negligible amount in water. The ¹⁷⁰Tm source is primarily a bremsstrahlung source, with the majority of bremsstrahlung photons being generated in the source core and experiencing little attenuation in the source encapsulation. Electrons are efficiently absorbed by the gold and platinum encapsulations. However, for the stainless-steel capsule (or other lower-Z encapsulations) electrons will escape. The dose from these electrons dominates the photon dose in the first few millimeters but is not taken into account by current standard treatment planning systems. The total energy spectrum of photons emerging from the source depends on the encapsulation composition and results in mean photon energies well above 100 keV, higher than the main gamma-ray energy peak at 84 keV. Based on our
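The TG-43U1 radial dose function mentioned in this record can be computed directly from transverse-axis dose tallies using the line-source geometry function. The sketch below is a minimal illustration of that formalism, assuming the 3.5 mm active length from the abstract; the `dose` lookup is an invented stand-in for Monte Carlo dose rates, not part of any published code.

```python
import numpy as np

def beta_angle(r, theta, L):
    # Angle subtended at the point (r, theta) by the active line source of length L
    x, z = r * np.sin(theta), r * np.cos(theta)
    u1 = np.array([-x, L / 2.0 - z])
    u2 = np.array([-x, -L / 2.0 - z])
    cos_b = u1 @ u2 / (np.linalg.norm(u1) * np.linalg.norm(u2))
    return np.arccos(np.clip(cos_b, -1.0, 1.0))

def G_L(r, theta, L=0.35):
    """TG-43U1 line-source geometry function (cm^-2); L = 0.35 cm active length."""
    if np.isclose(np.sin(theta), 0.0):
        return 1.0 / (r**2 - L**2 / 4.0)
    return beta_angle(r, theta, L) / (L * r * np.sin(theta))

def radial_dose_function(r, dose_transverse, r0=1.0, L=0.35):
    """g_L(r) from transverse-axis dose rates, normalized at r0 = 1 cm."""
    th = np.pi / 2.0
    return (dose_transverse(r) / dose_transverse(r0)) * (G_L(r0, th, L) / G_L(r, th, L))

# Hypothetical stand-in for simulated transverse-axis dose rates
dose = lambda r: np.exp(-0.12 * r) / r**2
print(round(radial_dose_function(2.0, dose), 4))
```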

  7. Open Sourcing Social Change: Inside the Constellation Model

    Directory of Open Access Journals (Sweden)

    Tonya Surman

    2008-09-01

    Full Text Available The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a partnership. These constellations are outwardly focused, placing their attention on creating value for those in the external environment rather than on the partnership itself. While serious effort is invested into core partnership governance and management, most of the energy is devoted to the decision making, resources and collaborative effort required to create social value. The constellations drive and define the partnership. The constellation model emerged from a deep understanding of the power of networks and peer production. Leadership rotates fluidly amongst partners, with each partner having the freedom to head up a constellation and to participate in constellations that carry out activities that are of more peripheral interest. The Internet provided the platform, the partner network enabled the expertise to align itself, and the goal of reducing chemical exposure in children kept the energy flowing. Building on seven years of experience, this article provides an overview of the constellation model, discusses the results from the CPCHE, and identifies similarities and differences between the constellation and open source models.

  8. Model of the Sgr B2 radio source

    International Nuclear Information System (INIS)

    Gosachinskij, I.V.; Khersonskij, V.K.

    1981-01-01

    A dynamical model of the gas cloud around the radio source Sagittarius B2 is suggested. This model describes the kinematic features of the gas in this source: contraction of the core and rotation of the envelope. The stability of the cloud at the initial stage is supported by turbulent motion of the gas, whose energy dissipates due to magnetic viscosity. This process occurs more rapidly in the dense core, so the core begins to collapse while the envelope remains stable. The parameters of the primary cloud and some parameters (mass, density, and size) of the collapsing core are calculated. The conditions in the core at the moment of its fragmentation into masses of stellar order are established [ru

  9. Nitrate source apportionment in a subtropical watershed using Bayesian model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Shi, Jiachun, E-mail: jcshi@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Wu, Laosheng, E-mail: laowu@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Jiang, Yonghai [State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing, 100012 (China)

    2013-10-01

    Nitrate (NO₃⁻) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO₃⁻ concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L⁻¹) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L⁻¹). Nevertheless, no water sample in the study area exceeded the WHO drinking water limit of 50 mg L⁻¹ NO₃⁻. Four sources of NO₃⁻ (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl⁻, NO₃⁻, HCO₃⁻, SO₄²⁻, Ca²⁺, K⁺, Mg²⁺, Na⁺, dissolved oxygen (DO)] and a dual isotope approach (δ¹⁵N–NO₃⁻ and δ¹⁸O–NO₃⁻). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet season; AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO₃⁻ to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO₃⁻, better

  10. Nitrate source apportionment in a subtropical watershed using Bayesian model

    International Nuclear Information System (INIS)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao; Shi, Jiachun; Wu, Laosheng; Jiang, Yonghai

    2013-01-01

    Nitrate (NO₃⁻) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO₃⁻ concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L⁻¹) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L⁻¹). Nevertheless, no water sample in the study area exceeded the WHO drinking water limit of 50 mg L⁻¹ NO₃⁻. Four sources of NO₃⁻ (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl⁻, NO₃⁻, HCO₃⁻, SO₄²⁻, Ca²⁺, K⁺, Mg²⁺, Na⁺, dissolved oxygen (DO)] and a dual isotope approach (δ¹⁵N–NO₃⁻ and δ¹⁸O–NO₃⁻). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet season; AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO₃⁻ to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO₃⁻, better agricultural management practices and sewage disposal programs can be implemented to sustain water quality in subtropical watersheds
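As a rough illustration of the SIAR-style Bayesian un-mixing described in these two records, the sketch below infers contributions of four sources from dual-isotope measurements, using a uniform Dirichlet prior and an independence Metropolis sampler. All source signatures and observations are made-up numbers, and the model is far simpler than SIAR itself (no fractionation terms, fixed residual structure).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical source signatures: mean (d15N, d18O) and SDs per source
src_mean = np.array([[ 2.0, 55.0],   # atmospheric deposition (AD)
                     [ 5.0,  3.0],   # soil N (SN)
                     [ 0.5, -5.0],   # synthetic fertilizer (SF)
                     [12.0,  2.0]])  # manure and sewage (M and S)
src_sd = np.array([[1.0, 5.0], [1.5, 2.0], [1.0, 2.0], [2.0, 2.0]])
obs = np.array([[8.0, 4.0], [9.5, 5.5], [7.2, 3.1]])  # mixture measurements

def log_like(p):
    mu = p @ src_mean            # mixture mean signature
    var = (p**2) @ (src_sd**2)   # mixture variance from source spread
    return -0.5 * np.sum((obs - mu)**2 / var + np.log(2 * np.pi * var))

# Independence Metropolis: Dirichlet(1,1,1,1) proposals are uniform on the
# simplex, so the acceptance ratio reduces to the likelihood ratio.
p_cur = rng.dirichlet(np.ones(4))
ll_cur = log_like(p_cur)
samples = []
for _ in range(20000):
    p_new = rng.dirichlet(np.ones(4))
    ll_new = log_like(p_new)
    if np.log(rng.random()) < ll_new - ll_cur:
        p_cur, ll_cur = p_new, ll_new
    samples.append(p_cur)
print("posterior mean contributions:", np.round(np.mean(samples, axis=0), 3))
```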

  11. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes, and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional as well as at a system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified

  12. Receptor models for source apportionment of remote aerosols in Brazil

    International Nuclear Information System (INIS)

    Artaxo Netto, P.E.

    1985-11-01

    The PIXE (particle-induced X-ray emission) and PESA (proton elastic scattering analysis) methods were used in conjunction with receptor models for source apportionment of remote aerosols in Brazil. PIXE, used to determine concentrations of elements with Z ≥ 11, has a detection limit of about 1 ng/m³. The concentrations of carbon, nitrogen, and oxygen in the fine fraction of Amazon Basin aerosols were measured by PESA. We sampled in Jureia (SP), Fernando de Noronha, Arembepe (BA), Firminopolis (GO), Itaberai (GO), and the Amazon Basin. For collecting the airborne particles we used cascade impactors, stacked filter units, and streaker samplers. Three receptor models were used: chemical mass balance, stepwise multiple regression analysis, and principal factor analysis. The elemental and gravimetric concentrations were explained by the models within the experimental errors. Three sources of aerosol were quantitatively distinguished: marine aerosol, soil dust, and aerosols related to forests. The emission of aerosols by vegetation is very clear for all the sampling sites. In the Amazon Basin and Jureia it is the major source, responsible for 60 to 80% of airborne concentrations. (Author) [pt
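The chemical mass balance model named in this record expresses measured ambient concentrations as a linear combination of source profiles and solves for non-negative source contributions. A minimal sketch under invented profiles and concentrations (the element fractions below are placeholders, not the study's data):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical source profiles: fraction of each element in emissions from
# marine aerosol, soil dust, and vegetation-related aerosol (columns)
F = np.array([[0.30, 0.01, 0.02],   # Na
              [0.02, 0.25, 0.01],   # Si
              [0.01, 0.08, 0.03],   # Fe
              [0.04, 0.02, 0.20]])  # K
c = np.array([1.8, 2.1, 0.7, 1.5])  # measured ambient concentrations (ug/m3)

# CMB: find nonnegative source contributions s minimizing ||F s - c||
s, resid = nnls(F, c)
print("source contributions (ug/m3):", np.round(s, 3), "residual:", round(resid, 3))
print("reconstructed concentrations:", np.round(F @ s, 3))
```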

  13. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
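The inverse transform sampling step described here draws new particles from an empirical distribution by inverting its cumulative distribution function. A minimal sketch, assuming hypothetical PSF energies in place of an actual phase space file:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_inverse_cdf_sampler(values, n_bins=256):
    """Build an inverse-transform sampler from PSF samples (e.g., photon energies)."""
    hist, edges = np.histogram(values, bins=n_bins, density=True)
    cdf = np.cumsum(hist * np.diff(edges))
    cdf = np.concatenate([[0.0], cdf / cdf[-1]])  # normalized empirical CDF
    def sample(n):
        u = rng.random(n)
        return np.interp(u, cdf, edges)  # invert the CDF: u -> energy
    return sample

# Hypothetical PSF energies (MeV), standing in for data read from the file
psf_energies = rng.gamma(shape=2.0, scale=0.8, size=100_000)
sample_energy = make_inverse_cdf_sampler(psf_energies)
print(np.round(sample_energy(5), 3))
```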

  14. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Directory of Open Access Journals (Sweden)

    Obioma Nwankwo

    Full Text Available To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.

  15. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative in cases where the non-Gaussianity of sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
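The AIC-versus-cross-validation comparison at the heart of this record can be illustrated on a much simpler stand-in problem. The sketch below selects the number of components of a Gaussian mixture (not the mixed ICA/PCA model itself) by comparing AIC with cross-validated log-likelihood; the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
# Synthetic data: two Gaussian clusters in 2D
X = np.vstack([rng.normal(-2.0, 1.0, (300, 2)), rng.normal(3.0, 1.5, (300, 2))])

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    aic = gm.aic(X)  # lower is better
    cv_ll = []
    for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        # Mean held-out log-likelihood; higher is better
        cv_ll.append(GaussianMixture(n_components=k, random_state=0)
                     .fit(X[tr]).score(X[te]))
    print(f"k={k}: AIC={aic:.1f}, CV log-likelihood={np.mean(cv_ll):.3f}")
```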

  16. Receptor Model Source Apportionment of Nonmethane Hydrocarbons in Mexico City

    Directory of Open Access Journals (Sweden)

    V. Mugica

    2002-01-01

    Full Text Available With the purpose of estimating the source contributions of nonmethane hydrocarbons (NMHC) to the atmosphere at three different sites in the Mexico City Metropolitan Area, 92 ambient air samples were measured from February 23 to March 22 of 1997. Light- and heavy-duty vehicular profiles were determined to differentiate the NMHC contribution of diesel and gasoline to the atmosphere. Food cooking source profiles were also determined for chemical mass balance receptor model application. Initial source contribution estimates were carried out to determine the adequate combination of source profiles and fitting species. Ambient samples of NMHC were apportioned to motor vehicle exhaust, gasoline vapor, handling and distribution of liquefied petroleum gas (LP gas), asphalt operations, painting operations, landfills, and food cooking. Both gasoline and diesel motor vehicle exhaust were the major NMHC contributors for all sites and times, with a percentage of up to 75%. The average motor vehicle exhaust contributions increased during the day. In contrast, LP gas contribution was higher during the morning than in the afternoon. Apportionment for the most abundant individual NMHC showed that the vehicular source is the major contributor to acetylene, ethylene, pentanes, n-hexane, toluene, and xylenes, while handling and distribution of LP gas was the major source contributor to propane and butanes. Comparison between CMB estimates of NMHC and the emission inventory showed good agreement for vehicles, handling and distribution of LP gas, and painting operations; nevertheless, emissions from diesel exhaust and asphalt operations showed differences, and the results suggest that these emissions could be underestimated.

  17. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and electrode positions.

  18. Spanning forests and the vector bundle Laplacian

    OpenAIRE

    Kenyon, Richard

    2011-01-01

    The classical matrix-tree theorem relates the determinant of the combinatorial Laplacian on a graph to the number of spanning trees. We generalize this result to Laplacians on one- and two-dimensional vector bundles, giving a combinatorial interpretation of their determinants in terms of so-called cycle rooted spanning forests (CRSFs). We construct natural measures on CRSFs for which the edges form a determinantal process. This theory gives a natural generalization of the spanning tree...
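The classical matrix-tree theorem that this record generalizes admits a short worked example: the number of spanning trees equals any cofactor of the combinatorial Laplacian L = D − A. A minimal sketch on the complete graph K4:

```python
import numpy as np

# Adjacency matrix of K4 (every pair of the 4 vertices is connected)
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])
L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian D - A

# Matrix-tree theorem: delete any row and the matching column, take the determinant
n_trees = round(np.linalg.det(L[1:, 1:]))
print(n_trees)  # 16, matching Cayley's formula 4^(4-2) for K4
```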

  19. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Energy Technology Data Exchange (ETDEWEB)

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    The Japanese coastal areas are still environmentally friendly, though there are multiple air emission sources originating from several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region, apart from SOx, NOx, CO, and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 µg/m³. Long-term exposure to mercury and its compounds can have carcinogenic effects and can induce, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, together with the background concentration and one-year meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Industrial Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER), which estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT), which estimates the atmospheric

  20. Radiation effects on life span in Caenorhabditis elegans

    International Nuclear Information System (INIS)

    Johnson, T.E.; Hartman, P.S.

    1988-01-01

    Wild-type and radiation-sensitive (Rad) mutants of Caenorhabditis elegans were irradiated using a ¹³⁷Cs source (2.7 krads/min) at several developmental stages and subsequently monitored for life span. Acute doses of radiation ranged from 1 krad to 300 krads. All stages required doses above 100 krads to reduce mean life span. Dauers and third-stage larvae were more sensitive, and 8-day-old adults were the most resistant. Occasional statistically significant but nonrepeatable increases in survival were observed after intermediate levels of irradiation (10-30 krads). Unirradiated rad-4 and rad-7 had life spans similar to wild-type; all others had a significant reduction in survival. The mutants were about as sensitive as wild-type to the effects of ionizing radiation, including occasional moderate life span extensions at intermediate doses. We conclude that the moderate life span extensions sometimes observed after irradiation are likely to be mediated by a means other than the induction of DNA repair enzymes

  1. Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhen [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFranchi, Brian W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ivey, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Schrader, Paul E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Michelsen, Hope A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bambha, Ray P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

    In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution, with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches of atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use the Eulerian chemical transport model CMAQ and a Lagrangian particle dispersion model, FLEXPART-WRF. These two models share the same WRF
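The bias-enhanced Bayesian inversion idea can be sketched in a few lines: infer source strengths and an additive transport bias jointly from observations linked to emissions through a linear sensitivity matrix. Everything below is invented toy data, not the project's CMAQ/FLEXPART-WRF system; the Metropolis sampler stands in for the more sophisticated machinery described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical linear transport: observed enhancement = H @ emissions + bias + noise
H = rng.uniform(0.1, 1.0, size=(50, 3))        # footprint/sensitivity matrix
true_E, true_b = np.array([2.0, 0.5, 1.2]), 0.3
y = H @ true_E + true_b + rng.normal(0, 0.1, 50)

def log_post(E, b):
    if np.any(E < 0):                          # nonnegative emissions
        return -np.inf
    resid = y - (H @ E + b)
    # Gaussian likelihood (sigma = 0.1) plus a weak Gaussian prior on the bias
    return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * b**2

E, b = np.ones(3), 0.0
lp = log_post(E, b)
chain = []
for _ in range(30000):                          # random-walk Metropolis
    E_n, b_n = E + rng.normal(0, 0.05, 3), b + rng.normal(0, 0.02)
    lp_n = log_post(E_n, b_n)
    if np.log(rng.random()) < lp_n - lp:
        E, b, lp = E_n, b_n, lp_n
    chain.append(np.append(E, b))
chain = np.array(chain[5000:])                  # discard burn-in
print("posterior means (E1, E2, E3, bias):", np.round(chain.mean(axis=0), 3))
```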

  2. Robustness Analysis of Big Span Glulam Truss Structure

    DEFF Research Database (Denmark)

    Rajčié, V.; čizmar, D.; Kirkegaard, Poul Henning

    2010-01-01

    (Eurocode 0 & 1, Probabilistic Model Code, etc.) Based on a big-span glulam truss structure built in Croatia a few years ago, a probabilistic model is made with four failure elements. Reliability analysis of the components is conducted, and based on this a robustness analysis is performed. It can

  3. Modeling of low pressure plasma sources for microelectronics fabrication

    International Nuclear Information System (INIS)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Rauf, Shahid; Likhanskii, Alexandre

    2017-01-01

    Chemically reactive plasmas operating in the 1 mTorr–10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size on the formation of localized hot-spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron beam generated magnetized plasma is used to understand the role of reactor geometry on plasma uniformity in the presence of E  ×  B drift. (paper)

  4. Modeling of low pressure plasma sources for microelectronics fabrication

    Science.gov (United States)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Likhanskii, Alexandre; Rauf, Shahid

    2017-10-01

    Chemically reactive plasmas operating in the 1 mTorr-10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size on the formation of localized hot-spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron beam generated magnetized plasma is used to understand the role of reactor geometry on plasma uniformity in the presence of E  ×  B drift.

  5. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs

  6. Particle model of a cylindrical inductively coupled ion source

    Science.gov (United States)

    Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.

    2017-08-01

    In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF coupling of the plasma is still lacking, so self-consistent simulations of the involved physics are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is currently about seven times larger than the average Debye length because of the large computational demand of the code; it will be scaled down in the next phase of development. The filling gas is xenon, in order to minimize the time spent in the MCC collision module during this first stage of development. The results presented here are preliminary, with the code already showing good robustness. The final goal will be the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.

  7. A theoretical model of a liquid metal ion source

    International Nuclear Information System (INIS)

    Kingham, D.R.; Swanson, L.W.

    1984-01-01

    A model of liquid metal ion source (LMIS) operation has been developed which gives a consistent picture of three different aspects of LMI sources: (i) the shape and size of the ion emitting region; (ii) the mechanism of ion formation; (iii) properties of the ion beam such as angular intensity and energy spread. It was found that the emitting region takes the shape of a jet-like protrusion on the end of a Taylor cone, with ion emission from an area only a few tens of Å across, in agreement with recent TEM pictures by Sudraud. This is consistent with ion formation predominantly by field evaporation. Calculated angular intensities and current-voltage characteristics based on our fluid-dynamic jet-like protrusion model agree well with experiment. The formation of doubly charged ions is attributed to post-ionization of field-evaporated singly charged ions, and an apex field strength of about 2.0 V Å⁻¹ was calculated for a Ga source. The ion energy spread is mainly due to space charge effects; it is known to be reduced for doubly charged ions, in agreement with this post-ionization mechanism. (author)

  8. Extended gamma sources modelling using multipole expansion: Application to the Tunisian gamma source load planning

    International Nuclear Information System (INIS)

    Loussaief, Abdelkader

    2007-01-01

    In this work we extend the use of the multipole moment expansion to the case of inner radiation fields. A series expansion of the photon flux was established. The main advantage of this approach is that it offers the opportunity to treat both inner and external radiation field cases. We determined the expression of the inner multipole moments both in spherical harmonics and in Cartesian coordinates. As an application, we applied the analytical model to a radiation facility used for small-target irradiation. Theoretical, experimental, and simulation studies were performed, in air and in a product, and good agreement was reached. Conventional dose distribution studies for gamma irradiation facilities involve the use of isodose maps. The establishment of these maps requires the measurement of the absorbed dose at many points, which makes the task expensive experimentally and very long by simulation. However, a lack of measurement points can distort the dose distribution cartography. To overcome these problems, we present in this paper a mathematical method to describe the dose distribution in air. This method is based on the multipole expansion in spherical harmonics of the photon flux emitted by the gamma source. The determination of the multipole coefficients of this development allows the modeling of the radiation field around the gamma source. (Author)
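The multipole coefficients of such an expansion can be estimated numerically by projecting a sampled angular flux onto spherical harmonics. A minimal sketch, assuming an invented anisotropic flux and simple midpoint quadrature over the sphere (SciPy's sph_harm convention: theta is the azimuthal angle, phi the polar angle):

```python
import numpy as np
from scipy.special import sph_harm

# Quadrature grid over the sphere
n_th, n_ph = 72, 36
theta = np.linspace(0.0, 2.0 * np.pi, n_th, endpoint=False)   # azimuth
phi = (np.arange(n_ph) + 0.5) * np.pi / n_ph                  # polar (midpoints)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = (2.0 * np.pi / n_th) * (np.pi / n_ph) * np.sin(PH)   # solid-angle weights

def flux(th, ph):
    # Hypothetical anisotropic photon flux around the source
    return 1.0 + 0.4 * np.cos(ph) + 0.2 * np.sin(ph) * np.cos(th)

def multipole_coeff(l, m):
    # a_lm = integral of flux * conj(Y_lm) over the sphere
    Ylm = sph_harm(m, l, TH, PH)
    return np.sum(flux(TH, PH) * np.conj(Ylm) * dOmega)

for l in range(3):
    for m in range(-l, l + 1):
        print(l, m, np.round(multipole_coeff(l, m), 4))
```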

  9. SOURCE 2.0 model development: UO2 thermal properties

    International Nuclear Information System (INIS)

    Reid, P.J.; Richards, M.J.; Iglesias, F.C.; Brito, A.C.

    1997-01-01

    During analysis of CANDU postulated accidents, the reactor fuel is estimated to experience large temperature variations and to be exposed to a variety of environments, from highly oxidizing to mildly reducing. The exposure of CANDU fuel to these environments and temperatures may affect fission product releases from the fuel and cause degradation of the fuel thermal properties. SOURCE 2.0 is a safety analysis code which will model the mechanisms required to calculate fission product release for a variety of accident scenarios, including large-break loss-of-coolant accidents (LOCAs) with or without emergency core cooling. The goal of the model development is to generate models which are consistent with each other and phenomenologically based, insofar as that is possible given the state of theoretical understanding

  10. RF Plasma modeling of the Linac4 H− ion source

    CERN Document Server

    Mattei, S; Hatayama, A; Lettry, J; Kawamura, Y; Yasumoto, M; Schmitzer, C

    2013-01-01

    This study focuses on the modelling of the ICP RF plasma in the Linac4 H− ion source currently being constructed at CERN. A self-consistent model of the plasma dynamics with the RF electromagnetic field has been developed using a PIC-MCC method. In this paper, the model is applied to the analysis of a low-density plasma discharge initiation, with particular interest in the effect of the external magnetic field on plasma properties such as wall loss, electron density, and electron energy. The use of a multi-cusp magnetic field effectively limits the wall losses, particularly in the radial direction. Preliminary results, however, indicate that a reduced heating efficiency results in such a configuration, possibly due to trapping of electrons in the multi-cusp magnetic field, which prevents their continuous acceleration in the azimuthal direction.

  11. How to Model Super-Soft X-ray Sources?

    Science.gov (United States)

    Rauch, Thomas

    2012-07-01

    During outbursts, the surface temperatures of white dwarfs in cataclysmic variables far exceed half a million Kelvin. In this phase, they may become the brightest super-soft sources (SSS) in the sky. Time series of high-resolution, high-S/N X-ray spectra taken during the rise, maximum, and decline of their X-ray luminosity provide insights into the processes following such outbursts as well as into the surface composition of the white dwarf. Their analysis requires adequate NLTE model atmospheres. The Tuebingen Non-LTE Model-Atmosphere Package (TMAP) is a powerful tool for their calculation. We present the application of TMAP models to SSS spectra and discuss their validity.

  12. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
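The OFAT pattern itself is simple: hold a baseline configuration fixed and perturb one assumption at a time, rerunning the model for each variant. The sketch below only illustrates that loop; `run_mixing_model` is a deliberately crude placeholder for a full Bayesian un-mixing run, and all parameter names and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_mixing_model(prior_alpha=1.0, error_scale=1.0):
    """Placeholder for a Bayesian un-mixing run; returns median contributions
    of three sources. A real implementation would wrap the MCMC sampler."""
    draws = rng.dirichlet([prior_alpha] * 3, size=2000)
    noise = rng.normal(0.0, 0.01 * error_scale, draws.shape)  # crude error stand-in
    return np.median(np.clip(draws + noise, 0.0, 1.0), axis=0)

baseline = dict(prior_alpha=1.0, error_scale=1.0)
variants = {"prior_alpha": [0.5, 2.0], "error_scale": [0.5, 2.0]}

base = run_mixing_model(**baseline)
for name, values in variants.items():          # vary one factor at a time
    for v in values:
        result = run_mixing_model(**dict(baseline, **{name: v}))
        print(f"{name}={v}: shift in median contributions = {np.round(result - base, 3)}")
```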

  13. Life span extension and neuronal cell protection by Drosophila nicotinamidase.

    Science.gov (United States)

    Balan, Vitaly; Miller, Gregory S; Kaplun, Ludmila; Balan, Karina; Chong, Zhao-Zhong; Li, Faqi; Kaplun, Alexander; VanBerkum, Mark F A; Arking, Robert; Freeman, D Carl; Maiese, Kenneth; Tzivion, Guri

    2008-10-10

    The life span of model organisms can be modulated by environmental conditions that influence cellular metabolism, oxidation, or DNA integrity. The yeast nicotinamidase gene pnc1 was identified as a key transcriptional target and mediator of calorie restriction and stress-induced life span extension. PNC1 is thought to exert its effect on yeast life span by modulating cellular nicotinamide and NAD levels, resulting in increased activity of Sir2 family class III histone deacetylases. In Caenorhabditis elegans, knockdown of a pnc1 homolog was shown recently to shorten the worm life span, whereas its overexpression increased survival under conditions of oxidative stress. The function and regulation of nicotinamidases in higher organisms has not been determined. Here, we report the identification and biochemical characterization of the Drosophila nicotinamidase, D-NAAM, and demonstrate that its overexpression significantly increases median and maximal fly life span. The life span extension was reversed in Sir2 mutant flies, suggesting Sir2 dependence. Testing for physiological effectors of D-NAAM in Drosophila S2 cells, we identified oxidative stress as a primary regulator, both at the transcription level and protein activity. In contrast to the yeast model, stress factors such as high osmolarity and heat shock, calorie restriction, or inhibitors of TOR and phosphatidylinositol 3-kinase pathways do not appear to regulate D-NAAM in S2 cells. Interestingly, the expression of D-NAAM in human neuronal cells conferred protection from oxidative stress-induced cell death in a sirtuin-dependent manner. Together, our findings establish the life span-extending ability of nicotinamidase in flies and offer a role for nicotinamide-modulating genes in oxidative stress regulated pathways influencing longevity and neuronal cell survival.

  14. Modeling Degradation Product Partitioning in Chlorinated-DNAPL Source Zones

    Science.gov (United States)

    Boroumand, A.; Ramsburg, A.; Christ, J.; Abriola, L.

    2009-12-01

    Metabolic reductive dechlorination reduces aqueous-phase contaminant concentrations, increasing the driving force for DNAPL dissolution. Results from laboratory and field investigations suggest that accumulation of cis-dichloroethene (cis-DCE) and vinyl chloride (VC) may occur within DNAPL source zones. The lack of (or slow) degradation of cis-DCE and VC within bioactive DNAPL source zones may result in these dechlorination products becoming distributed among the solid, aqueous, and organic phases. Partitioning of cis-DCE and VC into the organic phase may reduce aqueous phase concentrations of these contaminants and result in the enrichment of these dechlorination products within the non-aqueous phase. Enrichment of degradation products within DNAPL may reduce some of the advantages associated with the application of bioremediation in DNAPL source zones. Thus, it is important to quantify how partitioning (between the aqueous and organic phases) influences the transport of cis-DCE and VC within bioactive DNAPL source zones. In this work, abiotic two-phase (PCE-water) one-dimensional column experiments are modeled using analytical and numerical methods to examine the rate of partitioning and the capacity of PCE-DNAPL to reversibly sequester cis-DCE. These models consider aqueous-phase, nonaqueous-phase, and combined aqueous plus nonaqueous phase mass transfer resistance using linear driving force and spherical diffusion expressions. Model parameters are examined and compared for different experimental conditions to evaluate the mechanisms controlling partitioning. The Biot number, a dimensionless index of the ratio of the aqueous-phase mass transfer rate in the boundary layer to the mass transfer rate within the NAPL, is used to characterize conditions in which either or both processes are controlling. Results show that application of a single aqueous resistance is able to capture breakthrough curves when DNAPL is distributed in porous media as low
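The linear driving force expression mentioned here treats NAPL-water exchange as first-order in the departure from partitioning equilibrium. A minimal sketch with invented rate and partition coefficients (not the study's fitted values):

```python
import numpy as np

def simulate_ldf(C_aq, k_la=0.05, K=30.0, dt=0.1, t_end=3600.0):
    """Linear-driving-force uptake of a solute (e.g., cis-DCE) into DNAPL:
        dC_n/dt = k_la * (C_aq - C_n / K)
    C_aq: aqueous concentration (held constant here); K: NAPL-water partition
    coefficient; k_la: lumped mass transfer rate. All values hypothetical."""
    n = int(t_end / dt)
    C_n = np.zeros(n)
    for i in range(1, n):                       # explicit Euler integration
        C_n[i] = C_n[i - 1] + dt * k_la * (C_aq - C_n[i - 1] / K)
    return C_n

profile = simulate_ldf(C_aq=1.0)
# At equilibrium the NAPL-phase concentration approaches K * C_aq = 30
print("final NAPL-phase concentration:", round(profile[-1], 2))
```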

  15. Cardiac magnetic source imaging based on current multipole model

    International Nuclear Information System (INIS)

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the heart current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to first-order terms. This magnetic imaging is realized in a reconstruction plane in the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole, as the testing source, generates magnetic fields in the measuring plane, serving as inputs of the cardiac magnetic inverse problem. In the heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. Then the minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are selected as the algorithms for inverse computation based on the current multipole model, and the imaging effects of these three inverse methods are compared. Besides, two reconstruction parameters, residual and mean residual, are also discussed, and their trends under MNLS, OWPIM, and OCLIM, each as a function of SNR, are obtained and compared. (general)
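The minimum-norm least-squares step named in this record has a compact closed form: the regularized minimum-norm solution q = Lᵀ(LLᵀ + λI)⁻¹b for a lead-field matrix L and measured field map b. A minimal sketch on a random toy problem (the matrix sizes and regularization value are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lead-field matrix: m sensors x n candidate sources in the plane
m, n = 64, 200
L = rng.normal(size=(m, n))
q_true = np.zeros(n)
q_true[[40, 41, 120]] = [1.0, 0.8, -0.6]        # a sparse "true" source pattern
b = L @ q_true + rng.normal(0, 0.05, m)         # measured magnetic field map

# Minimum-norm least-squares (Tikhonov-regularized) inverse solution
lam = 1e-2
q_mnls = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), b)

resid = np.linalg.norm(b - L @ q_mnls) / np.linalg.norm(b)
print("relative residual:", round(resid, 4))
```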

  16. Understanding retirement: the promise of life-span developmental frameworks.

    Science.gov (United States)

    Löckenhoff, Corinna E

    2012-09-01

    The impending retirement of large population cohorts creates a pressing need for practical interventions to optimize outcomes at the individual and societal level. This necessitates comprehensive theoretical models that acknowledge the multi-layered nature of the retirement process and shed light on the dynamic mechanisms that drive longitudinal patterns of adjustment. The present commentary highlights ways in which contemporary life-span developmental frameworks can inform retirement research, drawing on the specific examples of Bronfenbrenner's Ecological Model, Baltes and Baltes' Selective Optimization with Compensation framework, Schulz and Heckhausen's Motivational Theory of Life-Span Development, and Carstensen's Socioemotional Selectivity Theory. Ultimately, a life-span developmental perspective on retirement offers not only new interpretations of known phenomena but may also help to identify novel directions for future research as well as promising pathways for interventions.

  17. A model for managing sources of groundwater pollution

    Science.gov (United States)

    Gorelick, Steven M.

    1982-01-01

    The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. Linear programming problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
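The core optimization described here maximizes total disposal subject to water-quality constraints expressed through a concentration response matrix. A minimal sketch with an invented response matrix and limits (the real matrix would come from a transport simulation, as the abstract describes):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical response matrix R[i, j]: concentration increase at observation
# point i per unit disposal rate at facility j (from a transport model)
R = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.7, 0.4],
              [0.1, 0.2, 0.9]])
c_max = np.array([10.0, 8.0, 12.0])   # water-quality limits at each point
q_bounds = [(0.0, 15.0)] * 3          # per-facility disposal-rate bounds

# Maximize total disposal sum(q) subject to R q <= c_max
# (linprog minimizes, so negate the objective)
res = linprog(c=-np.ones(3), A_ub=R, b_ub=c_max, bounds=q_bounds, method="highs")
print("optimal disposal rates:", np.round(res.x, 3))
print("total disposal:", round(-res.fun, 3))
```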

  18. Plant model of KIPT neutron source facility simulator

    International Nuclear Information System (INIS)

    Cao, Yan; Wei, Thomas Y.; Grelle, Austin L.; Gohar, Yousry

    2016-01-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, and it uses another secondary cooling loop to remove the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model uses the PYTHON script language. It is consistent with the computer language of the plant control system, is easy to integrate with the simulator without an additional interface, and is able to simulate transients of the cooling systems with system control variables changing in real time.

  19. Plant model of KIPT neutron source facility simulator

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Yan [Argonne National Lab. (ANL), Argonne, IL (United States); Wei, Thomas Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Grelle, Austin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-01

    Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, and it uses another secondary cooling loop to remove the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during steady-state and transient conditions using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model uses the PYTHON script language. It is consistent with the computer language of the plant control system, is easy to integrate with the simulator without an additional interface, and is able to simulate transients of the cooling systems with system control variables changing in real time.
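Since the abstract notes that the plant model is written in Python, a cooling-loop transient of the kind it simulates can be sketched as a lumped-parameter energy balance. All coefficients below are invented for illustration; only the 300-kW SCA power level comes from the abstract.

```python
# Lumped-parameter energy balance for one cooling loop:
#   M*c * dT/dt = P_in - U*A * (T - T_secondary)
def step_loop(T, P_in, dt, M_c=5.0e6, UA=1.5e4, T_sec=30.0):
    """One explicit Euler step. M_c: thermal capacity (J/K), UA: heat transfer
    coefficient to the secondary loop (W/K), T_sec: secondary temperature (C).
    All parameter values are hypothetical."""
    return T + dt * (P_in - UA * (T - T_sec)) / M_c

T, dt, history = 30.0, 1.0, []
for t in range(7200):                  # two hours of operation, 1-second steps
    P = 300e3 if t < 3600 else 0.0     # SCA at 300 kW, then a beam trip at t = 1 h
    T = step_loop(T, P, dt)
    history.append(T)
print("peak coolant temperature (C):", round(max(history), 2))
```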

  20. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and in the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman exists to speak without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Finally, there are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the limited resources of the Federal and State Health Agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community

  1. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.
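
    The study used Bayesian model selection for group studies; as a simpler illustration of the underlying idea, the following sketch pools approximate log model evidences (such as the free energies reported by an SPM-style PEB inversion) across subjects under a fixed-effects assumption. The numbers are invented, and the actual paper used a random-effects group analysis.

        # Hedged sketch: fixed-effects Bayesian model selection over forward
        # models, assuming each subject/model inversion returns an approximate
        # log model evidence. Values below are illustrative only.
        import numpy as np

        # log_evidence[s, m]: approximate log evidence for subject s, model m
        log_evidence = np.array([[-3105.2, -3098.7, -3082.4],
                                 [-2987.9, -2981.3, -2960.1]])

        group_log_ev = log_evidence.sum(axis=0)        # fixed-effects pooling
        log_bayes_factor = group_log_ev - group_log_ev.max()
        posterior = np.exp(log_bayes_factor)
        posterior /= posterior.sum()                   # posterior model prob.
        print("Group posterior model probabilities:", posterior)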

  2. Open-source Software for Exoplanet Atmospheric Modeling

    Science.gov (United States)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.
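
    As an illustration of the kind of posterior sampling such a Bayesian-statistical package provides, here is a generic Metropolis-Hastings loop; this is a sketch of the technique, not the package's actual API, and the toy posterior stands in for a spectral-retrieval likelihood.

        # Hedged sketch of a Markov-chain Monte Carlo sampler; generic
        # Metropolis-Hastings with Gaussian proposals, illustrative only.
        import numpy as np

        def mcmc(log_post, x0, step, n_steps, rng=np.random.default_rng(0)):
            """Sample from log_post starting at x0 with Gaussian proposals."""
            chain, x, lp = [np.asarray(x0, float)], np.asarray(x0, float), log_post(x0)
            for _ in range(n_steps):
                prop = x + step * rng.standard_normal(x.size)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                    x, lp = prop, lp_prop
                chain.append(x.copy())
            return np.array(chain)

        # Toy 2-D Gaussian posterior standing in for a retrieval fit.
        samples = mcmc(lambda p: -0.5 * np.sum(p**2), np.zeros(2), 0.5, 5000)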

  3. Canada in 3D - Toward a Sustainable 3D Model for Canadian Geology from Diverse Data Sources

    Science.gov (United States)

    Brodaric, B.; Pilkington, M.; Snyder, D. B.; St-Onge, M. R.; Russell, H.

    2015-12-01

    Many big science issues span large areas and require data from multiple heterogeneous sources, for example climate change, resource management, and hazard mitigation. Solutions to these issues can significantly benefit from access to a consistent and integrated geological model that would serve as a framework. However, such a model is absent for most large countries including Canada, due to the size of the landmass and the fragmentation of the source data into institutional and disciplinary silos. To overcome these barriers, the "Canada in 3D" (C3D) pilot project was recently launched by the Geological Survey of Canada. C3D is designed to be evergreen, multi-resolution, and inter-disciplinary: (a) it is to be updated regularly upon acquisition of new data; (b) portions vary in resolution and will initially consist of four layers (surficial, sedimentary, crystalline, and mantle) with intermediary patches of higher-resolution fill; and (c) a variety of independently managed data sources are providing inputs, such as geophysical, 3D and 2D geological models, drill logs, and others. Notably, scalability concerns dictate a decentralized and interoperable approach, such that only key control objects, denoting anchors for the modeling process, are imported into the C3D database while retaining provenance links to original sources. The resultant model is managed in the database, contains full modeling provenance as well as links to detailed information on rock units, and is to be visualized in desktop and online environments. It is anticipated that C3D will become the authoritative state of knowledge for the geology of Canada at a national scale.

  4. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source, with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leak occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
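
    The following sketch illustrates the Gaussian-MLA idea under stated assumptions: the classic Gaussian plume prediction is used as the input feature of a support vector regressor, rather than feeding raw monitoring parameters. The dispersion coefficients and data are invented placeholders, not the paper's values.

        # Hedged sketch of a Gaussian plume + SVR hybrid; all numbers invented.
        import numpy as np
        from sklearn.svm import SVR

        def gaussian_plume(q, u, y, z, h, sy, sz):
            """Concentration from a point source of strength q (g/s)."""
            return (q / (2 * np.pi * u * sy * sz)
                    * np.exp(-y**2 / (2 * sy**2))
                    * (np.exp(-(z - h)**2 / (2 * sz**2))
                       + np.exp(-(z + h)**2 / (2 * sz**2))))

        rng = np.random.default_rng(1)
        x = rng.uniform(50, 500, 200)            # downwind distances (m)
        # Dispersion widths grow with downwind distance (illustrative rates).
        c_plume = gaussian_plume(1.0, 3.0, 0.0, 1.5, 10.0,
                                 sy=0.10 * x, sz=0.06 * x)
        c_meas = c_plume * rng.normal(1.0, 0.1, x.size)  # synthetic "data"

        # Train the SVR to map plume predictions to measured concentrations.
        model = SVR(kernel="rbf").fit(c_plume.reshape(-1, 1), c_meas)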

  5. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with a Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source, with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leak occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, with too many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.

  6. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches; meanwhile, the scalability problem can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We implemented the VS on the NetFPGA platform; the statistical results show that the hardware resource consumption in a VS is about 27% of that in an OpenFlow switch (OFS).
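
    The following sketch illustrates the source-routing principle behind a path-defining label such as the vector address: the sender encodes the whole path as a sequence of output ports, and each switch forwards on the head element with no table lookup. The encoding is invented for illustration and is not the paper's actual VA format.

        # Hedged sketch of label-driven source routing; illustrative only.
        from collections import deque

        def forward(packet_label, payload, switches):
            """Walk the packet through switches using only its label."""
            label = deque(packet_label)
            node = 0                            # ingress switch
            while label:
                out_port = label.popleft()      # no TCAM lookup needed
                node = switches[node][out_port] # next node on that port
            return node, payload                # final host reached

        # Tiny topology: switches[s][port] -> next node id.
        switches = {0: {1: 1, 2: 2}, 1: {1: 2}, 2: {1: 99}}
        dest, data = forward([2, 1], b"hello", switches)  # 0 -> 2 -> host 99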

  7. Source characterization and dynamic fault modeling of induced seismicity

    Science.gov (United States)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years there have been increasing concerns worldwide that industrial activities in the subsurface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which remains elusive at present. Furthermore, an improved understanding of induced earthquake physics is pivotal for assessing large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded in a large velocity-strengthening (VS) region. Outside of that, the fault slips at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve spontaneous long-term slip history on a fault segment at all stages of the seismic cycle. An earlier study using this model established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different frictional properties and fluid-induced stress perturbations. The effects on both the overall seismicity rate and the fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to an event and the induced source characteristics is discussed. Based on the simulation results, the subsequent step is to select specific cases for laboratory experiments, which allow well-controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
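
    For reference, the standard rate-and-state formulation such models implement, shown here with the Dieterich aging law; this is the textbook form, not necessarily the authors' exact notation:

        \mu(V,\theta) = \mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{D_c},
        \qquad
        \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}

    At steady state, \mu_{ss}(V) = \mu_0 + (a-b)\ln(V/V_0), so patches with a - b < 0 are velocity weakening (VW) and can nucleate events, while a - b > 0 gives the velocity-strengthening (VS) response of the surrounding region.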

  8. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller, and flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. The SCDCN architecture has four advantages. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the restriction imposed by the processing ability of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches; meanwhile, the scalability problem can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We implemented the VS on the NetFPGA platform; the statistical results show that the hardware resource consumption in a VS is about 27% of that in an OpenFlow switch (OFS). PMID:28328925

  9. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model. Key words: solar wind - Sun: corona - Sun: magnetic topology

  10. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikić, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-04-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model.

  11. Source modelling at the dawn of gravitational-wave astronomy

    Science.gov (United States)

    Gerosa, Davide

    2016-09-01

    The age of gravitational-wave astronomy has begun. Gravitational waves are propagating spacetime perturbations ("ripples in the fabric of space-time") predicted by Einstein's theory of General Relativity. These signals propagate at the speed of light and are generated by powerful astrophysical events, such as the merger of two black holes and supernova explosions. The first detection of gravitational waves was performed in 2015 with the LIGO interferometers. This constitutes a tremendous breakthrough in fundamental physics and astronomy: it is not only the first direct detection of such elusive signals, but also the first irrefutable observation of a black-hole binary system. The future of gravitational-wave astronomy is bright and loud: the LIGO experiments will soon be joined by a network of ground-based interferometers; the space mission eLISA has now been fully approved by the European Space Agency with a proof-of-concept mission called LISA Pathfinder launched in 2015. Gravitational-wave observations will provide unprecedented tests of gravity as well as a qualitatively new window on the Universe. Careful theoretical modelling of the astrophysical sources of gravitational-waves is crucial to maximize the scientific outcome of the detectors. In this Thesis, we present several advances on gravitational-wave source modelling, studying in particular: (i) the precessional dynamics of spinning black-hole binaries; (ii) the astrophysical consequences of black-hole recoils; and (iii) the formation of compact objects in the framework of scalar-tensor theories of gravity. All these phenomena are deeply characterized by a continuous interplay between General Relativity and astrophysics: despite being a truly relativistic messenger, gravitational waves encode details of the astrophysical formation and evolution processes of their sources. We work out signatures and predictions to extract such information from current and future observations. At the dawn of a revolutionary

  12. Self-consistent modeling of electron cyclotron resonance ion sources

    International Nuclear Information System (INIS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lecot, C.

    2004-01-01

    In order to predict the performance of an electron cyclotron resonance ion source (ECRIS), it is necessary to accurately model the different parts of these sources: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether or not a biased probe is installed. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  13. Self-consistent modeling of electron cyclotron resonance ion sources

    Science.gov (United States)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performance of an electron cyclotron resonance ion source (ECRIS), it is necessary to accurately model the different parts of these sources: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether or not a biased probe is installed. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  14. Modeling and simulation of RF photoinjectors for coherent light sources

    Science.gov (United States)

    Chen, Y.; Krasilnikov, M.; Stephan, F.; Gjonaj, E.; Weiland, T.; Dohlus, M.

    2018-05-01

    We propose a three-dimensional fully electromagnetic numerical approach for the simulation of RF photoinjectors for coherent light sources. The basic idea consists in incorporating a self-consistent photoemission model within a particle tracking code. The generation of electron beams in the injector is determined by the quantum efficiency (QE) of the cathode, the intensity profile of the driving laser as well as by the accelerating field and magnetic focusing conditions in the gun. The total charge emitted during an emission cycle can be limited by the space charge field at the cathode. Furthermore, the time and space dependent electromagnetic field at the cathode may induce a transient modulation of the QE due to surface barrier reduction of the emitting layer. In our modeling approach, all these effects are taken into account. The beam particles are generated dynamically according to the local QE of the cathode and the time dependent laser intensity profile. For the beam dynamics, a tracking code based on the Lienard-Wiechert retarded field formalism is employed. This code provides the single particle trajectories as well as the transient space charge field distribution at the cathode. As an application, the PITZ injector is considered. Extensive electron bunch emission simulations are carried out for different operation conditions of the injector, in the source limited as well as in the space charge limited emission regime. In both cases, fairly good agreement between measurements and simulations is obtained.
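
    One standard ingredient of such a field-dependent emission model is Schottky lowering of the cathode work function by the instantaneous surface field, which transiently modulates the quantum efficiency as the abstract describes. A common textbook form of this effect (an illustration, not necessarily the authors' exact model) is:

        \Delta\phi(t) = \sqrt{\frac{e^{3}\,E_c(t)}{4\pi\varepsilon_0}},
        \qquad
        \mathrm{QE}(t) \;\propto\; \bigl(\hbar\omega - \phi + \Delta\phi(t)\bigr)^{2}

    where E_c(t) is the accelerating field at the cathode surface, \hbar\omega the laser photon energy, and \phi the material work function.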

  15. Fine particulates over South Asia: Review and meta-analysis of PM2.5 source apportionment through receptor model.

    Science.gov (United States)

    Singh, Nandita; Murari, Vishnu; Kumar, Manish; Barman, S C; Banerjee, Tirthankar

    2017-04-01

    Fine particulates (PM2.5) constitute the dominant proportion of airborne particulates and have often been associated with human health disorders, changes in regional climate, the hydrological cycle and, more recently, food security. The intrinsic properties of particulates are a direct function of their sources. This motivates a comprehensive review of PM2.5 sources over South Asia, which in turn may be valuable for developing emission control strategies. Particulate source apportionment (SA) through receptor models is an existing tool to quantify the contribution of particulate sources. A review of 51 SA studies was performed, of which 48 (94%) appeared within the span 2007-2016. Over half of the SA studies (55%) were concentrated on a few typical urban stations (Delhi, Dhaka, Mumbai, Agra and Lahore). Due to the lack of local particulate source profiles and emission inventories, positive matrix factorization and principal component analysis (62% of studies) were the primary choices, followed by chemical mass balance (CMB, 18%). Metallic species were most regularly used as source tracers, while the use of organic molecular markers and gas-to-particle conversion was minimal. Among all the SA sites, vehicular emissions (mean ± sd: 37 ± 20%) emerged as the most dominant PM2.5 source, followed by industrial emissions (23 ± 16%), secondary aerosols (22 ± 12%) and natural sources (20 ± 15%). Vehicular emissions (39 ± 24%) were also identified as the dominant source for highly polluted sites (PM2.5 > 100 μg m-3, n = 15), while site-specific influences of industrial, secondary aerosol and natural sources, either individually or in combination, were recognized. Source-specific trends varied considerably by region and season. Both natural and industrial sources were most influential over Pakistan and Afghanistan, while over the Indo-Gangetic plain, vehicular, natural and industrial emissions appeared dominant. Influence of vehicular emission was
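
    To illustrate the receptor-model idea behind chemical mass balance (CMB), the following sketch fits measured species concentrations as a nonnegative mixture of known source profiles. The profiles and data are invented placeholders, not values from any reviewed study.

        # Hedged sketch of a CMB-style receptor fit; all numbers invented.
        import numpy as np
        from scipy.optimize import nnls

        # Rows: chemical species; columns: candidate sources
        # (vehicular, industrial, crustal/dust).
        profiles = np.array([[0.30, 0.05, 0.02],   # e.g. elemental carbon
                             [0.02, 0.25, 0.01],   # e.g. a metal tracer
                             [0.01, 0.03, 0.40],   # e.g. Si/Al (dust)
                             [0.10, 0.08, 0.05]])  # e.g. organic carbon

        measured = np.array([6.2, 4.1, 8.3, 3.0])  # ambient species, ug/m3

        contrib, residual = nnls(profiles, measured)  # source contributions
        print("Estimated source contributions (ug/m3):", contrib)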

  16. Remote quantification of phycocyanin in potable water sources through an adaptive model

    Science.gov (United States)

    Song, Kaishan; Li, Lin; Tedesco, Lenore P.; Li, Shuai; Hall, Bob E.; Du, Jia

    2014-09-01

    Cyanobacterial blooms in water supply sources in both central Indiana USA (CIN) and South Australia (SA) are a cause of great concern for toxin production and water quality deterioration. Remote sensing provides an effective approach for quick assessment of cyanobacteria through quantification of phycocyanin (PC) concentration. In total, 363 samples spanning a large variation of optically active constituents (OACs) in CIN and SA waters were collected during 24 field surveys. Concurrently, remote sensing reflectance spectra (Rrs) were measured. A partial least squares-artificial neural network (PLS-ANN) model, an artificial neural network (ANN) and a three-band model (TBM) were developed or tuned by relating the Rrs with PC concentration. Our results indicate that the PLS-ANN model outperformed the ANN and TBM with both the original spectra and simulated ESA/Sentinel-3/Ocean and Land Color Instrument (OLCI) and EO-1/Hyperion spectra. The PLS-ANN model resulted in a high coefficient of determination (R²) for the CIN dataset (R² = 0.92, range: 0.3-220.7 μg/L) and for SA (R² = 0.98, range: 0.2-13.2 μg/L). In comparison, the TBM model yielded R² = 0.77 and 0.94 for the CIN and SA datasets, respectively, while the ANN obtained an intermediate modeling accuracy (CIN: R² = 0.86; SA: R² = 0.95). Applying the simulated OLCI and Hyperion aggregated datasets, the PLS-ANN model still achieved good performance (OLCI: R² = 0.84; Hyperion: R² = 0.90); the TBM also presented acceptable performance for PC estimation (OLCI: R² = 0.65; Hyperion: R² = 0.70). Based on these results, the PLS-ANN is an effective modeling approach for the quantification of PC in productive water supplies, owing to its effectiveness in resolving the non-linearity of PC with other OACs. Furthermore, our investigation indicates that the ratio of inorganic suspended matter (ISM) to PC concentration is closely related to the modeling relative errors (CIN: R² = 0.81; SA: R² = 0.92), indicating that ISM concentration exert
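
    The three-band model (TBM) referred to above follows the generic semi-analytical form used for pigment retrieval, with the band positions tuned so that the first band sits near the phycocyanin absorption feature (around 620 nm). A sketch of that generic form, under those standard assumptions rather than the paper's exact calibration:

        \mathrm{PC} \;\propto\; \Bigl[ R_{rs}^{-1}(\lambda_1) - R_{rs}^{-1}(\lambda_2) \Bigr]\; R_{rs}(\lambda_3)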

  17. The use of passwords to introduce the concepts of spanning set and span

    Directory of Open Access Journals (Sweden)

    Andrea Cárcamo

    2017-01-01

    The aim of this paper is to present a proposal for teaching linear algebra based on the heuristic of emergent models and mathematical modelling. This proposal begins with a problematic situation related to the creation and use of secure passwords, which leads first-year engineering students toward the construction of the concepts of spanning set and span. The proposal is designed from the results of two cycles of teaching experimentation in design-based research, which give evidence that it allows students to progress from a situation in a real context to the concepts of linear algebra. This proposal, suitably adapted, could have similar results when applied to another group of students.
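
    For reference, the two target concepts in standard notation (the textbook definitions, not the authors' wording):

        \operatorname{span}\{v_1,\dots,v_k\} = \Bigl\{\, c_1 v_1 + \cdots + c_k v_k \;:\; c_1,\dots,c_k \in \mathbb{R} \,\Bigr\}

    and a set S is a spanning set of a vector space V exactly when \operatorname{span}(S) = V.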

  18. Life span in online communities

    Science.gov (United States)

    Grabowski, A.; Kosiński, R. A.

    2010-12-01

    Recently online communities have attracted great interest and have become an important medium of information exchange between users. The aim of this work is to introduce a simple model of the evolution of online communities. This model describes (a) the time evolution of users’ activity in a web service, e.g., the time evolution of the number of online friends or written posts, (b) the time evolution of the degree distribution of a social network, and (c) the time evolution of the number of active users of a web service. In the second part of the paper we investigate the influence of the users’ lifespan (i.e., the total time in which they are active in an online community) on the process of rumor propagation in evolving social networks. Viral marketing is an important application of such method of information propagation.

  19. Individual differences in personality change across the adult life span.

    Science.gov (United States)

    Schwaba, Ted; Bleidorn, Wiebke

    2018-06-01

    A precise and comprehensive description of personality continuity and change across the life span is the bedrock upon which theories of personality development are built. Little research has quantified the degree to which individuals deviate from mean-level developmental trends. In this study, we addressed this gap by examining individual differences in personality trait change across the life span. Data came from a nationally representative sample of 9,636 Dutch participants who provided Big Five self-reports at five assessment waves across 7 years. We divided our sample into 14 age groups (ages 16-84 at initial measurement) and estimated latent growth curve models to describe individual differences in personality change across the study period for each trait and age group. Across the adult life span, individual differences in personality change were small but significant until old age. For Openness, Conscientiousness, Extraversion, and Agreeableness, individual differences in change were most pronounced in emerging adulthood and decreased throughout midlife and old age. For Emotional Stability, individual differences in change were relatively consistent across the life span. These results inform theories of life span development and provide future directions for research on the causes and conditions of personality change. © 2017 Wiley Periodicals, Inc.

  20. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.
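
    As background on what a PIC code iterates, the following is a minimal sketch of the standard (nonrelativistic) Boris particle push, the usual velocity update at the heart of such codes; the field values and particle parameters shown are illustrative.

        # Minimal sketch of the standard nonrelativistic Boris push (SI units).
        import numpy as np

        def boris_push(v, E, B, q, m, dt):
            """Advance particle velocity by one step with the Boris rotation."""
            qmdt2 = q * dt / (2.0 * m)
            v_minus = v + qmdt2 * E                   # first half electric kick
            t = qmdt2 * B                             # rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)   # magnetic rotation
            return v_plus + qmdt2 * E                 # second half electric kick

        v = boris_push(np.array([1e5, 0.0, 0.0]),
                       E=np.array([0.0, 0.0, 0.0]),
                       B=np.array([0.0, 0.0, 0.01]),
                       q=-1.602e-19, m=9.109e-31, dt=1e-12)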

  1. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  2. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  3. Fast temperature optimization of multi-source hyperthermia applicators with reduced-order modeling of 'virtual sources'

    International Nuclear Information System (INIS)

    Cheng, K-S; Stakhursky, Vadim; Craciunescu, Oana I; Stauffer, Paul; Dewhirst, Mark; Das, Shiva K

    2008-01-01

    The goal of this work is to build the foundation for facilitating real-time magnetic resonance image guided patient treatment for heating systems with a large number of physical sources (e.g. antennas). Achieving this goal requires knowledge of how the temperature distribution will be affected by changing each source individually, which requires time expenditure on the order of the square of the number of sources. To reduce computation time, we propose a model reduction approach that combines a smaller number of predefined source configurations (fewer than the number of actual sources) that are most likely to heat tumor. The source configurations consist of magnitude and phase source excitation values for each actual source and may be computed from a CT scan based plan or a simplified generic model of the corresponding patient anatomy. Each pre-calculated source configuration is considered a 'virtual source'. We assume that the actual best source settings can be represented effectively as weighted combinations of the virtual sources. In the context of optimization, each source configuration is treated equivalently to one physical source. This model reduction approach is tested on a patient upper-leg tumor model (with and without temperature-dependent perfusion), heated using a 140 MHz ten-antenna cylindrical mini-annular phased array. Numerical simulations demonstrate that using only a few pre-defined source configurations can achieve temperature distributions that are comparable to those from full optimizations using all physical sources. The method yields close to optimal temperature distributions when using source configurations determined from a simplified model of the tumor, even when tumor position is erroneously assumed to be ∼2.0 cm away from the actual position as often happens in practical clinical application of pre-treatment planning. The method also appears to be robust under conditions of changing, nonlinear, temperature-dependent perfusion. The
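
    The following sketch illustrates the 'virtual source' reduction under stated assumptions: precompute the E-fields of a few promising source configurations, then optimize the complex weights of those configurations instead of all physical antennas. The field data, focus metric and random search are invented for illustration; a real system would use a proper optimizer coupled to a thermal model.

        # Hedged sketch of optimizing weights over precomputed virtual sources.
        import numpy as np

        rng = np.random.default_rng(0)
        n_vox, n_virtual = 1000, 3
        E_virtual = (rng.standard_normal((n_virtual, n_vox))
                     + 1j * rng.standard_normal((n_virtual, n_vox)))
        tumor = np.zeros(n_vox, bool)
        tumor[:50] = True                       # voxels we want to heat

        def sar_ratio(w):
            """Tumor-to-total SAR for complex weights w over virtual sources."""
            E = w @ E_virtual                   # superpose virtual-source fields
            sar = np.abs(E) ** 2                # SAR ~ sigma |E|^2 / (2 rho)
            return sar[tumor].sum() / sar.sum()

        # Crude random search over complex weight vectors.
        best = max((rng.standard_normal(n_virtual)
                    + 1j * rng.standard_normal(n_virtual)
                    for _ in range(2000)), key=sar_ratio)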

  4. Minimum spanning trees and random resistor networks in d dimensions.

    Science.gov (United States)

    Read, N

    2005-09-01

    We consider minimum-cost spanning trees, both in lattice and Euclidean models, in d dimensions. For the cost of the optimum tree in a box of size L, we show that there is a correction of order L^θ, where θ ≤ 1. The arguments all rely on the close relation of Kruskal's greedy algorithm for the minimum spanning tree, percolation, and (for some arguments) random resistor networks. The scaling of the entropy and free energy at small nonzero T, and hence of the number of near-optimal solutions, is also discussed. We suggest that the Steiner tree problem is in the same universality class as the minimum spanning tree in all dimensions, as is the traveling salesman problem in two dimensions. Hence all will have the same value of θ = -3/4 in two dimensions.
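
    Since the scaling arguments above are built on Kruskal's greedy algorithm, here is a minimal implementation for reference: edges are taken in order of increasing cost, and an edge is kept only if it joins two previously disconnected components.

        # Minimal Kruskal's greedy algorithm for the minimum spanning tree.
        def kruskal(n_nodes, edges):
            """edges: list of (cost, u, v); returns the MST edge list."""
            parent = list(range(n_nodes))

            def find(x):                      # union-find with path compression
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            tree = []
            for cost, u, v in sorted(edges):  # greedily take cheapest edges
                ru, rv = find(u), find(v)
                if ru != rv:                  # skip edges that close a cycle
                    parent[ru] = rv
                    tree.append((cost, u, v))
            return tree

        mst = kruskal(4, [(1.0, 0, 1), (2.5, 1, 2), (0.7, 2, 3), (1.9, 0, 2)])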

  5. The span of correlations in dolphin whistle sequences

    International Nuclear Information System (INIS)

    Ferrer-i-Cancho, Ramon; McCowan, Brenda

    2012-01-01

    Long-range correlations are found in symbolic sequences from human language, music and DNA. Determining the span of correlations in dolphin whistle sequences is crucial for shedding light on their communicative complexity. Dolphin whistles share various statistical properties with human words, i.e. Zipf's law for word frequencies (namely that the probability of the ith most frequent word of a text is about i^-α) and a parallel of the tendency of more frequent words to have more meanings. The finding of Zipf's law for word frequencies in dolphin whistles has been the topic of an intense debate on its implications. One of the major arguments against the relevance of Zipf's law in dolphin whistles is that it is not possible to distinguish the outcome of a die-rolling experiment from that of a linguistic or communicative source producing Zipf's law for word frequencies. Here we show that statistically significant whistle-whistle correlations extend back to the second previous whistle in the sequence, using a global randomization test, and to the fourth previous whistle, using a local randomization test. None of these correlations are expected by a die-rolling experiment and other simple explanations of Zipf's law for word frequencies, such as Simon's model, that produce sequences of unpredictable elements

  6. Sexual conflict, life span, and aging.

    Science.gov (United States)

    Adler, Margo I; Bonduriansky, Russell

    2014-06-17

    The potential for sexual conflict to influence the evolution of life span and aging has been recognized for more than a decade, and recent work also suggests that variation in life span and aging can influence sexually antagonistic coevolution. However, empirical exploration of these ideas is only beginning. Here, we provide an overview of the ideas and evidence linking inter- and intralocus sexual conflicts with life span and aging. We aim to clarify the conceptual basis of this research program, examine the current state of knowledge, and suggest key questions for further investigation. Copyright © 2014 Cold Spring Harbor Laboratory Press; all rights reserved.

  7. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  8. Modelling and optimisation of fs laser-produced Kα sources

    International Nuclear Information System (INIS)

    Gibbon, P.; Masek, M.; Teubner, U.; Lu, W.; Nicoul, M.; Shymanovich, U.; Tarasevitch, A.; Zhou, P.; Sokolowski-Tinten, K.; Linde, D. von der

    2009-01-01

    Recent theoretical and numerical studies of laser-driven femtosecond Kα sources are presented, aimed at understanding a recent experimental campaign to optimize emission from thin coating targets. Particular attention is given to control over the laser-plasma interaction conditions defined by the interplay between a controlled prepulse and the angle of incidence. It is found that the X-ray efficiency for poor-contrast laser systems, in which a large preplasma is suspected, can be enhanced by using a near-normal incidence geometry even at high laser intensities. With high laser contrast, similar efficiencies can be achieved by going to larger incidence angles, but only at the expense of a larger X-ray spot size. New developments in three-dimensional modelling are also reported, with the goal of handling interactions with geometrically complex targets and finite resistivity.

  9. Modeling in control of the Advanced Light Source

    International Nuclear Information System (INIS)

    Bengtsson, J.; Forest, E.; Nishimura, H.; Schachinger, L.

    1991-05-01

    A software system for control of accelerator physics parameters of the Advanced Light Source (ALS) is being designed and implemented at LBL. Some of the parameters we wish to control are tunes, chromaticities, and closed orbit distortions as well as linear lattice distortions and, possibly, amplitude- and momentum-dependent tune shifts. In all our applications, the goal is to allow the user to adjust physics parameters of the machine, instead of turning knobs that control magnets directly. This control will take place via a highly graphical user interface, with both a model appropriate to the application and any correction algorithm running alongside as separate processes. Many of these applications will run on a Unix workstation, separate from the controls system, but communicating with the hardware database via Remote Procedure Calls (RPCs)

  10. Signal Enhancement with Variable Span Linear Filters

    DEFF Research Database (Denmark)

    Benesty, Jacob; Christensen, Mads Græsbøll; Jensen, Jesper Rindom

    This book introduces readers to the novel concept of variable span speech enhancement filters, and demonstrates how it can be used for effective noise reduction in various ways. Further, the book provides the accompanying Matlab code, allowing readers to easily implement the main ideas discussed. Variable span filters combine the ideas of optimal linear filters with those of subspace methods, as they involve the joint diagonalization of the correlation matrices of the desired signal and the noise. The book shows how some well-known filter designs, e.g. the minimum distortion, maximum signal-to-noise ratio, Wiener, and tradeoff filters (including their new generalizations) can be obtained using the variable span filter framework. It then illustrates how the variable span filters can be applied in various contexts, namely in single-channel STFT-based enhancement, and in multichannel enhancement in both the time and STFT domains.

  11. Signal enhancement with variable span linear filters

    CERN Document Server

    Benesty, Jacob; Jensen, Jesper R

    2016-01-01

    This book introduces readers to the novel concept of variable span speech enhancement filters, and demonstrates how it can be used for effective noise reduction in various ways. Further, the book provides the accompanying Matlab code, allowing readers to easily implement the main ideas discussed. Variable span filters combine the ideas of optimal linear filters with those of subspace methods, as they involve the joint diagonalization of the correlation matrices of the desired signal and the noise. The book shows how some well-known filter designs, e.g. the minimum distortion, maximum signal-to-noise ratio, Wiener, and tradeoff filters (including their new generalizations) can be obtained using the variable span filter framework. It then illustrates how the variable span filters can be applied in various contexts, namely in single-channel STFT-based enhancement, in multichannel enhancement in both the time and STFT domains, and, lastly, in time-domain binaural enhancement. In these contexts, the properties of ...
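
    To illustrate the joint diagonalization at the core of the variable span framework, the sketch below solves the generalized eigenproblem of synthetic desired-signal and noise correlation matrices and builds a rank-Q filter from the leading eigenvectors. This is one simple rank-Q construction in the spirit of the book's minimum-distortion design, not the book's exact filter expressions; the matrices are invented.

        # Hedged sketch of a variable-span-style rank-Q filter; synthetic data.
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 3))
        Rs = A @ A.T                            # low-rank desired-signal corr.
        Rn = np.eye(8) * 0.1                    # noise correlation

        lam, B = eigh(Rs, Rn)                   # solves Rs b = lam Rn b
        lam, B = lam[::-1], B[:, ::-1]          # sort eigenpairs descending

        def variable_span_filter(Q, i_ref=0):
            """Rank-Q filter for the reference channel; span varies with Q."""
            BQ = B[:, :Q]                       # leading generalized eigvecs
            return BQ @ (BQ.T @ Rs[:, i_ref])

        h = variable_span_filter(Q=2)           # filter for channel 0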

  12. Crowd Sourcing for Challenging Technical Problems and Business Model

    Science.gov (United States)

    Davis, Jeffrey R.; Richard, Elizabeth

    2011-01-01

    Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider crowd sourcing platform, allowing challenges posted from each Center to be solved by employees of the others. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit. The Yet2.com challenges yielded many new industry and academic contacts in bone

  13. Development of an emissions inventory model for mobile sources

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A W; Broderick, B M [Trinity College, Dublin (Ireland). Dept. of Civil, Structural and Environmental Engineering

    2000-07-01

    Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentrations of a wide range of pollutants. A mutual characteristic of most of these strategies is a requirement for accurate data on both the quantity and spatial distribution of emissions to air in the form of an atmospheric emissions inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for a wide range of vehicle types. The majority of inventories are compiled using 'passive' data from either surveys or transportation models, and by their very nature tend to be out-of-date by the time they are compiled. Current trends are towards integrating urban traffic control systems and assessments of the environmental effects of motor vehicles. In this paper, a methodology for estimating emissions from mobile sources using real-time data is described. This methodology is used to calculate emissions of sulphur dioxide (SO2), oxides of nitrogen (NOx), carbon monoxide (CO), volatile organic compounds (VOC), particulate matter less than 10 μm aerodynamic diameter (PM10), 1,3-butadiene (C4H6) and benzene (C6H6) at a test junction in Dublin. Traffic data, which are required on a street-by-street basis, are obtained from induction loops and closed circuit television (CCTV) as well as statistical data. The observed traffic data are compared to simulated data from a travel demand model. As a test case, an emissions inventory is compiled for a heavily trafficked signalized junction in an urban environment using the measured data. In order that the model may be validated, the predicted emissions are employed in a dispersion model along with local meteorological conditions and site geometry. The resultant pollutant concentrations are compared to average ambient kerbside conditions
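
    The basic inventory calculation such a methodology performs is: hourly emission of each pollutant on a link equals the vehicle count per class times a class-specific emission factor times the link length. A minimal sketch follows; the emission factors are placeholders, not the Dublin values.

        # Hedged sketch of a link-level emissions calculation; factors invented.
        emission_factors = {           # g/km per vehicle, by (class, pollutant)
            ("car", "CO"): 2.1, ("car", "NOx"): 0.45,
            ("hgv", "CO"): 1.3, ("hgv", "NOx"): 5.2,
        }

        def link_emissions(counts, link_km, pollutant):
            """counts: {vehicle_class: vehicles/hour} -> g/hour on the link."""
            return sum(n * emission_factors[(cls, pollutant)] * link_km
                       for cls, n in counts.items())

        hourly_nox = link_emissions({"car": 900, "hgv": 60}, 0.25, "NOx")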

  14. Linking crowding, visual span, and reading.

    Science.gov (United States)

    He, Yingchen; Legge, Gordon E

    2017-09-01

    The visual span is hypothesized to be a sensory bottleneck on reading speed with crowding thought to be the major sensory factor limiting the size of the visual span. This proposed linkage between crowding, visual span, and reading speed is challenged by the finding that training to read crowded letters reduced crowding but did not improve reading speed (Chung, 2007). Here, we examined two properties of letter-recognition training that may influence the transfer to improved reading: the spatial arrangement of training stimuli and the presence of flankers. Three groups of nine young adults were trained with different configurations of letter stimuli at 10° in the lower visual field: a flanked-local group (flanked letters localized at one position), a flanked-distributed group (flanked letters distributed across different horizontal locations), and an isolated-distributed group (isolated and distributed letters). We found that distributed training, but not the presence of flankers, appears to be necessary for the training benefit to transfer to increased reading speed. Localized training may have biased attention to one specific, small area in the visual field, thereby failing to improve reading. We conclude that the visual span represents a sensory bottleneck on reading, but there may also be an attentional bottleneck. Reducing the impact of crowding can enlarge the visual span and can potentially facilitate reading, but not when adverse attentional bias is present. Our results clarify the association between crowding, visual span, and reading.

  15. London SPAN version 4 parameter file format

    International Nuclear Information System (INIS)

    2004-06-01

    Powernext SA is a Multilateral Trading Facility in charge of managing the French power exchange through an optional and anonymous organised trading system. Powernext SA collaborates with the clearing organization LCH.Clearnet SA to secure and facilitate the transactions. The French Standard Portfolio Analysis of Risk (SPAN) is a system used by LCH.Clearnet to calculate the initial margins from and for its clearing members. SPAN is a computerized system which calculates the impact of several possible variations of rates and volatility on by-product portfolios. The initial margin call is equal to the maximum probable loss calculated by the system. This document contains details of the format of the London SPAN version 4 parameter file. This file contains all the parameters and risk arrays required to calculate SPAN margins. London SPAN Version 4 is an upgrade from Version 3, which is also known as LME SPAN. This document contains the full revised file specification, highlighting the changes from Version 3 to Version 4

  16. Source term identification in atmospheric modelling via sparse optimization

    Science.gov (United States)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, a distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed, with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the
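
    As an illustration of sparse, nonnegative source-term recovery, the sketch below uses L1-regularized least squares with a positivity constraint, one standard surrogate for the sparsest-solution problem; it is not the authors' algorithm, and the source-receptor matrix and releases are synthetic.

        # Hedged sketch of nonnegative sparse recovery of a release time series.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        A = rng.random((30, 120))          # 30 receptors, 120 release windows
        x_true = np.zeros(120)
        x_true[[10, 47]] = [5.0, 2.0]      # two actual release pulses
        y = A @ x_true + 0.01 * rng.standard_normal(30)

        model = Lasso(alpha=0.01, positive=True, max_iter=50000).fit(A, y)
        x_hat = model.coef_                # sparse, nonnegative estimate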

  17. The effects of age on processing and storage in working memory span tasks and reading comprehension.

    Science.gov (United States)

    Schroeder, Paul J

    2014-01-01

    BACKGROUND/STUDY CONTEXT: Declines in verbal working memory span task performance have been associated with deficits in the language processing abilities of healthy older adults, but it is unclear how storage and processing contribute to this relationship. Moreover, recent studies of the psychometric properties of span measures in the general cognitive literature highlight the need for a critical reassessment of age-related differences in working memory task performance. Forty-two young (mean age = 19.45 years) and 42 older participants (mean age = 73.00 years) completed a series of neuropsychological screening measures, four memory span tasks (one-syllable word span, three-syllable word span, reading span, and sentence span), and a measure of reading comprehension. Each span measure was completed under self-paced and timed encoding conditions. A 2 (age) × 2 (task type) × 2 (encoding conditions) mixed-model design was used. (1) Age effects were reliable for both simple and complex span task performance; (2) limiting the available encoding time yielded lower recall scores across tasks and exacerbated age differences in simple span performance; and (3) both encoding condition and age affected the relationship between each of the span measures and the relationship between span and reading comprehension. Declines in both storage and processing abilities contributed to age differences in span task performance and the relationship between span and reading comprehension. Although older people appear to benefit from task administration protocols that promote successful memory encoding, researchers should be aware of the potential risks to validity posed by such accommodations.

  18. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 pp. Information Systems Science, bachelor's thesis. Supervisor(s): Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  19. Global Processing Speed as a Mediator of Developmental Changes in Children's Auditory Memory Span

    Science.gov (United States)

    Ferguson, A.N.; Bowey, J.A.

    2005-01-01

    This study examined the role of global processing speed in mediating age increases in auditory memory span in 5- to 13-year-olds. Children were tested on measures of memory span, processing speed, single-word speech rate, phonological sensitivity, and vocabulary. Structural equation modeling supported a model in which age-associated increases in…

  20. Exploratory and problem-solving consumer behavior across the life span.

    Science.gov (United States)

    Lesser, J A; Kunkel, S R

    1991-09-01

    Different cognitive functioning, social, and personality changes appear to occur systematically during the adult life span. This article synthesizes research on life span changes in order to develop age-specific models of shopping behavior. The models are tested within a naturalistic field study of shoppers.

  1. Decision-making heuristics and biases across the life span

    Science.gov (United States)

    Strough, JoNell; Karns, Tara E.; Schlosnagle, Leo

    2013-01-01

    We outline a contextual and motivational model of judgment and decision-making (JDM) biases across the life span. Our model focuses on abilities and skills that correspond to deliberative, experiential, and affective decision-making processes. We review research that addresses links between JDM biases and these processes as represented by individual differences in specific abilities and skills (e.g., fluid and crystallized intelligence, executive functioning, emotion regulation, personality traits). We focus on two JDM biases—the sunk-cost fallacy (SCF) and the framing effect. We trace the developmental trajectory of each bias from preschool through middle childhood, adolescence, early adulthood, and later adulthood. We conclude that life-span developmental trajectories differ depending on the bias investigated. Existing research suggests relative stability in the framing effect across the life span and decreases in the SCF with age, including in later life. We highlight directions for future research on JDM biases across the life span, emphasizing the need for process-oriented research and research that increases our understanding of JDM biases in people’s everyday lives. PMID:22023568

  3. Model of electron contamination sources for photon-beam radiotherapy

    International Nuclear Information System (INIS)

    Gonzalez Infantes, W.; Lallena Rojo, A. M.; Anguiano Millan, M.

    2013-01-01

    A model of virtual electron sources is proposed that allows the electron contamination sources to be reproduced from the input parameters of the patient representation. Comparing depth-dose values and profiles calculated from the full simulation of the treatment heads with the values calculated using the source model, it was found that the model is capable of reproducing depth-dose distributions and profiles. (Author)

  4. Life Span Extension and Neuronal Cell Protection by Drosophila Nicotinamidase

    OpenAIRE

    Balan, Vitaly; Miller, Gregory S.; Kaplun, Ludmila; Balan, Karina; Chong, Zhao-Zhong; Li, Faqi; Kaplun, Alexander; VanBerkum, Mark F. A.; Arking, Robert; Freeman, D. Carl; Maiese, Kenneth; Tzivion, Guri

    2008-01-01

    The life span of model organisms can be modulated by environmental conditions that influence cellular metabolism, oxidation, or DNA integrity. The yeast nicotinamidase gene pnc1 was identified as a key transcriptional target and mediator of calorie restriction and stress-induced life span extension. PNC1 is thought to exert its effect on yeast life span by modulating cellular nicotinamide and NAD levels, resulting in increased activity of Sir2 family class III histone ...

  5. The Life Span of the BD-PND Bubble Detector

    International Nuclear Information System (INIS)

    Vanhavere, F.; Loos, M.; Thierens, H.

    1999-01-01

    BD-PND bubble detectors from Bubble Technology Industries (BTI) were used to conduct a study of the life span of these detectors. The manufacturer guarantees an optimum detector performance for three months after receipt. Nevertheless, it is important to know the evolution of their characteristics with time, also after those three months. On a standard set-up with a 252Cf source the bubble detectors were irradiated until they reached the end of their life span. During this period, the evolution in sensitivity was monitored. The temperature compensating system seems to be the limiting factor with time for the use of the BTI bubble detectors. The change in temperature dependence with age was determined. The same parameters were also checked with several batches of detectors that were used in practice. (author)

  6. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS LINAC (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using risk spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS LINAC parts/systems are: 1) the SCL (superconducting linac) and the front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; 2) the RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Sufficient diagnostics have to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function.

  7. Modeling the explosion-source region: An overview

    International Nuclear Information System (INIS)

    Glenn, L.A.

    1993-01-01

    The explosion-source region is defined as the region surrounding an underground explosion that cannot be described by elastic or anelastic theory. This region extends typically to ranges up to 1 km/(kt)^(1/3), but for some purposes, such as yield estimation via hydrodynamic means (CORRTEX and HYDRO PLUS), the maximum range of interest is less by an order of magnitude. For the simulation or analysis of seismic signals, however, what is required is the time-resolved motion and stress state at the inelastic boundary. Various analytic approximations have been made for these boundary conditions, but since they rely on near-field empirical data they cannot be expected to reliably extrapolate to different explosion sites. More important, without some knowledge of the initial energy density and the characteristics of the medium immediately surrounding the explosion, these simplified models are unable to distinguish chemical from nuclear explosions, identify cavity decoupling, or account for such phenomena as anomalous dissipation via pore collapse.

  8. Life Span and Resiliency Theory: A Critical Review

    Directory of Open Access Journals (Sweden)

    Alexa Smith-Osborne

    2007-05-01

    Theories of life span development describe human growth and change over the life cycle (Robbins, Chatterjee, & Canda, 2006). Major types of developmental theories include biological, psychodynamic, behavioral, and social learning, cognitive, moral, and spiritual, and those influenced by systems, empowerment, and conflict theory. Life span development theories commonly focus on ontogenesis and sequential mastery of skills, tasks, and abilities. Social work scholars have pointed out that a limitation of life span and other developmental theory is lack of attention to resilience (Greene, 2007; Robbins et al., 1998). The concept of resilience was developed to “describe relative resistance to psychosocial risk experiences” (Rutter, 1999b, p. 119). Longitudinal studies focused on typical and atypical child development informed theory formulation in developmental psychopathology (Garmezy & Rutter, 1983; Luthar, Cichetti, & Becker, 2000) and in an evolving resilience model (Richardson, 2002; Werner & Smith, 1992). Research on resilience has found a positive relationship between a number of individual traits and contextual variables and resistance to a variety of risk factors among children and adolescents. More recently, resilience research has examined the operation of these same factors in the young adult, middle-age, and elder life stages. This article examines the historical and conceptual progression of the two developmental theories—life span and resiliency—and discusses their application to social work practice and education in human behavior in the social environment.

  9. Performance, Career Dynamics, and Span of Control

    DEFF Research Database (Denmark)

    Smeets, Valerie Anne Rolande; Waldman, Michael; Warzynski, Frederic Michel Patrick

    There is an extensive theoretical literature based on what is called the scale-of-operations effect, i.e., the idea that the return to managerial ability is higher the more resources the manager influences with his or her decisions. This idea leads to various testable predictions, including that higher ability managers should supervise more subordinates, or equivalently, have a larger span of control. And although some of this theory’s predictions have been empirically investigated, there has been little systematic investigation of the theory’s predictions concerning span of control. In this paper we first extend the theoretical literature on the scale-of-operations effect to allow firms’ beliefs concerning a manager’s ability to evolve over the manager’s career, where much of our focus is the determinants of span of control. We then empirically investigate testable predictions from...

  10. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    Science.gov (United States)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far field intensity measurement, while the weighting function of the sources is derived from the fiber end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show normalized root mean square error less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian-Schell model results shows a better agreement with the measurement. In addition, the complex degree of coherence, derived from the model results, is compared with the theoretical predictions of the modified Van Zernike equation showing very good agreement, which strongly supports the assumption that the large core MMF could be considered as a quasi-homogeneous source.
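
    The elementary source method itself reduces to an incoherent superposition of identical, spatially shifted beams. A one-dimensional sketch under assumed Gaussian elementary sources and a top-hat weighting function (both hypothetical; the paper derives them from far-field and near-field measurements) might look like:

        import numpy as np

        x = np.linspace(-100e-6, 100e-6, 2001)       # observation coordinate (m)
        shifts = np.linspace(-25e-6, 25e-6, 51)      # elementary-source positions (m)

        def elementary_intensity(u, w=8e-6):
            # assumed Gaussian elementary beam of 1/e^2 half-width w
            return np.exp(-2.0 * (u / w) ** 2)

        def weight(s, core_radius=25e-6):
            # assumed top-hat weighting standing in for the measured near field
            return 1.0 if abs(s) <= core_radius else 0.0

        # Quasi-homogeneous source: the elementary sources are mutually
        # uncorrelated, so intensities (not fields) add.
        intensity = sum(weight(s) * elementary_intensity(x - s) for s in shifts)
        intensity /= intensity.max()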

  11. Van Kampen Colimits as Bicolimits in Span

    Science.gov (United States)

    Heindel, Tobias; Sobociński, Paweł

    The exactness properties of coproducts in extensive categories and pushouts along monos in adhesive categories have found various applications in theoretical computer science, e.g. in program semantics, data type theory and rewriting. We show that these properties can be understood as a single universal property in the associated bicategory of spans. To this end, we first provide a general notion of Van Kampen cocone that specialises to the above colimits. The main result states that Van Kampen cocones can be characterised as exactly those diagrams in ℂ that induce bicolimit diagrams in the bicategory of spans Span(ℂ), provided that ℂ has pullbacks and enough colimits.

  12. Spanning organizational boundaries to manage creative processes:

    DEFF Research Database (Denmark)

    Andersen, Poul Houman; Kragh, Hanne; Lettl, Christopher

    2013-01-01

    In order to continue to be innovative in the current fast-paced and competitive environment, organizations are increasingly dependent on creative inputs developed outside their boundaries. The paper addresses the boundary spanning activities that managers undertake to a) select and mobilize creative talent, b) create shared identity, and c) combine and integrate knowledge in innovation projects involving external actors. We study boundary spanning activities in two creative projects in the LEGO group. One involves identifying and integrating deep, specialized knowledge; the other focuses ... actors, and how knowledge is integrated across organizational boundaries. We discuss implications of our findings for managers and researchers in a business-to-business context.

  13. Versatile Markovian models for networks with asymmetric TCP sources

    NARCIS (Netherlands)

    van Foreest, N.D.; Haverkort, Boudewijn R.H.M.; Mandjes, M.R.H.; Scheinhardt, Willem R.W.

    2004-01-01

    In this paper we use Stochastic Petri Nets (SPNs) to study the interaction of multiple TCP sources that share one or two buffers, thereby considerably extending earlier work. We first consider two sources sharing a buffer and investigate the consequences of two popular assumptions for the loss

  14. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using

  15. Individual differences in memory span: the contribution of rehearsal, access to lexical memory, and output speed.

    Science.gov (United States)

    Tehan, G; Lalor, D M

    2000-11-01

    Rehearsal speed has traditionally been seen to be the prime determinant of individual differences in memory span. Recent studies, in the main using young children as the subject population, have suggested other contributors to span performance, notably contributions from long-term memory and forgetting and retrieval processes occurring during recall. In the current research we explore individual differences in span with respect to measures of rehearsal, output time, and access to lexical memory. We replicate standard short-term phenomena; we show that the variables that influence children's span performance influence adult performance in the same way; and we show that lexical memory access appears to be a more potent source of individual differences in span than either rehearsal speed or output factors.

  16. Multicriteria decision making model for choosing between open source and non-open source software

    Directory of Open Access Journals (Sweden)

    Edmilson Alves de Moraes

    2008-09-01

    This article proposes the use of a multicriteria method for supporting a decision problem where the intent is to choose software given the options of open source and non-open source. The study shows how a method for decision making can be used to structure the problem and simplify the decision maker's job. The Analytic Hierarchy Process (AHP) method is described step by step and its benefits and flaws are discussed. Following the theoretical discussion, a multiple case study is presented, in which two companies apply the decision-making method. The analysis was supported by Expert Choice, a software package based on the AHP framework.
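
    For readers unfamiliar with AHP, the core computation is small: priorities are the normalized principal eigenvector of a pairwise comparison matrix, checked by a consistency ratio. A minimal sketch with a hypothetical three-criterion matrix (not taken from the case study):

        import numpy as np

        # Hypothetical pairwise comparisons on Saaty's 1-9 scale, e.g. cost,
        # vendor support and security; the values are for illustration only.
        A = np.array([[1.0, 3.0, 0.5],
                      [1/3, 1.0, 0.25],
                      [2.0, 4.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                # principal eigenvalue
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                   # criterion priorities

        n = A.shape[0]
        CI = (eigvals[k].real - n) / (n - 1)       # consistency index
        CR = CI / 0.58                             # Saaty's random index for n = 3
        print("weights:", weights.round(3), "consistency ratio:", round(float(CR), 3))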

  17. Laboratory Plasma Source as an MHD Model for Astrophysical Jets

    Science.gov (United States)

    Mayo, Robert M.

    1997-01-01

    The significance of the work described herein lies in the demonstration that Magnetized Coaxial Plasma Gun (MCG) devices like CPS-1 can produce energetic laboratory magneto-flows with embedded magnetic fields that can be used as a simulation tool to study the flow interaction dynamics of jet flows, to demonstrate the magnetic acceleration and collimation of flows with primarily toroidal fields, and to study cross-field transport in turbulent accreting flows. Since plasmas produced in MCG devices have magnetic topology and MHD flow regime similarity to stellar and extragalactic jets, we expect that careful investigation of these flows in the laboratory will reveal fundamental physical mechanisms influencing astrophysical flows. Discussion in the next section (sec. 2) focuses on recent results describing collimation, leading flow surface interaction layers, and turbulent accretion. The primary objectives for a new three-year effort would involve the development and deployment of novel electrostatic, magnetic, and visible plasma diagnostic techniques to measure plasma and flow parameters of the CPS-1 device in the flow chamber downstream of the plasma source to study (1) mass ejection, morphology, and collimation and stability of energetic outflows, (2) the effects of external magnetization on collimation and stability, (3) the interaction of such flows with background neutral gas, the generation of visible emission in such interactions, and the effect of neutral clouds on jet flow dynamics, and (4) the cross-magnetic-field transport of turbulent accreting flows. The applicability of existing laboratory plasma facilities to the study of stellar and extragalactic plasmas should be exploited to elucidate underlying physical mechanisms that cannot be ascertained through astrophysical observation, and to provide a baseline for a wide variety of proposed models, MHD and otherwise. The work proposed herein represents a continued effort on a novel approach in relating laboratory experiments to

  18. Near Source 2007 Peru Tsunami Runup Observations and Modeling

    Science.gov (United States)

    Borrero, J. C.; Fritz, H. M.; Kalligeris, N.; Broncano, P.; Ortega, E.

    2008-12-01

    On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0, centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed two weeks after the event and investigated the tsunami effects at 51 sites. Three tsunami fatalities were reported south of the Paracas Peninsula in a sparsely populated desert area where the largest tsunami runup heights and massive inundation distances up to 2 km were measured. Numerical modeling of the earthquake source and tsunami suggests that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. As with all near-field tsunamis, the waves struck within minutes of the massive ground shaking. Spontaneous evacuations coordinated by the Peruvian Coast Guard minimized the fatalities and illustrate the importance of community-based education and awareness programs. The residents of the fishing village Lagunilla were unaware of the tsunami hazard after an earthquake and did not evacuate, which resulted in 3 fatalities. Despite the relatively benign tsunami effects at Pisco from this event, the tsunami hazard for this city (and its liquefied natural gas terminal) should not be underestimated. Between 1687 and 1868, the city of Pisco was destroyed 4 times by tsunami waves. Since then, two events (1974 and 2007) have resulted in partial inundation and moderate damage. The fact that potentially devastating tsunami runup heights were observed immediately south of the peninsula only serves to underscore this point.

  19. A Systems Thinking Model for Open Source Software Development in Social Media

    OpenAIRE

    Mustaquim, Moyen

    2010-01-01

    In this paper a social media model, based on systems thinking methodology, is proposed to understand the behavior of the open source software development community working in social media. The proposed model is focused on relational influences of two different systems: social media and the open source community. This model can be useful for making decisions which are complicated and where solutions are not apparent. Based on the proposed model, an efficient way of working in open source developm...

  20. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    International Nuclear Information System (INIS)

    Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G

    2008-01-01

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.
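
    The dose step described above is a plain 2-D convolution. A schematic Python sketch, with a hypothetical rectangular in-air fluence and an assumed two-Gaussian pencil-beam kernel standing in for the experimentally determined one:

        import numpy as np
        from scipy.signal import fftconvolve

        ny, nx = 256, 256
        fluence = np.zeros((ny, nx))
        fluence[96:160, 80:176] = 1.0        # hypothetical open rectangular field

        yy, xx = np.mgrid[-32:32, -32:32].astype(float)
        r2 = xx**2 + yy**2
        kernel = np.exp(-r2 / 30.0) + 0.02 * np.exp(-r2 / 400.0)  # core + scatter tail
        kernel /= kernel.sum()               # normalize the pencil-beam kernel

        # Planar dose = in-air fluence convolved with the pencil-beam kernel
        dose = fftconvolve(fluence, kernel, mode="same")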

  2. Working memory and inhibitory control across the life span: Intrusion errors in the Reading Span Test.

    Science.gov (United States)

    Robert, Christelle; Borella, Erika; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2009-04-01

    The aim of this study was to examine to what extent inhibitory control and working memory capacity are related across the life span. Intrusion errors committed by children and younger and older adults were investigated in two versions of the Reading Span Test. In Experiment 1, a mixed Reading Span Test with items of various list lengths was administered. Older adults and children recalled fewer correct words and produced more intrusions than did young adults. Also, age-related differences were found in the type of intrusions committed. In Experiment 2, an adaptive Reading Span Test was administered, in which the list length of items was adapted to each individual's working memory capacity. Age groups differed neither on correct recall nor on the rate of intrusions, but they differed on the type of intrusions. Altogether, these findings indicate that the availability of attentional resources influences the efficiency of inhibition across the life span.

  3. Spanning the Home/Work Creative Space

    DEFF Research Database (Denmark)

    Davis, Lee N.; Davis, Jerome; Hoisl, Karin

    ... the employee brings to work. Based on Woodman et al.’s (1993) “interactionist perspective” on organizational creativity, supplemented by literature on search and knowledge re/combination, we explore whether and how leisure time activities can span the creative space between the employee’s home and workplace...

  4. Faster Fully-Dynamic minimum spanning forest

    DEFF Research Database (Denmark)

    Holm, Jacob; Rotenberg, Eva; Wulff-Nilsen, Christian

    2015-01-01

    We give a new data structure for the fully-dynamic minimum spanning forest problem in simple graphs. Edge updates are supported in O(log^4 n / log log n) expected amortized time per operation, improving the O(log^4 n) amortized bound of Holm et al. (STOC’98, JACM’01). We also provide a deterministic data...

  5. Source apportionment of airborne particulates through receptor modeling: Indian scenario

    Science.gov (United States)

    Banerjee, Tirthankar; Murari, Vishnu; Kumar, Manish; Raju, M. P.

    2015-10-01

    Airborne particulate chemistry is mostly governed by associated sources, and apportionment of specific sources is essential to delineate explicit control strategies. The present submission initially deals with the publications (1980s-2010s) of Indian origin which report regional heterogeneities of particulate concentrations with reference to associated species. Such meta-analyses clearly indicate the presence of reservoirs of both primary and secondary aerosols in different geographical regions. Further, the identification of specific signatory molecules for individual source categories was also evaluated in terms of scientific merit and repeatability. Source signatures mostly resemble international profiles, while in selected cases they lack appropriateness. In India, source apportionment (SA) of airborne particulates was initiated way back in 1985 through factor analysis; however, principal component analysis (PCA) shares the major proportion of applications (34%), followed by enrichment factor (EF, 27%), chemical mass balance (CMB, 15%) and positive matrix factorization (PMF, 9%). Mainstream SA analyses identify earth crust and road dust resuspension (traced by Al, Ca, Fe, Na and Mg) as a principal source (6-73%) followed by vehicular emissions (traced by Fe, Cu, Pb, Cr, Ni, Mn, Ba and Zn; 5-65%), industrial emissions (traced by Co, Cr, Zn, V, Ni, Mn, Cd; 0-60%), fuel combustion (traced by K, NH4+, SO4-, As, Te, S, Mn; 4-42%), marine aerosols (traced by Na, Mg, K; 0-15%) and biomass/refuse burning (traced by Cd, V, K, Cr, As, TC, Na, K, NH4+, NO3-, OC; 1-42%). In most cases, temporal variations of individual source contributions for a specific geographic region exhibit radical heterogeneity, possibly due to unscientific assignment of individual tracers to specific sources, exacerbated by methodological weaknesses, inappropriate sample sizes, the implications of secondary aerosols and inadequate emission inventories. Conclusively, a number of challenging
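
    Of the receptor models surveyed here, the APCS-MLR variant is compact enough to sketch. The following Python outline uses random numbers in place of a measured species-concentration matrix; the factor count and all data are hypothetical:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.random((120, 12))                      # samples x chemical species
        pm = X @ rng.random(12) + 0.1 * rng.standard_normal(120)   # total PM mass

        scaler = StandardScaler().fit(X)
        pca = PCA(n_components=4).fit(scaler.transform(X))
        scores = pca.transform(scaler.transform(X))    # candidate source factors

        # APCS step: subtract the factor scores of an artificial sample with
        # zero concentrations, then regress PM mass on the absolute scores.
        scores0 = pca.transform(scaler.transform(np.zeros((1, X.shape[1]))))
        apcs = scores - scores0
        reg = LinearRegression().fit(apcs, pm)
        contribution = reg.coef_ * apcs.mean(axis=0)   # mean mass per source factor
        print("mean source contributions:", contribution.round(3))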

  6. Relations between Preschool Attention Span-Persistence and Age 25 Educational Outcomes

    Science.gov (United States)

    McClelland, Megan M.; Acock, Alan C.; Piccinin, Andrea; Rhea, Sally Ann; Stallings, Michael C.

    2013-01-01

    This study examined relations between children's attention span-persistence in preschool and later school achievement and college completion. Children were drawn from the Colorado Adoption Project using adopted and non-adopted children (N = 430). Results of structural equation modeling indicated that children's age 4 attention span-persistence…

  7. STUDY THE CHARACTERISTICS OF SMALL AND VERY SMALL SPAN WINGS, USED ON SHIPS

    Directory of Open Access Journals (Sweden)

    Beazit ALI

    2011-07-01

    This scientific work presents the way in which small and very small span wings can be obtained starting from large span wings, using the two scales of the similarity theory. Based on the two-scale model, the coefficients x_c and y_c and the aspect ratio λ of the Göttingen 612 profile can be transcribed from model to full scale.

  8. Studies and modeling of cold neutron sources; Etude et modelisation des sources froides de neutron

    Energy Technology Data Exchange (ETDEWEB)

    Campioni, G

    2004-11-15

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis was organized along the following three axes. First, the gathering of the specific information forming the material of this work. This set of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, an uncoupling module, integrated in the official version of Tripoli-4, can perform Monte Carlo parametric studies with CPU-time savings reaching a factor of 50. A coupling module, simulating neutron guides, has also been developed and implemented in the Monte Carlo code McStas. Thirdly, a complete study for the validation of the installed calculation chain. These studies focus on three cold sources currently in operation: SP1 of the Orphee reactor and two other sources (SFH and SFV) of the HFR at the Laue Langevin Institute. These studies give examples of problems and methods for the design of future cold sources.

  9. A dwarf mouse model with decreased GH/IGF-1 activity that does not experience life-span extension: potential impact of increased adiposity, leptin, and insulin with advancing age.

    Science.gov (United States)

    Berryman, Darlene E; Lubbers, Ellen R; Magon, Vishakha; List, Edward O; Kopchick, John J

    2014-02-01

    Reduced growth hormone (GH) action is associated with extended longevity in many vertebrate species. GH receptor (GHR) null (GHR(-/-)) mice, which have a disruption in the GHR gene, are a well-studied example of mice that are insulin sensitive and long lived yet obese. However, unlike other mouse lines with reduced GH action, GH receptor antagonist (GHA) transgenic mice have reduced GH action yet exhibit a normal, not extended, life span. Understanding why GHA mice do not have extended life span though they share many physiological attributes with GHR(-/-) mice will help provide clues about how GH influences aging. For this study, we examined age- and sex-related changes in body composition, glucose homeostasis, circulating adipokines, and tissue weights in GHA mice and littermate controls. Compared with previous studies with GHR(-/-) mice, GHA mice had more significant increases in fat mass with advancing age. The increased obesity resulted in significant adipokine changes. Euglycemia was maintained in GHA mice; however, hyperinsulinemia developed in older male GHA mice. Overall, GHA mice experience a more substantial, generalized obesity accompanied by altered adipokine levels and glucose homeostasis than GHR(-/-) mice, which becomes more exaggerated with advancing age and which likely contributes to the lack of life-span extension in these mice.

  10. RANS modeling of scalar dispersion from localized sources within a simplified urban-area model

    Science.gov (United States)

    Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca

    2011-11-01

    The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Re_t = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows improved reliability of algebraic scalar flux models towards predicting both the mean concentration and the plume structure. Since algebraic flux models do not increase the computational effort substantially, the results indicate that the use of tensorial diffusivity can be a promising tool for dispersion simulations for the urban environment.

  11. Magnox fuel inventories. Experiment and calculation using a point source model

    International Nuclear Information System (INIS)

    Nair, S.

    1978-08-01

    The results of calculations of Magnox fuel inventories using the point source code RICE and the associated Magnox reactor data set have been compared with experimental measurements for the actinide isotopes 234,235,236,238U, 238,239,240,241,242Pu, 241,243Am and 242,244Cm and the fission product isotopes 142,143,144,145,146,150Nd, 95Zr, 134,137Cs, and 144Ce with its daughter 144Pr, produced in four samples of spent Magnox fuel spanning the burnup range 3000 to 9000 MWd/Te. The neutron emissions from a further two samples were also measured and compared with RICE predictions. The results of the comparison were such as to justify the use of the code RICE for providing source terms for environmental impact studies, for the isotopes considered in the present work. (author)

  12. Modelling of novel light sources based on asymmetric heterostructures

    International Nuclear Information System (INIS)

    Afonenko, A.A.; Kononenko, V.K.; Manak, I.S.

    1995-01-01

    For asymmetric quantum-well heterojunction laser sources, processes of carrier injection into the quantum wells are considered. In contrast to ordinary quantum-well light sources, the active layers in the novel nanocrystalline systems have different thicknesses and/or compositions. In addition, the wide-band-gap barrier layers separating the quantum wells may have a linear or parabolic energy potential profile. For various kinds of structures, mathematical simulation of the dynamic response has been carried out. (author). 8 refs, 5 figs

  13. Source apportionment of fine particulate matter in China in 2013 using a source-oriented chemical transport model.

    Science.gov (United States)

    Shi, Zhihao; Li, Jingyi; Huang, Lin; Wang, Peng; Wu, Li; Ying, Qi; Zhang, Hongliang; Lu, Li; Liu, Xuejun; Liao, Hong; Hu, Jianlin

    2017-12-01

    China has been suffering high levels of fine particulate matter (PM2.5). Designing effective PM2.5 control strategies requires information about the contributions of different sources. In this study, a source-oriented Community Multiscale Air Quality (CMAQ) model was applied to quantitatively estimate the contributions of different source sectors to PM2.5 in China. Emissions of primary PM2.5 and the gas pollutants SO2, NOx, and NH3, which are precursors of particulate sulfate, nitrate, and ammonium (SNA, major PM2.5 components in China), from eight source categories (power plants, residential sources, industries, transportation, open burning, sea salt, windblown dust and agriculture) were separately tracked to determine their contributions to PM2.5 in 2013. The industrial sector is the largest source of SNA in Beijing, Xi'an and Chongqing, followed by agriculture and power plants. Residential emissions are also important sources of SNA, especially in winter when severe pollution events often occur. Nationally, the contributions of different source sectors to annual total PM2.5 from high to low are industries, residential sources, agriculture, power plants, transportation, windblown dust, open burning and sea salt. Provincially, residential sources and industries are the major anthropogenic sources of primary PM2.5, while industries, agriculture, power plants and transportation are important for SNA in most provinces. For total PM2.5, residential and industrial emissions are the top two sources, with a combined contribution of 40-50% in most provinces. The contributions of power plants and agriculture to total PM2.5 are about 10% each. Secondary organic aerosol accounts for about 10% of annual PM2.5 in most provinces, with higher contributions in southern provinces such as Yunnan (26%), Hainan (25%) and Taiwan (21%). Windblown dust is an important source in western provinces such as Xizang (55% of total PM2.5), Qinghai (74%), Xinjiang (59

  14. Source apportionment of PM2.5 in North India using source-oriented air quality models

    International Nuclear Information System (INIS)

    Guo, Hao; Kota, Sri Harsha; Sahu, Shovan Kumar; Hu, Jianlin; Ying, Qi; Gao, Aifang; Zhang, Hongliang

    2017-01-01

    In recent years, severe pollution events were observed frequently in India, especially in its capital, New Delhi. However, limited studies have been conducted to understand the sources of high pollutant concentrations for designing effective control strategies. In this work, source-oriented versions of the Community Multi-scale Air Quality (CMAQ) model with the Emissions Database for Global Atmospheric Research (EDGAR) were applied to quantify the contributions of eight source types (energy, industry, residential, on-road, off-road, agriculture, open burning and dust) to fine particulate matter (PM2.5) and its components, including primary PM (PPM) and secondary inorganic aerosol (SIA), i.e. sulfate, nitrate and ammonium ions, in Delhi and three surrounding cities, Chandigarh, Lucknow and Jaipur, in 2015. PPM mass is dominated by industry and residential activities (>60%). The energy (∼39%) and industry (∼45%) sectors contribute significantly to PPM south of Delhi, where it reaches a maximum of 200 μg/m3 during winter. Unlike PPM, SIA concentrations from different sources are more heterogeneous. High SIA concentrations (∼25 μg/m3) in south Delhi and central Uttar Pradesh were mainly attributed to the energy, industry and residential sectors. Agriculture is more important for SIA than for PPM, and the contributions of on-road and open burning to SIA are also higher than to PPM. The residential sector contributes most to total PM2.5 (∼80 μg/m3), followed by industry (∼70 μg/m3) in North India. Energy and agriculture contribute ∼25 μg/m3 and ∼16 μg/m3 to total PM2.5, while SOA contributes <5 μg/m3. In Delhi, industry and residential activities contribute 80% of total PM2.5. - Highlights: • Sources of PM2.5 in North India were quantified by source-oriented CMAQ. • Industrial/residential activities are the dominating sources (60-70%) for PPM. • Energy/agriculture are the most important sources (30-40%) for SIA. • Strong seasonal

  15. The 2016-2017 Central Italy Seismic Sequence: Source Complexity Inferred from Rupture Models.

    Science.gov (United States)

    Scognamiglio, L.; Tinti, E.; Casarotti, E.; Pucci, S.; Villani, F.; Cocco, M.; Magnoni, F.; Michelini, A.

    2017-12-01

    The Apennines have been struck by several seismic sequences in recent years, showing evidence of the activation of multiple segments of normal fault systems in a variable and relatively short time span, as in the case of the 1980 Irpinia earthquake (three shocks in 40 s), the 1997 Umbria-Marche sequence (four main shocks in 18 days) and the 2009 L'Aquila earthquake, which had three segments activated within a few weeks. The 2016-2017 central Apennines seismic sequence began on August 24th with a Mw 6.0 earthquake, which struck the region between Amatrice and Accumoli causing 299 fatalities. This earthquake ruptured a nearly 20 km long normal fault and showed a quite heterogeneous slip distribution. On October 26th, another main shock (Mw 5.9) occurred near Visso, extending the activated seismogenic area toward the NW. It was a double event rupturing contiguous patches on the fault segment of the normal fault system. Four days after the second main shock, on October 30th, a third earthquake (Mw 6.5) occurred near Norcia, roughly midway between Accumoli and Visso. In this work we have inverted strong motion waveforms and GPS data to retrieve the source model of the Mw 6.5 event with the aim of interpreting the rupture process in the framework of this complex sequence of moderate magnitude earthquakes. We noted that some preliminary attempts to model the slip distribution of the October 30th main shock using a single fault plane oriented along the Apennines did not provide convincing fits to the observed waveforms. In addition, the deformation pattern inferred from satellite observations suggested the activation of a multi-fault structure, which is coherent with the complexity and the extension of the geological surface deformation. We investigated the role of multi-fault ruptures and found that this event revealed an extraordinary complexity of the rupture geometry and evolution: the coseismic rupture propagated almost simultaneously on a normal fault and on a blind fault

  16. Water Quality Assessment of River Soan (Pakistan) and Source Apportionment of Pollution Sources Through Receptor Modeling.

    Science.gov (United States)

    Nazeer, Summya; Ali, Zeshan; Malik, Riffat Naseem

    2016-07-01

    The present study was designed to determine the spatiotemporal patterns in water quality of River Soan using multivariate statistics. A total of 26 sites were surveyed along River Soan and its associated tributaries during pre- and post-monsoon seasons in 2008. Hierarchical agglomerative cluster analysis (HACA) classified sampling sites into three groups according to their degree of pollution, which ranged from least to high degradation of water quality. Discriminant function analysis (DFA) revealed that alkalinity, orthophosphates, nitrates, ammonia, salinity, and Cd were the variables that significantly discriminate among the three groups identified by HACA. Temporal trends identified through DFA revealed that COD, DO, pH, Cu, Cd, and Cr accounted for the major seasonal variations in water quality. PCA/FA identified six factors as potential sources of pollution of River Soan. Absolute principal component scores with the multiple regression method (APCS-MLR) further explained the percent contribution from each source. Heavy metals were largely added through industrial activities (28 %) and sewage waste (28 %), nutrients through agriculture runoff (35 %) and sewage waste (28 %), organic pollution through sewage waste (27 %) and urban runoff (17 %), and macroelements through urban runoff (39 %) and mineralization and sewage waste (30 %). The present study showed that anthropogenic activities are the major source of variations in River Soan. In order to address the water quality issues, implementation of effective waste management measures is needed.
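
    The HACA step used to group the 26 sites is standard agglomerative clustering. A minimal Python sketch with a random stand-in for the standardized water-quality matrix (Ward linkage is an assumption, as the abstract does not name the linkage rule):

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(2)
        sites = rng.standard_normal((26, 8))      # 26 sites x water-quality variables

        Z = linkage(sites, method="ward")         # hierarchical agglomerative tree
        groups = fcluster(Z, t=3, criterion="maxclust")   # three pollution classes
        print("site group labels:", groups)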

  17. EPIC Forest LAI Dataset: LAI estimates generated from the USDA Environmental Policy Impact Climate (EPIC) model (a widely used, field-scale, biogeochemical model) on four forest complexes spanning three physiographic provinces in VA and NC.

    Data.gov (United States)

    U.S. Environmental Protection Agency — This data depicts calculated and validated LAI estimates generated from the USDA Environmental Policy Impact Climate (EPIC) model (a widely used, field-scale,...

  18. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    Science.gov (United States)

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used as a routine in industry (e.g. the food, cosmetic or pharmaceutical industry) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphic user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by eTOXlab...
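
    The QSAR life cycle that eTOXlab wraps can be illustrated independently of the framework. Below is a minimal sketch of a build/validate/predict loop with hypothetical descriptors and a PLS model (one of many possible choices; this is not eTOXlab's own API):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.random((80, 20))              # 80 training compounds x 20 descriptors
        y = 2.0 * X[:, 0] - X[:, 3] + 0.1 * rng.standard_normal(80)   # toy property

        qsar = PLSRegression(n_components=3).fit(X, y)                 # build
        q2 = cross_val_score(qsar, X, y, cv=5, scoring="r2").mean()    # crude q2 proxy
        y_new = qsar.predict(rng.random((1, 20)))                      # predict
        print("cross-validated r2:", round(float(q2), 3), "prediction:", y_new.ravel().round(3))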

  19. Modelling [CAS - CERN Accelerator School, Ion Sources, Senec (Slovakia), 29 May - 8 June 2012]

    International Nuclear Information System (INIS)

    Spädtke, P

    2013-01-01

    Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described and some benchmarking is shown to estimate the computing times necessary for different problems. Different types of charged particle sources are presented together with suitable models describing the underlying physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H- sources), together with some remarks on beam transport. (author)

  20. Orion: a glimpse of hope in life span extension?

    Science.gov (United States)

    Muradian, K; Bondar, V; Bezrukov, V; Zhukovsky, O; Polyakov, V; Utko, N

    2010-01-01

    Orion is a multicomponent drug based on derivatives of taurocholic acid and several other compounds. Application of Orion to the feeding medium of Drosophila melanogaster resulted in increased life span and survival under stressful conditions. Two paradoxical features of the drug should be stressed: the "age-threshold" (life span extension was observed only when the drug was applied starting from the second half of life) and the induction of "centenarian" flies (older than 100 days). Orion enhanced survival under heat shock (38 degrees C) and in acidic (pH = 1.6) or alkaline (pH = 11.8) feeding media, but not under oxidative stresses modeled by 100% oxygen or application of hydrogen peroxide (H2O2).

  1. Monte Carlo model for a thick target T(D,n)4He neutron source

    International Nuclear Information System (INIS)

    Webster, W.M.

    1976-01-01

    A brief description is given of a calculational model developed to simulate a T(D,n)4He neutron source which is anisotropic in energy and intensity. The model also provides a means for including the time dependency of the neutron source. Although the model has been applied specifically to the Lawrence Livermore Laboratory ICT accelerator, the technique is general and can be applied to any similar neutron source.
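
    The essence of such a model is sampling emission angle and energy from anisotropic distributions. A toy rejection-sampling sketch with assumed angular forms (illustrative only; a faithful T(D,n)4He model would use measured kinematics and target stopping-power data):

        import numpy as np

        rng = np.random.default_rng(4)

        def yield_pdf(theta):
            return 1.0 + 0.3 * np.cos(theta)      # assumed forward-peaked yield

        def energy_mev(theta):
            return 14.1 + 0.6 * np.cos(theta)     # assumed angle-energy relation

        n = 100_000
        theta = np.arccos(1.0 - 2.0 * rng.random(n))       # isotropic proposal directions
        accept = rng.random(n) * 1.3 <= yield_pdf(theta)   # 1.3 bounds the pdf
        theta = theta[accept]
        energy = energy_mev(theta)                # per-neutron energies (MeV)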

  2. A 1D ion species model for an RF driven negative ion source

    Science.gov (United States)

    Turner, I.; Holmes, A. J. T.

    2017-08-01

    A one-dimensional model for an RF driven negative ion source has been developed based on an inductive discharge. The RF source differs from traditional filament and arc ion sources because there are no primary electrons present, and is simply composed of an antenna region (driver) and a main plasma discharge region. However the model does still make use of the classical plasma transport equations for particle energy and flow, which have previously worked well for modelling DC driven sources. The model has been developed primarily to model the Small Negative Ion Facility (SNIF) ion source at CCFE, but may be easily adapted to model other RF sources. Currently the model considers the hydrogen ion species, and provides a detailed description of the plasma parameters along the source axis, i.e. plasma temperature, density and potential, as well as current densities and species fluxes. The inputs to the model are currently the RF power, the magnetic filter field and the source gas pressure. Results from the model are presented and where possible compared to existing experimental data from SNIF, with varying RF power, source pressure.

  3. Minimal spanning trees, filaments and galaxy clustering

    International Nuclear Information System (INIS)

    Barrow, J.D.; Sonoda, D.H.

    1985-01-01

    A graph theoretical technique for assessing intrinsic patterns in point data sets is described. A unique construction, the minimal spanning tree, can be associated with any point data set given all the inter-point separations. This construction enables the skeletal pattern of galaxy clustering to be singled out in quantitative fashion and differs from other statistics applied to these data sets. This technique is described and applied to two- and three-dimensional distributions of galaxies and also to comparable random samples and numerical simulations. The observed CfA and Zwicky data exhibit characteristic distributions of edge-lengths in their minimal spanning trees which are distinct from those found in random samples. (author)
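
    A minimal computational sketch of the statistic: build the MST from all inter-point separations and examine its edge-length distribution (random points stand in for a galaxy catalogue):

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(5)
        points = rng.random((500, 2))             # stand-in for galaxy positions

        D = squareform(pdist(points))             # all inter-point separations
        mst = minimum_spanning_tree(D)            # unique for generic point sets
        edge_lengths = mst.data                   # the N-1 edges of the tree
        print("mean edge:", edge_lengths.mean(), "longest edge:", edge_lengths.max())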

  4. Interorganizational Boundary Spanning in Global Software Development

    DEFF Research Database (Denmark)

    Søderberg, Anne-Marie; Romani, Laurence

    This paper, which draws on a case study of collaborative work in a global software development project, focuses on key boundary spanners in an Indian vendor company, who are responsible for developing trustful and sustainable client relations and coordinating complex projects across multiple cultures, languages, organisational boundaries, time zones and geographical distances. It looks into how these vendor managers get prepared for their complex boundary spanning work, which cross-cultural challenges they experience in their collaboration with Western clients, and which skills and competencies they draw on in their efforts to deal with emerging cross-cultural issues in a way that paves the ground for developing a shared understanding and common platform for the client and vendor representatives. A framework of boundary spanning leadership practices is adapted to virtuality and cultural diversity.

  5. Characteristics and Source Apportionment of Marine Aerosols over East China Sea Using a Source-oriented Chemical Transport Model

    Science.gov (United States)

    Kang, M.; Zhang, H.; Fu, P.

    2017-12-01

    Marine aerosols exert a strong influence on global climate change and biogeochemical cycling, as oceans cover more than 70% of the Earth's surface. However, investigations of marine aerosols are relatively limited at present due to the difficulty and inconvenience of sampling marine aerosols as well as their diverse sources. The East China Sea (ECS), lying over the broad shelf of the western North Pacific, is adjacent to the Asian mainland, where continental-scale air pollution can impose a heavy load on the marine atmosphere through long-range atmospheric transport. Thus, the contributions of major sources to marine aerosols need to be identified for policy makers to develop cost-effective control strategies. In this work, a source-oriented version of the Community Multiscale Air Quality (CMAQ) model, which can directly track the contributions from multiple emission sources to marine aerosols, is used to investigate the contributions from power, industry, transportation, residential, biogenic and biomass burning sources to marine aerosols over the ECS in May and June 2014. The model simulations indicate significant spatial and temporal variations of concentrations as well as of the source contributions. This study demonstrates that the Asian continent can greatly affect the marine atmosphere through long-range transport.

  6. 'Localised creativity: a life span perspective'

    OpenAIRE

    Worth, Piers J.

    2000-01-01

    This thesis is based around a biographic study of the lives of 40 individuals (24 men and 16 women) with a reputation for creative work in a localised context (such as an organisation). The study examines life span development patterns from birth to middle age (45 - 60 years of age) with data gained by biographic interview and thematic analysis. Participants selected for this study are creative in that they have a reputation for producing new, novel and useful or appropriate contributions in ...

  7. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... The research identified logistic regression as a powerful tool for analysis of DMSMS and further developed twenty models attempting to identify the "best" way to model and predict DMSMS using logistic regression...

  8. Spanning trees and the Eurozone crisis

    Science.gov (United States)

    Dias, João

    2013-12-01

    The sovereign debt crisis in the euro area has not yet been solved and recent developments in Spain and Italy have further deteriorated the situation. In this paper we develop a new approach to analyze the ongoing Eurozone crisis. Firstly, we use Maximum Spanning Trees to analyze the topological properties of government bond rates’ dynamics. Secondly, we combine the information given by both Maximum and Minimum Spanning Trees to obtain a measure of market dissimilarity or disintegration. Thirdly, we extend this measure to include a convenient distance not limited to the interval [0, 2]. Our empirical results show that Maximum Spanning Tree gives an adequate description of the separation of the euro area into two distinct groups: those countries strongly affected by the crisis and those that have remained resilient during this period. The measures of market dissimilarity also reveal a persistent separation of these two groups and, according to our second measure, this separation strongly increased during the period July 2009-March 2012.
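
    A sketch of the tree construction is given below, assuming synthetic bond-rate series rather than the paper's data; correlations of rate changes are converted to the standard correlation distance and both trees are extracted with networkx.

```python
# Illustrative sketch of the Maximum/Minimum Spanning Tree step described
# above. The rate series and country list are made up for illustration.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
countries = ["DE", "FR", "IT", "ES", "PT", "IE", "GR", "NL"]
rates = rng.normal(size=(250, len(countries))).cumsum(axis=0)  # toy series

corr = np.corrcoef(np.diff(rates, axis=0), rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))   # correlation distance, bounded by [0, 2]

G = nx.Graph()
for i, a in enumerate(countries):
    for j in range(i + 1, len(countries)):
        G.add_edge(a, countries[j], weight=dist[i, j])

mst = nx.minimum_spanning_tree(G, weight="weight")  # closest-linked markets
xst = nx.maximum_spanning_tree(G, weight="weight")  # most dissimilar links

# A crude dissimilarity summary: the mean edge length of each tree.
print(sum(d["weight"] for *_, d in mst.edges(data=True)) / mst.number_of_edges())
print(sum(d["weight"] for *_, d in xst.edges(data=True)) / xst.number_of_edges())
```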

  9. Vision in Flies: Measuring the Attention Span.

    Science.gov (United States)

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    A visual stimulus at a particular location of the visual field may elicit a behavior while at the same time equally salient stimuli in other parts do not. This property of visual systems is known as selective visual attention (SVA). The animal is said to have a focus of attention (FoA) which it has shifted to a particular location. Visual attention normally involves an attention span at the location to which the FoA has been shifted. Here the attention span is measured in Drosophila. The fly is tethered and hence has its eyes fixed in space. It can shift its FoA internally. This shift is revealed using two simultaneous test stimuli with characteristic responses at their particular locations. In tethered flight a wild-type fly keeps its FoA at a certain location for up to 4 s. Flies with a mutation in the radish gene, which has been suggested to be involved in attention-like mechanisms, display a reduced attention span of only 1 s.

  10. Vision in Flies: Measuring the Attention Span.

    Directory of Open Access Journals (Sweden)

    Sebastian Koenig

    Full Text Available A visual stimulus at a particular location of the visual field may elicit a behavior while at the same time equally salient stimuli in other parts do not. This property of visual systems is known as selective visual attention (SVA). The animal is said to have a focus of attention (FoA) which it has shifted to a particular location. Visual attention normally involves an attention span at the location to which the FoA has been shifted. Here the attention span is measured in Drosophila. The fly is tethered and hence has its eyes fixed in space. It can shift its FoA internally. This shift is revealed using two simultaneous test stimuli with characteristic responses at their particular locations. In tethered flight a wild-type fly keeps its FoA at a certain location for up to 4 s. Flies with a mutation in the radish gene, which has been suggested to be involved in attention-like mechanisms, display a reduced attention span of only 1 s.

  11. Modeling a point-source release of 1,1,1-trichloroethane using EPA's SCREEN model

    International Nuclear Information System (INIS)

    Henriques, W.D.; Dixon, K.R.

    1994-01-01

    Using data from the Environmental Protection Agency's Toxic Release Inventory 1988 (EPA TRI88), pollutant concentration estimates were modeled for a point-source air release of 1,1,1-trichloroethane at the Savannah River Plant located in Aiken, South Carolina. Estimates were calculated using the EPA's SCREEN model under typical meteorological conditions to determine the maximum impact of the plume under different mixing conditions for locations within 100 meters of the stack. Input data for the SCREEN model were then manipulated to simulate the impact of the release under urban conditions (for the purpose of assessing future land-use considerations) and under flare release options to determine whether these parameters lessen or increase the probability of human or wildlife exposure to significant concentrations. The results were then compared to EPA reference concentrations (RfC) in order to assess the size of the buffer around the stack which may potentially have levels that exceed this level of safety.
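
    Screening models of this kind are built around the steady-state Gaussian plume solution. The sketch below implements only the textbook form of that formula; the dispersion coefficients are rough assumptions, not EPA's SCREEN parameterization.

```python
# Hedged sketch of the textbook Gaussian plume formula behind screening
# models such as SCREEN; not EPA's code or exact coefficients.
import numpy as np

def plume_concentration(Q, u, x, y, z, H):
    """Steady-state concentration (g/m^3) at (x, y, z) downwind of a stack.

    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m).
    """
    # Very rough rural power-law dispersion coefficients (assumed).
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    # Reflection at the ground: image source at height -H.
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration 100 m downwind of a 30 m stack.
print(plume_concentration(Q=10.0, u=3.0, x=100.0, y=0.0, z=0.0, H=30.0))
```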

  12. Open Source Software Success Model for Iran: End-User Satisfaction Viewpoint

    Directory of Open Access Journals (Sweden)

    Ali Niknafs

    2012-03-01

    Full Text Available Open source software development is a notable option for software companies. In recent years, the many advantages of this type of software have driven a move towards it in Iran. National security concerns, international restrictions, software and service costs, and other problems have intensified the importance of using this software. Users and their viewpoints are the critical success factor in software plans, but there is no appropriate model for the open source software case in Iran. This research tried to develop a model for measuring open source software success in Iran. The model was tested using data gathered from open source users through an online survey. The results showed that the components with a positive effect on open source success were user satisfaction, open source community service quality, open source quality, copyright and security.

  13. Power-law thermal model for blackbody sources

    International Nuclear Information System (INIS)

    Del Grande, N.K.

    1979-01-01

    The spectral radiant emittance W_E from a blackbody at a temperature kT for photons at energies E above the spectral peak (2.82144 kT) varies as (kT)^(E/kT). This power-law temperature dependence, an approximation of Planck's radiation law, may have applications for measuring the emissivity of sources emitting in the soft x-ray region.
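
    One way to see where this scaling comes from (a sketch, assuming the Wien limit E >> kT of Planck's law): holding E fixed, the logarithmic temperature sensitivity of the Wien emittance equals E/kT, so the emittance locally behaves as a power law in temperature.

```latex
% Wien limit of Planck's law (E >> kT):
W_E \propto E^{3}\, e^{-E/kT}
% Logarithmic temperature sensitivity at fixed E:
\frac{\partial \ln W_E}{\partial \ln (kT)} = \frac{E}{kT}
\quad\Longrightarrow\quad
W_E \propto (kT)^{E/kT} \quad \text{locally in } kT.
```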

  14. Outer heliospheric radio emissions. II - Foreshock source models

    Science.gov (United States)

    Cairns, Iver H.; Kurth, William S.; Gurnett, Donald A.

    1992-01-01

    Observations of LF radio emissions in the range 2-3 kHz by the Voyager spacecraft during the intervals 1983-1987 and 1989 to the present while at heliocentric distances greater than 11 AU are reported. New analyses of the wave data are presented, and the characteristics of the radiation are reviewed and discussed. Two classes of events are distinguished: transient events with varying starting frequencies that drift upward in frequency and a relatively continuous component that remains near 2 kHz. Evidence for multiple transient sources and for extension of the 2-kHz component above the 2.4-kHz interference signal is presented. The transient emissions are interpreted in terms of radiation generated at multiples of the plasma frequency when solar wind density enhancements enter one or more regions of a foreshock sunward of the inner heliospheric shock. Solar wind density enhancements by factors of 4-10 are observed. Propagation effects, the number of radiation sources, and the time variability, frequency drift, and varying starting frequencies of the transient events are discussed in terms of foreshock sources.
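
    The numbers behind this interpretation are easy to check with the standard electron plasma frequency relation; the ambient density below is an assumed order-of-magnitude value, not a measurement from the paper.

```python
# Electron plasma frequency for a nominal outer-heliosphere density and for
# the quoted 4-10x solar-wind density enhancements (densities are assumed).
import math

def plasma_frequency_khz(n_e_cm3):
    # f_p [Hz] ~ 8980 * sqrt(n_e [cm^-3]); standard plasma physics relation.
    return 8980.0 * math.sqrt(n_e_cm3) / 1000.0

n0 = 0.05  # assumed ambient electron density, cm^-3
for factor in (1, 4, 10):
    fp = plasma_frequency_khz(n0 * factor)
    print(f"{factor:2d}x density: f_p = {fp:.2f} kHz, 2*f_p = {2 * fp:.2f} kHz")
```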

  15. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for

  16. [The evolution of plant life span: facts and hypotheses].

    Science.gov (United States)

    2006-01-01

    There are two different views on the evolution of life forms in Cormophyta: from woody plants to herbaceous ones, or in the opposite direction, from herbs to trees. In accordance with these views it is supposed that life span in plants changed in the course of evolution from many years (perennials) to a few years (annuals, biennials), or in reverse, from a few years to many years. The author discusses the problems of senescence and longevity in Cormophyta in the context of various hypotheses of ageing (programmed death theory, mutation accumulation, antagonistic pleiotropy, disposable soma, genes of ageing, genes of longevity). Special attention is given to bio-morphological aspects of longevity and to cases of non-ageing plants ("negative senescence", "potential immortality"). It is proposed to distinguish seven models of simple ontogenesis in Cormophyta that can exemplify the diversity of mechanisms of ageing and longevity. The evolution of life span in plants is considered as an indirect result of natural selection of other characteristics of organisms or as a consequence of fixation of modifications (episelectional evolution). It seems that short life span could emerge several times during the evolution of one group of plants, thus favoring its adaptive radiation.

  17. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.; Jonsson, Sigurjon; Sudhaus, H.; Baumann, C.

    2012-01-01

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due

  18. Coagulation of aerosol populations in external mixture: modelling and experiments / Modelling of a multi-source aerosol population and apportionment of the contributions of each source at the urban scale with the dispersion model CHIMERE

    International Nuclear Information System (INIS)

    Dergaoui, Hilel

    2012-01-01

    This thesis was launched at the instigation of INERIS in order to bring some answers to several issues concerning the environmental and health impact of particle pollution. Indeed, the growing concern about public exposure at the urban scale to atmospheric particles and the gradual setting-up of emission reduction policies (for particles and their gaseous precursors) make it more and more necessary to apportion the various sources contributing to ambient particle concentrations and to quantify these contributions. Due to the highly complex relationships between emissions and measured concentrations, chemical transport models, which simulate advection, diffusion and the physico-chemical transformations undergone by pollutants in the atmosphere, have to be used. Particles remain a hard modeling task, due to their multiple sizes, chemical compositions and emission sources (including their gaseous precursors). Most chemical transport models use a simplified mathematical representation of atmospheric aerosols. Their size distribution is either represented by several log-normal distributions, or discretized into several sections whose mean diameters span from a few nanometers to tens of micrometers. Within each size class, particles are usually assumed to be well mixed, i.e. they all have the same composition, which is named internal mixing. However, in reality and close to emission sources, the particle population may have several distinct chemical compositions for one given size class, because sources emit particles with very different chemical compositions (e.g. traffic, heating, industries, vegetation), which refers to external mixing. Thus, the internal mixing assumption amounts to neglecting the mixing time between particles from different sources, which may entail significant errors in the computation of exposure and of particle physico-chemical properties, some of which, like the radiative effect, are precisely most sensitive to chemical composition. In this framework, the

  19. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

    The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source (heat and smoke source) may improve the possibility of obtaining Reynolds number independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement...

  20. [Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].

    Science.gov (United States)

    Ai, Jian-chao; Wang, Ning; Yang, Jing

    2014-09-01

    The paper determines the concentrations of 16 kinds of metal elements in soil samples collected in the Jiapigou goldmine upstream of the Songhua River. The UNMIX Model, which was recommended by US EPA, was applied in this study to get the source apportionment results, and Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions of this study are: (1) the concentrations of Cd, Hg, Pb and Ag exceeded Jilin Province soil background values and were obviously enriched in soil samples; (2) the UNMIX Model resolved four pollution sources: source 1 represents human activities of transportation, ore mining and garbage, and contributes 39.1%; source 2 represents the contribution of rock weathering and biological effects, and contributes 13.87%; source 3 is a comprehensive source of soil parent material and chemical fertilizer, and contributes 23.93%; source 4 represents iron ore mining and transportation sources, and contributes 22.89%; (3) the UNMIX Model results are in accordance with the survey of local land-use types, human activities and the Cd, Hg and Pb content distributions.

  1. Endangered Butterflies as a Model System for Managing Source Sink Dynamics on Department of Defense Lands

    Science.gov (United States)

    The project used three species of endangered butterflies as a model system to rigorously investigate the source-sink dynamics of species being managed on military lands. Butterflies have numerous advantages as models for source-sink dynamics, including rapid generation times and relatively limited dispersal, but they are subject to the same processes that determine source-sink dynamics of longer-lived, more vagile taxa. 1.2 Technical Approach: For two of our

  2. Challenges for Knowledge Management in the Context of IT Global Sourcing Models Implementation

    OpenAIRE

    Perechuda , Kazimierz; Sobińska , Małgorzata

    2014-01-01

    Part 2: Models and Functioning of Knowledge Management; The article gives a literature overview of the current challenges connected with the implementation of the newest IT sourcing models. In the dynamic environment, organizations are required to build their competitive advantage not only on their own resources, but also on resources commissioned from external providers, accessed through various forms of sourcing, including the sourcing of IT services. This paper pres...

  3. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements, including the tissue conductivity distribution and the geometry of the cortical surface...

  4. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  5. Free span burial inspection pig. Phase B

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-04-01

    This report deals with the design and construction of a pipeline pig for on-line internal inspection of offshore trenched gas pipelines for pipeline burial, free spans, exposures and loss of concrete weight coating. The measuring principle uses detection of the natural gamma radiation emitted by sea bed formations and the concrete coating of the pipe to map pipeline condition. The gamma ray flux penetrating to the internal side of the pipeline is an effect of the outside conditions. The measuring principle was confirmed in an earlier project; however, radon-222, occasionally present in the gas, seriously blurred the sensor signals of the previous instrumentation. The continued project activities were divided into two phases. Phase A comprised the design and construction of a detector system which could identify and quantify radioactive components from the decay of radon-222. During Phase A a new gamma detector was tested in full scale while exposed to radon-222. New data analysis procedures for the correction of the influence of radon-222 inside the pipeline were developed and their utility successfully demonstrated. During Phase B the new detector was mounted in a pipeline pig constructed for inspection of 30-inch gas pipelines. Working conditions were demonstrated in three runs through the southern route of the DONG-owned 30-inch gas pipelines crossing the Danish strait named the Great Belt. The FSB technology found 88% of the free spans identified by the latest acoustic survey. The FSB technology found in addition 22 free spans that were termed "invisible", because they were not identified by the most recent acoustic survey. It is believed that "invisible" free spans are either real free spans or locations where the pipeline has no or very little support from deposits in the pipeline trench. The FSB survey confirmed all exposed sections longer than 20 metres found by the acoustic survey in the first 21 kilometres of the pipeline. However, the FSB survey underestimated

  6. Modeling of magnetically enhanced capacitively coupled plasma sources: Ar discharges

    International Nuclear Information System (INIS)

    Kushner, Mark J.

    2003-01-01

    Magnetically enhanced capacitively coupled plasma sources use transverse static magnetic fields to modify the performance of low pressure radio frequency discharges. Magnetically enhanced reactive ion etching (MERIE) sources typically use magnetic fields of tens to hundreds of Gauss parallel to the substrate to increase the plasma density at a given pressure or to lower the operating pressure. In this article results from a two-dimensional hybrid-fluid computational investigation of MERIE reactors with plasmas sustained in argon are discussed for an industrially relevant geometry. The reduction in electron cross-field mobility as the magnetic field increases produces a systematic decrease in the dc bias (becoming more positive). This decrease is accompanied by a decrease in the energy and an increase in the angular spread of the ion flux to the substrate. Similar trends are observed when decreasing pressure at a constant magnetic field. Although for constant power the magnitudes of ion fluxes to the substrate increase at moderate magnetic fields, the fluxes decrease at larger magnetic fields. These trends are due, in part, to a reduction in the contributions of more efficient multistep ionization.
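
    The cross-field mobility reduction invoked above follows the classical magnetized-transport scaling; the collision time below is an illustrative assumption, not a value from the article.

```python
# Classical cross-field electron mobility: mu_perp = mu_0 / (1 + (omega_c*tau)^2).
import math

E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg

def cross_field_mobility(b_tesla, collision_time_s):
    mu0 = E_CHARGE * collision_time_s / M_E     # unmagnetized mobility
    omega_c = E_CHARGE * b_tesla / M_E          # electron cyclotron frequency
    return mu0 / (1.0 + (omega_c * collision_time_s) ** 2)

tau = 1e-7  # assumed electron momentum-transfer collision time, s
for gauss in (0, 50, 100, 200):
    mu = cross_field_mobility(gauss * 1e-4, tau)
    print(f"B = {gauss:3d} G: mu_perp = {mu:.3e} m^2/(V s)")
```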

  7. A study of electron-positron pair equilibria in models of compact X- and gamma-ray sources

    International Nuclear Information System (INIS)

    Bjoernsson, G.

    1990-01-01

    Thermal electron-positron pair equilibria in two-temperature models of compact X-ray and gamma-ray sources are studied. The pairs are assumed to be heated by Coulomb interaction with the much hotter protons and cooled by bremsstrahlung emission, Compton scattering, and annihilation. Two parameters, the proton optical depth and the compactness, characterize each equilibrium state. It is shown that a careful account of the energy balance is very important when the stability properties of the pair equilibria in a spherical plasma cloud are determined. The equilibria are found to be unstable in a very limited range of compactness and proton optical depth. This particular instability is unlikely to be the cause of the observed variability of the compact sources and implies that it is possible to build up high pair densities by a thermal mechanism in two-temperature environments. The most important result concerns the effects of pairs on the structure of geometrically and effectively optically thin accretion disks. A new approach for solving for the equilibrium structure of the disks is presented. In effect, the pair equilibrium states are projected into the space spanned by the disk structure parameters. This allows a direct visualization of all possible disk solutions at once. Each solution profile needs to be calculated only once and a complete disk solution is obtained by a simple radial coordinate transformation. The disk solutions are thus seen to be scale-free in terms of the radial coordinate as well as in terms of the mass of the central object and the accretion rate. Two particular disk solutions are given. It is shown that including electron-positron pairs in the disk structure calculations leads to a breakdown of the thin disk assumptions and that more detailed disk modeling is required before electron-positron pairs can be self-consistently included.

  8. Mathematical models of thermohydraulic disturbance sources in the NPP circuits

    International Nuclear Information System (INIS)

    Proskuryakov, K.N.

    1999-01-01

    Methods and means of diagnostics of equipment and processes at NPPs, allowing one to substantially increase the safety and economic efficiency of nuclear power plant operation, are considered. The development of mathematical models describing the occurrence and propagation of disturbances is presented.

  9. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... This thesis draws on available data from the electronics integrated circuit industry to attempt to assess whether statistical modeling offers a viable method for predicting the presence of DMSMS...

  10. Boundary Spanning in Global Software Development

    DEFF Research Database (Denmark)

    Søderberg, Anne-Marie; Romani, Laurence

    This paper focuses on Indian IT vendor managers who are responsible for developing client relations and coordinating complex global development projects. The authors revise a framework of boundary spanning leadership practices to adapt it to an offshore outsourcing context. The empirical investigation highlights how imbalances of power, exacerbated in the case of an Indian vendor and a European client, need to be taken into account. The paper thus contributes with a more context-sensitive understanding of inter-organizational boundary work. Taking the vendor perspective also leads to problematization of common...

  11. Signal Enhancement with Variable Span Linear Filters

    DEFF Research Database (Denmark)

    Benesty, Jacob; Christensen, Mads Græsbøll; Jensen, Jesper Rindom

    Variable span filters combine the ideas of optimal linear filters with those of subspace methods, as they involve the joint diagonalization of the correlation matrices of the desired signal and the noise. The book shows how some well-known filter designs, e.g. the minimum distortion and maximum signal... designs, can be derived in the time and STFT domains, and, lastly, in time-domain binaural enhancement. In these contexts, the properties of these filters are analyzed in terms of their noise reduction capabilities and desired signal distortion, and the analyses are validated and further explored in simulations.

  12. Robustness of Long Span Reciprocal Timber Structures

    DEFF Research Database (Denmark)

    Balfroid, Nathalie; Kirkegaard, Poul Henning

    2011-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. The interest has also been facilitated by recent severe structural failures, after which the engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper makes a discussion of such robustness issues related to the future development of reciprocal timber structures. The paper concludes that these kinds of structures can have a potential as long span timber structures in real projects if they are carefully designed with respect to the overall robustness strategies.

  13. Analysis of protection spanning-tree protocol

    Directory of Open Access Journals (Sweden)

    Б.Я. Корнієнко

    2007-01-01

    Full Text Available The extraordinarily rapid development of IT causes vulnerabilities and, thereafter, attacks that use these vulnerabilities. That is why one must, post factum or even in advance, speed up the invention of new information security systems as well as develop the old ones. The article concerns the Spanning-Tree Protocol – a vivid example of the case when the cure for one vulnerability creates a dozen new "weak spots".

  14. Computer modelling of radioactive source terms at a tokamak reactor

    International Nuclear Information System (INIS)

    Meide, A.

    1984-12-01

    The Monte Carlo code MCNP has been used to create a simple three-dimensional mathematical model representing 1/12 of a tokamak fusion reactor for studies of the exposure rate level from neutrons as well as gamma rays from the activated materials, and for later estimates of the consequences to the environment, public, and operating personnel. The model is based on the recommendations from the NET/INTOR workshops. (author)

  15. Considering a point-source in a regional air pollution model

    Energy Technology Data Exchange (ETDEWEB)

    Lipphardt, M.

    1997-06-19

    This thesis deals with the development and validation of a point-source plume model, with the aim of refining the representation of intensive point-source emissions in regional-scale air quality models. The plume is modelled at four levels of increasing complexity, from a modified Gaussian plume model to the Freiberg and Lusis ring model. Plume elevation is determined by Netterville's plume rise model, using turbulence and atmospheric stability parameters. A model for the effect of fine-scale turbulence on the mean concentrations in the plume is developed and integrated in the ring model. A comparison between results with and without considering micro-mixing shows the importance of this effect in a chemically reactive plume. The plume model is integrated into the Eulerian transport/chemistry model AIRQUAL, using an interface between AIRQUAL and the sub-model, and interactions between the two scales are described. A simulation of an air pollution episode over Paris is carried out, showing that the utilization of such a sub-scale model improves the accuracy of the air quality model.

  16. Modeling the NPE with finite sources and empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-12-31

    In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.

  17. Information contraction and extraction by multivariate autoregressive (MAR) modelling. Pt. 2. Dominant noise sources in BWRs

    International Nuclear Information System (INIS)

    Morishima, N.

    1996-01-01

    The multivariate autoregressive (MAR) modeling of a vector noise process is discussed in terms of the estimation of dominant noise sources in BWRs. The discussion is based on a physical approach: a transfer function model of BWR core dynamics is utilized in developing a noise model, and a set of input-output relations between three system variables and twelve different noise sources is obtained. By least-squares fitting of a theoretical PSD of the neutron noise to an experimental one, four kinds of dominant noise sources are selected. It is shown that some of the dominant noise sources consist of two or more different noise sources and have the spectral properties of being coloured and correlated with each other. By diagonalizing the PSD matrix for dominant noise sources, we may obtain an MAR expression for a vector noise process as a response to the diagonal elements (i.e. residual noises) being white and mutually independent. (Author)
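
    As an illustration of the MAR step (with synthetic data in place of the three BWR system variables), a vector autoregression can be fitted by least squares, e.g. with statsmodels:

```python
# Illustrative MAR/VAR fit to a synthetic 3-variable noise process; the
# coefficient matrix and noise scale below are assumptions, not BWR data.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T, k = 2000, 3
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])   # assumed first-order AR coefficient matrix
x = np.zeros((T, k))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(scale=0.1, size=k)  # driving white noise

model = VAR(x).fit(maxlags=5, ic="aic")  # model order chosen by AIC
print(model.k_ar)                         # estimated order
print(model.coefs[0])                     # should approximate A
# The residual covariance plays the role of the (white, mutually
# independent) residual noise sources after diagonalization.
print(model.sigma_u)
```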

  18. Source term model evaluations for the low-level waste facility performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Yim, M.S.; Su, S.I. [North Carolina State Univ., Raleigh, NC (United States)

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  19. Asteroid models from photometry and complementary data sources

    Energy Technology Data Exchange (ETDEWEB)

    Kaasalainen, Mikko [Department of Mathematics, Tampere University of Technology (Finland)

    2016-05-10

    I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.

  20. Asteroid models from photometry and complementary data sources

    International Nuclear Information System (INIS)

    Kaasalainen, Mikko

    2016-01-01

    I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.

  1. Modelling RF-plasma interaction in ECR ion sources

    Directory of Open Access Journals (Sweden)

    Mascali David

    2017-01-01

    Full Text Available This paper describes three-dimensional self-consistent numerical simulations of wave propagation in magnetoplasmas of Electron cyclotron resonance ion sources (ECRIS). Numerical results can give useful information on the distribution of the absorbed RF power and/or the efficiency of RF heating, especially in the case of alternative schemes such as mode-conversion based heating scenarios. The ray-tracing approximation is allowed only for wavelengths small compared to the system scale lengths; as a consequence, full-wave solutions of the Maxwell-Vlasov equations must be taken into account in compact and strongly inhomogeneous ECRIS plasmas. This contribution presents a multi-scale temporal domains approach for simultaneously including RF dynamics and plasma kinetics in a “cold plasma”, and some perspectives for “hot plasma” implementation. The presented results relate to the attempt to establish a mode-conversion scenario of OXB-type in double-frequency heating inside an ECRIS testbench.

  2. Current-voltage model of LED light sources

    DEFF Research Database (Denmark)

    Beczkowski, Szymon; Munk-Nielsen, Stig

    2012-01-01

    Amplitude modulation is rarely used for dimming light-emitting diodes in polychromatic luminaires due to big color shifts caused by the varying magnitude of the LED driving current and the nonlinear relationship between the intensity of a diode and the driving current. A current-voltage empirical model of light...

  3. On the sources of technological change: What do the models assume?

    International Nuclear Information System (INIS)

    Clarke, Leon; Weyant, John; Edmonds, Jae

    2008-01-01

    It is widely acknowledged that technological change can substantially reduce the costs of stabilizing atmospheric concentrations of greenhouse gases. This paper discusses the sources of technological change and the representations of these sources in formal models of energy and the environment. The paper distinguishes between three major sources of technological change - R&D, learning-by-doing and spillovers - and introduces a conceptual framework for linking modeling approaches to assumptions about these real-world sources. A selective review of modeling approaches, including those employing exogenous technological change, suggests that most formal models have meaningful real-world interpretations that focus on a subset of possible sources of technological change while downplaying the roles of others.

  4. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of a Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC's advantage of easily including system constraints, the load current and impedance network responses are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient simulation results of MPC are presented, which show the good reference tracking ability of this method. It provides a new control method for the Z-source NPC inverter...

  5. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, more regularly InSAR-derived surface displacements and seismological waveforms are combined, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal to improve crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged by simple planar finite

  6. Spatial and frequency domain ring source models for the single muscle fiber action potential

    DEFF Research Database (Denmark)

    Henneberg, Kaj-åge; R., Plonsey

    1994-01-01

    In the paper, single-fibre models for the extracellular action potential are developed that allow the potential to be evaluated at an arbitrary field point in the extracellular space. Fourier-domain models are restricted in that they evaluate potentials at equidistant points along a line parallel to the fibre axis. Consequently, they cannot easily evaluate the potential at the boundary nodes of a boundary-element electrode model. The Fourier-domain models employ axial-symmetric ring source models, and thereby provide higher accuracy than the line source model, where the source is lumped... Simulations including anisotropy show that the spatial models require extreme care in the integration procedure owing to the singularity in the weighting functions. With adequate sampling, the spatial models can evaluate extracellular potentials with high accuracy.
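
    A minimal sketch of the ring source evaluation, assuming a homogeneous isotropic medium (the paper's models are more general), integrates point sources around the ring numerically:

```python
# Potential of a current ring of radius a centered on the fibre axis,
# computed by summing discretized point sources around the ring. The
# singularity noted above appears if the field point lies on the ring.
import numpy as np

def ring_potential(i_ring, a, z_ring, r_field, z_field, sigma=1.0, n=400):
    """Potential (V) at the cylindrical field point (r_field, z_field)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Points on the ring, and their distance to the field point (on x-axis).
    dx = a * np.cos(phi) - r_field
    dy = a * np.sin(phi)
    dz = z_ring - z_field
    dist = np.sqrt(dx**2 + dy**2 + dz**2)
    # Each arc element carries i_ring / n of the total ring current.
    return np.sum(i_ring / n / (4.0 * np.pi * sigma * dist))

# Potential two fibre radii from a 25 um radius ring carrying 1 nA (assumed).
print(ring_potential(i_ring=1e-9, a=25e-6, z_ring=0.0,
                     r_field=50e-6, z_field=0.0))
```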

  7. Diamond carbon sources: a comparison of carbon isotope models

    International Nuclear Information System (INIS)

    Kirkley, M.B.; Otter, M.L.; Gurney, J.J.; Hill, S.J.

    1990-01-01

    The carbon isotope compositions of approximately 500 inclusion-bearing diamonds have been determined in the past decade. 98 percent of these diamonds readily fall into two broad categories on the basis of their inclusion mineralogies and compositions. These categories are peridotitic diamonds and eclogitic diamonds. Most peridotitic diamonds have δ¹³C values between -10 and -1 permil, whereas eclogitic diamonds have δ¹³C values between -28 and +2 permil. Peridotitic diamonds may represent primordial carbon; however, it is proposed that initially inhomogeneous δ¹³C values were subsequently homogenized, e.g. during melting and convection that is postulated to have occurred during the first billion years of the earth's existence. If this is the case, then the wider range of δ¹³C values exhibited by eclogitic diamonds requires a different explanation. Both the fractionation model and the subduction model can account for the range of observed δ¹³C values in eclogitic diamonds. 16 refs., 2 figs

  8. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Van Luik, A.E.; Williford, R.E.; Doctor, P.G.; Pacific Northwest Lab., Richland, WA; Roy F. Weston, Inc./Rogers and Assoc. Engineering Corp., Rockville, MD)

    1984-01-01

    Part of a strategy for evaluating the compliance of geologic repositories with Federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  9. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Doctor, P.G.; Williford, R.E.; Van Luik, A.E.

    1984-11-01

    Part of a strategy for evaluating the compliance of geologic repositories with federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  10. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes while the area sources are developed based on spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using renewal approach while time-independent frequency-magnitude relationships are proposed for area sources based on Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 square kilometers around Tehran. Previous researches and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to uniform magnitude scale; duplicate events and dependent shocks are removed. Completeness and time distribution of the compiled catalog is taken into account. The proposed area and linear seismic sources in conjunction with defined recurrence relationships can be used to develop time-dependent probabilistic seismic hazard analysis of Northern Iran.
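
    The magnitude-frequency step mentioned in the first sentence is commonly done with the Gutenberg-Richter relation; the sketch below uses the standard Aki maximum-likelihood b-value estimator on a synthetic catalog (not the Northern Iran data).

```python
# Fit log10 N(>=M) = a - b*M with the Aki (1965) maximum-likelihood b-value
# estimator; catalog, completeness magnitude and binning are assumed.
import numpy as np

rng = np.random.default_rng(3)
m_c = 4.0                                  # assumed magnitude of completeness
b_true = 1.0
# Gutenberg-Richter magnitudes are exponential above m_c; bin to 0.1 units.
mags = m_c + rng.exponential(scale=1.0 / (b_true * np.log(10.0)), size=1500)
mags = np.round(mags, 1)

def aki_b_value(magnitudes, m_c, dm=0.1):
    # The dm/2 term corrects for magnitude binning.
    return np.log10(np.e) / (np.mean(magnitudes) - (m_c - dm / 2.0))

b = aki_b_value(mags, m_c)
a = np.log10(len(mags)) + b * m_c          # a-value for the catalog span
print(f"b = {b:.2f}, a = {a:.2f}")
```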

  11. Calculation of media temperatures for nuclear sources in geologic depositories by a finite-length line source superposition model (FLLSSM)

    Energy Technology Data Exchange (ETDEWEB)

    Kays, W.M.; Hossaini-Hashemi, F. [Stanford Univ., Palo Alto, CA (USA). Dept. of Mechanical Engineering]; Busch, J.S. [Kaiser Engineers, Oakland, CA (USA)]

    1982-02-01

    A linearized transient thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies are represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern can be established in the medium at selected point of interest by superposition of the temperature rises calculated for each canister. A mathematical solution of the calculation for each separate source is given in this article, permitting a slow hand calculation. The full report, ONWI-94, contains the details of the computer code FLLSSM and its use, yielding the total solution in one computer output.
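
    The superposition idea can be sketched as follows, with each finite-length line source approximated by discretized point sources using the continuous point-source conduction solution; the rock properties and storage pattern are illustrative assumptions, not the report's values.

```python
# Superposition of temperature rises from finite-length line sources in an
# infinite medium (constant source strength assumed; the report's model
# also handles time-dependent decay heat).
import numpy as np
from scipy.special import erfc

def finite_line_temp_rise(q_per_m, length, src_xy, field_pt, t, alpha, k, n=50):
    """Temperature rise (K) at field_pt after time t (s) from one vertical
    finite-length line source, by summing continuous point sources:
    dT = dq * erfc(r / (2*sqrt(alpha*t))) / (4*pi*k*r)."""
    x0, y0 = src_xy
    x, y, z = field_pt
    zs = np.linspace(-length / 2.0, length / 2.0, n)
    r = np.sqrt((x - x0) ** 2 + (y - y0) ** 2 + (z - zs) ** 2)
    dq = q_per_m * length / n                  # W per discretized point
    return np.sum(dq * erfc(r / (2.0 * np.sqrt(alpha * t))) / (4.0 * np.pi * k * r))

# Superpose a 3x3 canister pattern on a 10 m grid (granite-like rock, assumed).
alpha, k = 1.2e-6, 2.8                          # m^2/s, W/(m K)
t = 10.0 * 365.25 * 24 * 3600                   # 10 years
dT = sum(finite_line_temp_rise(300.0, 4.0, (ix * 10.0, iy * 10.0),
                               (5.0, 5.0, 0.0), t, alpha, k)
         for ix in range(3) for iy in range(3))
print(f"Temperature rise at the point of interest: {dT:.1f} K")
```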

  12. SHEDS-HT: an integrated probabilistic exposure model for prioritizing exposures to chemicals with near-field and dietary sources.

    Science.gov (United States)

    Isaacs, Kristin K; Glen, W Graham; Egeghy, Peter; Goldsmith, Michael-Rock; Smith, Luther; Vallero, Daniel; Brooks, Raina; Grulke, Christopher M; Özkaynak, Halûk

    2014-11-04

    United States Environmental Protection Agency (USEPA) researchers are developing a strategy for high-throughput (HT) exposure-based prioritization of chemicals under the ExpoCast program. These novel modeling approaches for evaluating chemicals based on their potential for biologically relevant human exposures will inform toxicity testing and prioritization for chemical risk assessment. Based on probabilistic methods and algorithms developed for The Stochastic Human Exposure and Dose Simulation Model for Multimedia, Multipathway Chemicals (SHEDS-MM), a new mechanistic modeling approach has been developed to accommodate high-throughput (HT) assessment of exposure potential. In this SHEDS-HT model, the residential and dietary modules of SHEDS-MM have been operationally modified to reduce the user burden, input data demands, and run times of the higher-tier model, while maintaining critical features and inputs that influence exposure. The model has been implemented in R; the modeling framework links chemicals to consumer product categories or food groups (and thus exposure scenarios) to predict HT exposures and intake doses. Initially, SHEDS-HT has been applied to 2507 organic chemicals associated with consumer products and agricultural pesticides. These evaluations employ data from recent USEPA efforts to characterize usage (prevalence, frequency, and magnitude), chemical composition, and exposure scenarios for a wide range of consumer products. In modeling indirect exposures from near-field sources, SHEDS-HT employs a fugacity-based module to estimate concentrations in indoor environmental media. The concentration estimates, along with relevant exposure factors and human activity data, are then used by the model to rapidly generate probabilistic population distributions of near-field indirect exposures via dermal, nondietary ingestion, and inhalation pathways. Pathway-specific estimates of near-field direct exposures from consumer products are also modeled

  13. Evaluation of the influence of uncertain forward models on the EEG source reconstruction problem

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    ... in the different areas of the brain when noise is present. Results: Due to mismatch between the true and experimental forward model, the reconstruction of the sources is determined by the angles between the i'th forward field associated with the true source and the j'th forward field in the experimental forward... representation of the signal. Conclusions: This analysis demonstrated that caution is needed when evaluating the source estimates in different brain regions. Moreover, we demonstrated the importance of reliable forward models, which may be used as a motivation for including the forward model uncertainty...

  14. Identifying the Source of Misfit in Item Response Theory Models.

    Science.gov (United States)

    Liu, Yang; Maydeu-Olivares, Alberto

    2014-01-01

    When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to the bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.
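
    For orientation, the unadjusted bivariate Pearson's X² for one item pair is simply the observed-versus-expected discrepancy over the pair's contingency table; the model-implied probabilities in the sketch below are placeholders that would come from the fitted IRT model.

```python
# Unadjusted bivariate Pearson X^2 for a pair of binary items: compare the
# observed 2x2 table against model-implied cell probabilities (assumed here).
import numpy as np

def pairwise_pearson_x2(observed_counts, model_probs):
    """observed_counts, model_probs: 2x2 arrays; returns the X^2 statistic."""
    n = observed_counts.sum()
    expected = n * model_probs
    return np.sum((observed_counts - expected) ** 2 / expected)

obs = np.array([[312.0, 88.0],
                [104.0, 496.0]])   # observed joint responses of items i, j
pi = np.array([[0.30, 0.10],
               [0.12, 0.48]])      # model-implied cell probabilities (placeholder)
print(pairwise_pearson_x2(obs, pi))
# The adjusted versions in the abstract rescale this statistic so its mean
# and variance match a reference chi-square distribution.
```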

  15. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    Science.gov (United States)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  16. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    International Nuclear Information System (INIS)

    Song Yu; Dai Wei; Shao Min; Liu Ying; Lu Sihua; Kuster, William; Goldan, Paul

    2008-01-01

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower, than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two factor-analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles.
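
    CMB apportions measured ambient concentrations by solving a species-by-source linear system c = F s. Below is a minimal sketch using non-negative least squares, a common way to enforce physically non-negative contributions; the profile matrix and measurement vector are illustrative placeholders, not data from the study, and the study does not specify its solver.

    ```python
    # Minimal CMB-style sketch: ambient VOC concentrations c are modeled as
    # c = F @ s, where columns of F are source profiles (fraction of each
    # species emitted by each source) and s holds the source contributions.
    # All numbers below are placeholders for illustration only.
    import numpy as np
    from scipy.optimize import nnls

    F = np.array([            # rows: VOC species, columns: sources
        [0.40, 0.05, 0.10],
        [0.30, 0.10, 0.05],
        [0.10, 0.60, 0.05],
        [0.05, 0.05, 0.70],
    ])
    c = np.array([12.0, 9.0, 8.0, 6.0])  # measured ambient concentrations

    s, residual = nnls(F, c)             # non-negative source contributions
    print("contributions:", s, "residual norm:", residual)
    ```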

  17. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    Energy Technology Data Exchange (ETDEWEB)

    Song Yu; Dai Wei [Department of Environmental Sciences, Peking University, Beijing 100871 (China); Shao Min [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China)], E-mail: mshao@pku.edu.cn; Liu Ying; Lu Sihua [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China); Kuster, William; Goldan, Paul [Chemical Sciences Division, NOAA Earth System Research Laboratory, Boulder, CO 80305 (United States)

    2008-11-15

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower, than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two factor-analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles.

  18. Life span study report, 11, part 2

    International Nuclear Information System (INIS)

    Shimizu, Yukiko; Kato, Hiroo; Schull, W.J.

    1988-12-01

    ABCC and its successor, RERF, have followed, since 1959 and retrospectively to 1950, the mortality in a fixed cohort of survivors of the atomic bombings of Hiroshima and Nagasaki, the so-called Life Span Study (LSS) sample. The present study, the 11th in a series that began in 1961, extends the surveillance period by three more years and covers the period 1950-85. It is based on the recently revised dose system, called DS86, which has replaced previous estimates of individual exposures. The impact of the change from the old system of dosimetry, the T65DR, to the new one on the dose-response relationships for cancer mortality was described in the first of this series of reports. Here, the focus is on cancer mortality among the 76,000 A-bomb survivors within the LSS sample for whom DS86 doses have been estimated, with the emphasis on biological issues associated with radiation carcinogenesis. (author)

  19. Image Segmentation Using Minimum Spanning Tree

    Science.gov (United States)

    Dewi, M. P.; Armiati, A.; Alvini, S.

    2018-04-01

    This research aims to segment digital images. The purpose of segmentation is to separate the object from the background so that the main object can be processed for other purposes. Along with the development of technology in digital image processing applications, the segmentation process becomes increasingly necessary. The segmented image that results from the segmentation process should be accurate, because subsequent processing depends on interpreting the information in the image. This article discusses the application of the minimum spanning tree of a graph to the segmentation of digital images. The method is able to separate an object from the background, converting the image into a binary image. In this case, the object of interest is set to white while the background is black, or vice versa.
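
    A compact sketch of the idea under simple assumptions: a 4-connected pixel graph weighted by intensity difference, Kruskal's MST built with a union-find structure, and MST edges heavier than an illustrative threshold cut to split object from background. The threshold and toy image are our assumptions, not values from the article.

    ```python
    # Sketch of MST-based binary segmentation: build a 4-connected pixel graph
    # weighted by intensity difference, grow the MST with Kruskal's algorithm,
    # stop merging at edges heavier than a cut threshold, and report the
    # largest component as background (threshold and image are illustrative).
    import numpy as np

    def mst_segment(img, cut=0.2):
        h, w = img.shape
        idx = lambda r, c: r * w + c
        edges = []
        for r in range(h):
            for c in range(w):
                if c + 1 < w:
                    edges.append((abs(img[r, c] - img[r, c + 1]), idx(r, c), idx(r, c + 1)))
                if r + 1 < h:
                    edges.append((abs(img[r, c] - img[r + 1, c]), idx(r, c), idx(r + 1, c)))
        parent = list(range(h * w))
        def find(x):                      # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for wgt, a, b in sorted(edges):   # Kruskal: lightest edges first
            if wgt > cut:                 # heavy edges are never merged, which
                break                     # cuts the MST into separate segments
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
        labels = np.array([find(i) for i in range(h * w)]).reshape(h, w)
        background = np.bincount(labels.ravel()).argmax()
        return (labels != background).astype(np.uint8)   # object=1, background=0

    img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0          # toy image, one object
    print(mst_segment(img))
    ```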

  20. Phonological similarity in working memory span tasks.

    Science.gov (United States)

    Chow, Michael; Macnamara, Brooke N; Conway, Andrew R A

    2016-08-01

    In a series of four experiments, we explored what conditions are sufficient to produce a phonological similarity facilitation effect in working memory span tasks. By using the same set of memoranda, but differing the secondary-task requirements across experiments, we showed that a phonological similarity facilitation effect is dependent upon the semantic relationship between the memoranda and the secondary-task stimuli, and is robust to changes in the representation, ordering, and pool size of the secondary-task stimuli. These findings are consistent with interference accounts of memory (Brown, Neath, & Chater, Psychological Review, 114, 539-576, 2007; Oberauer, Lewandowsky, Farrell, Jarrold, & Greaves, Psychonomic Bulletin & Review, 19, 779-819, 2012), whereby rhyming stimuli provide a form of categorical similarity that allows distractors to be excluded from retrieval at recall.

  1. Hanford tank residual waste - Contaminant source terms and release models

    International Nuclear Information System (INIS)

    Deutsch, William J.; Cantrell, Kirk J.; Krupka, Kenneth M.; Lindberg, Michael L.; Jeffery Serne, R.

    2011-01-01

    Highlights: → Residual waste from five Hanford spent fuel process storage tanks was evaluated. → Gibbsite is a common mineral in tanks with high Al concentrations. → Non-crystalline U-Na-C-O-P ± H phases are common in the U-rich residual. → Iron oxides/hydroxides have been identified in all residual waste samples. → Uranium release is highly dependent on waste and leachant compositions. - Abstract: Residual waste is expected to be left in 177 underground storage tanks after closure at the US Department of Energy's Hanford Site in Washington State, USA. In the long term, the residual wastes may represent a potential source of contamination to the subsurface environment. Residual materials that cannot be completely removed during the tank closure process are being studied to identify and characterize the solid phases and estimate the release of contaminants from these solids to water that might enter the closed tanks in the future. As of the end of 2009, residual waste from five tanks has been evaluated. Residual wastes from adjacent tanks C-202 and C-203 have high U concentrations of 24 and 59 wt.%, respectively, while residual wastes from nearby tanks C-103 and C-106 have low U concentrations of 0.4 and 0.03 wt.%, respectively. Aluminum concentrations are high (8.2-29.1 wt.%) in some tanks (C-103, C-106, and S-112) and relatively low […] a Ca(OH)2-saturated solution, or a CaCO3-saturated water. Uranium release concentrations are highly dependent on waste and leachant compositions, with dissolved U concentrations one or two orders of magnitude higher in the tests with high-U residual wastes, and also higher when leached with the CaCO3-saturated solution than with the Ca(OH)2-saturated solution. Technetium leachability is not as strongly dependent on the concentration of Tc in the waste, and it appears to be slightly more leachable by the Ca(OH)2-saturated solution than by the CaCO3-saturated solution. In general, Tc is much less leachable (<10 wt.% of the

  2. Analytic sensing for multi-layer spherical models with application to EEG source imaging

    OpenAIRE

    Kandaswamy, Djano; Blu, Thierry; Van De Ville, Dimitri

    2013-01-01

    Source imaging maps back boundary measurements to underlying generators within the domain; e.g., retrieving the parameters of the generating dipoles from electrical potential measurements on the scalp, such as in electroencephalography (EEG). Fitting such a parametric source model is non-linear in the positions of the sources, and renewed interest in mathematical imaging has led to several promising approaches. One important step in these methods is the application of a sensing principle that ...

  3. Parallel Beam Dynamics Simulation Tools for Future Light Source Linac Modeling

    International Nuclear Information System (INIS)

    Qiang, Ji; Pogorelov, Ilya V.; Ryne, Robert D.

    2007-01-01

    Large-scale modeling on parallel computers is playing an increasingly important role in the design of future light sources. Such modeling provides a means to accurately and efficiently explore issues such as limits to beam brightness, emittance preservation, the growth of instabilities, etc. Recently the IMPACT code suite was enhanced to be applicable to future light source design. Simulations with IMPACT-Z were performed using up to one billion simulation particles for the main linac of a future light source to study the microbunching instability. Combined with the time-domain code IMPACT-T, it is now possible to perform large-scale start-to-end linac simulations for future light sources, including the injector, main linac, chicanes, and transfer lines. In this paper we provide an overview of the IMPACT code suite, its key capabilities, and recent enhancements pertinent to accelerator modeling for future linac-based light sources.

  4. Sources of motivation, interpersonal conflict management styles, and leadership effectiveness: a structural model.

    Science.gov (United States)

    Barbuto, John E; Xu, Ye

    2006-02-01

    126 leaders and 624 employees were sampled to test the relationship between sources of motivation and conflict management styles of leaders, and how these variables influence leadership effectiveness. Five sources of motivation measured by the Motivation Sources Inventory were tested: intrinsic process, instrumental, self-concept external, self-concept internal, and goal internalization. These sources of work motivation were related to Rahim's modes of interpersonal conflict management (dominating, avoiding, obliging, compromising, and integrating) and to perceived leadership effectiveness. A structural equation model tested leaders' conflict management styles and leadership effectiveness based upon different sources of work motivation. The model explained variance for obliging (65%), dominating (79%), avoiding (76%), and compromising (68%), but explained little variance for integrating (7%).

  5. On the cardinality of smallest spanning sets of rings | Boudi ...

    African Journals Online (AJOL)

    Let R = (R, +, ·) be a ring. Then Z ⊆ R is called spanning if the R-module generated by Z is equal to the ring R. A spanning set Z ⊆ R is called smallest if there is no spanning set of smaller cardinality than Z. It will be shown that the cardinality of a smallest spanning set of a ring R is not always decidable. In particular, a ring R ...

  6. Mitigating the Impact of Nurse Manager Large Spans of Control.

    Science.gov (United States)

    Simpson, Brenda Baird; Dearmon, Valorie; Graves, Rebecca

    Nurse managers are instrumental in achievement of organizational and unit performance goals. Greater spans of control for managers are associated with decreased satisfaction and performance. An interprofessional team measured one organization's nurse manager span of control, providing administrative assistant support and transformational leadership development to nurse managers with the largest spans of control. Nurse manager satisfaction and transformational leadership competency significantly improved following the implementation of large span of control mitigation strategies.

  7. Total Variability Modeling using Source-specific Priors

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2016-01-01

    …sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows, in the heterogeneous case, that using informative priors for computing the posterior can lead to favorable results. We focus on modeling the priors using a minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) datasets show that our proposed method beats four baselines: for i-vector extraction using an already trained matrix, for the short2-short3 task in SRE'08, five out of eight female and four out of eight male common conditions were improved. For the core-extended task in SRE'10, four out of nine female and six out of nine male common conditions were improved. When incorporating prior information…

  8. Tracking the MSL-SAM methane detection source location Through Mars Regional Atmospheric Modeling System (MRAMS)

    Science.gov (United States)

    Pla-García, Jorge

    2016-04-01

    1. Introduction: The putative in situ detection of methane by the Sample Analysis at Mars (SAM) instrument suite on Curiosity at Gale crater has garnered significant attention because of the potential implications for the presence of geological methane sources or indigenous Martian organisms [1, 2]. SAM reported detection of background levels of atmospheric methane of mean value 0.69±0.25 parts per billion by volume (ppbv) at the 95% confidence interval (CI). Additionally, in four sequential measurements spanning a 60-sol period, SAM observed elevated levels of methane of 7.2±2.1 ppbv (95% CI), implying that Mars is episodically producing methane from an additional unknown source. There are many major unresolved questions regarding this detection: 1) What are the potential sources of the methane release? 2) What causes the rapid decrease in concentration? 3) Where is the release location? 4) How spatially extensive is the release? 5) For how long is CH4 released? Regarding the first question, the source of the methane is so far not identified. It could be related to geological processes such as methane release from clathrates [3], serpentinisation [4] and volcanism [5], or to biological activity such as methanogenesis [6]. To answer the second question, on the rapid decrease in concentration, it is important to note that the photochemical lifetime of methane is of order 100 years, much longer than the atmospheric mixing time scale, and thus the gas should tend to be well mixed except near a source or shortly after an episodic release. The observed spike of 7 ppb from the background of […] the Mars Regional Atmospheric Modeling System (MRAMS). The model was focused on rover locations using nested grids with a spacing of 330 meters on the innermost grid, which is centered over the landing site [8, 9]. MRAMS is ideally suited for this investigation; the model is explicitly designed to simulate Mars' atmospheric circulations at the mesoscale and smaller with realistic, high-resolution surface properties [10, 11

  9. Receptor modeling for source apportionment of polycyclic aromatic hydrocarbons in urban atmosphere.

    Science.gov (United States)

    Singh, Kunwar P; Malik, Amrita; Kumar, Ranjan; Saxena, Puneet; Sinha, Sarita

    2008-01-01

    This study reports source apportionment of polycyclic aromatic hydrocarbons (PAHs) in particulate depositions on vegetation foliage near a highway in the urban environment of Lucknow city (India), using the principal component analysis/absolute principal component scores (PCA/APCS) receptor modeling approach. The multivariate method enables identification of major PAH sources along with their quantitative contributions with respect to individual PAHs. The PCA identified three major sources of PAHs: combustion, vehicular emissions, and diesel-based activities. The PCA/APCS receptor modeling approach revealed that the combustion sources (natural gas, wood, coal/coke, biomass) contributed 19-97% of various PAHs, vehicular emissions 0-70%, diesel-based sources 0-81%, and other miscellaneous sources 0-20% of different PAHs. The contributions of major pyrolytic and petrogenic sources to the total PAHs were 56% and 42%, respectively. Further, the combustion-related sources contribute the major fraction of the carcinogenic PAHs in the study area. The high correlation coefficients (R² > 0.75 for most PAHs) between the measured and predicted concentrations of PAHs support the applicability of the PCA/APCS receptor modeling approach for estimating source contributions to the PAHs in particulates.
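
    A minimal sketch of the PCA/APCS sequence on a synthetic data matrix: scores for an artificial "absolute zero" sample are subtracted from the sample scores, and each species is then regressed on the resulting absolute scores so the coefficients convert scores into concentration contributions. The data, component count, and variable names are placeholders, not values from the study.

    ```python
    # Sketch of the PCA/APCS receptor approach on illustrative data:
    # rows = samples, columns = PAH species (all values synthetic).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.gamma(2.0, 1.0, size=(50, 8))         # placeholder PAH data

    mean, std = X.mean(axis=0), X.std(axis=0)
    Z = (X - mean) / std                           # standardized concentrations
    pca = PCA(n_components=3).fit(Z)               # assume 3 retained sources
    scores = pca.transform(Z)
    z0 = (np.zeros_like(mean) - mean) / std        # "absolute zero" sample
    apcs = scores - pca.transform(z0[None, :])     # absolute PC scores

    for k in range(X.shape[1]):                    # per-species contributions
        reg = LinearRegression().fit(apcs, X[:, k])
        contrib = reg.coef_ * apcs.mean(axis=0)    # mean contribution per source
        print(f"species {k}: mean source contributions = {contrib}")
    ```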

  10. Source modelling of train noise - Literature review and some initial measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Xuetao; Jonasson, Hans; Holmberg, Kjell

    2000-07-01

    A literature review of source modelling of railway noise is reported. Measurements on a special test rig at Surahammar and on the new railway line between Arlanda and Stockholm City are reported and analyzed. In the analysis the train is modelled as a number of point sources with or without directivity and each source is combined with analytical sound propagation theory to predict the sound propagation pattern best fitting the measured data. Wheel/rail rolling noise is considered to be the most important noise source. The rolling noise can be modelled as an array of moving point sources, which have a dipole-like horizontal directivity and some kind of vertical directivity. In general it is necessary to distribute the point sources on several heights. Based on our model analysis the source heights for the rolling noise should be below the wheel axles and the most important height is about a quarter of wheel diameter above the railheads. When train speeds are greater than 250 km/h aerodynamic noise will become important and even dominant. It may be important for low frequency components only if the train speed is less than 220 km/h. Little data are available for these cases. It is believed that aerodynamic noise has dipole-like directivity. Its spectrum depends on many factors: speed, railway system, type of train, bogies, wheels, pantograph, presence of barriers and even weather conditions. Other sources such as fans, engine, transmission and carriage bodies are at most second order noise sources, but for trains with a diesel locomotive engine the engine noise will be dominant if train speeds are less than about 100 km/h. The Nord 2000 comprehensive model for sound propagation outdoors, together with the source model that is based on the understandings above, can suitably handle the problems of railway noise propagation in one-third octave bands although there are still problems left to be solved.

  11. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    Science.gov (United States)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources for river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions, whether for a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multi-point sources and multiple variables, there is some error in the computed results because many possible combinations of the pollution sources exist. However, with the help of prior experience to narrow the search scope, the relative errors of the identification results are less than 5%, which shows that the established source identification model can be used to direct emergency responses.
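
    A minimal sketch of this approach, assuming an instantaneous point-source analytic solution of the 1-D advection-dispersion equation as the objective and a bare-bones GA; all coefficients, observation points, and hyperparameters below are illustrative assumptions, not values from the study.

    ```python
    # GA-based source identification sketch for a straight river: an analytic
    # 1-D unsteady advection-dispersion solution serves as the forward model,
    # and a basic genetic algorithm searches for the spill mass M (kg) and
    # release position x0 (m). All numbers are illustrative.
    import numpy as np

    U, D, A = 0.5, 5.0, 20.0                   # flow speed, dispersion, area
    def conc(M, x0, x, t):                     # instantaneous point-source solution
        return M / (A * np.sqrt(4 * np.pi * D * t)) * np.exp(-(x - x0 - U * t) ** 2 / (4 * D * t))

    x_obs, t_obs = np.array([800.0, 1000.0, 1200.0]), 1800.0
    true = (50.0, 100.0)                       # "unknown" mass and position
    c_obs = conc(true[0], true[1], x_obs, t_obs)

    rng = np.random.default_rng(1)
    lo, hi = np.array([1.0, 0.0]), np.array([200.0, 500.0])
    pop = rng.uniform(lo, hi, size=(10, 2))    # population size 10, as in the study

    def fitness(p):                            # negative squared misfit
        return -np.sum((conc(p[0], p[1], x_obs, t_obs) - c_obs) ** 2)

    for gen in range(300):
        f = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(f)][-5:]                      # keep best half
        pick = rng.integers(5, size=(5, 2))
        w = rng.uniform(size=(5, 1))                           # arithmetic crossover
        kids = w * parents[pick[:, 0]] + (1 - w) * parents[pick[:, 1]]
        kids = np.clip(kids + rng.normal(0.0, 10.0, (5, 2)), lo, hi)  # mutation
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("estimated (mass, position):", best)  # should approach (50, 100)
    ```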

  12. Modeling and analysis of a transcritical rankine power cycle with a low grade heat source

    DEFF Research Database (Denmark)

    Nguyen, Chan; Veje, Christian

    …efficiency, exergetic efficiency and specific net power output. A generic cycle configuration has been used for analysis of a geothermal energy heat source. This model has been validated against similar calculations using industrial waste heat as the energy source. Calculations are done with fixed…

  13. Free Open Source Software: Social Phenomenon, New Management, New Business Models

    Directory of Open Access Journals (Sweden)

    Žilvinas Jančoras

    2011-08-01

    Full Text Available In the paper, the assumptions underlying the existence, development, financing and competition models of free and open source software are presented. Free software is examined as a social phenomenon, and open source software as an environment for technological and managerial innovation. The social and business interaction processes are analyzed. Article in Lithuanian.

  14. Parsing pyrogenic polycyclic aromatic hydrocarbons: forensic chemistry, receptor models, and source control policy.

    Science.gov (United States)

    O'Reilly, Kirk T; Pietari, Jaana; Boehm, Paul D

    2014-04-01

    A realistic understanding of contaminant sources is required to set appropriate control policy. Forensic chemical methods can be powerful tools in source characterization and identification, but they require a multiple-lines-of-evidence approach. Atmospheric receptor models, such as the US Environmental Protection Agency (USEPA)'s chemical mass balance (CMB), are increasingly being used to evaluate sources of pyrogenic polycyclic aromatic hydrocarbons (PAHs) in sediments. This paper describes the assumptions underlying receptor models and discusses challenges in complying with these assumptions in practice. Given the variability within, and the similarity among, pyrogenic PAH source types, model outputs are sensitive to specific inputs, and parsing among some source types may not be possible. Although still useful for identifying potential sources, the technical specialist applying these methods must describe both the results and their inherent uncertainties in a way that is understandable to nontechnical policy makers. The authors present an example case study concerning an investigation of a class of parking-lot sealers as a significant source of PAHs in urban sediment. Principal component analysis is used to evaluate published CMB model inputs and outputs. Targeted analyses of 2 areas where bans have been implemented are included. The results do not support the claim that parking-lot sealers are a significant source of PAHs in urban sediments. © 2013 SETAC.

  15. EU Regulation of E-Commerce: A Commentary

    NARCIS (Netherlands)

    Lodder, A.R.; Murray, A.D.

    2017-01-01

    For the last twenty years the European Union has been extremely active in the field of e-commerce. This important new book addresses the key pieces of EU legislation in the field of e-commerce, including the E-Commerce Directive, the Services Directive, the Consumer Directive, the General Data

  16. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential, and agriculture). On the other hand, the contribution of one emission sector to PM2.5 represents the contributions of all species in that sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential to PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx), driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid

  17. Front-line managers as boundary spanners: effects of span and time on nurse supervision satisfaction.

    Science.gov (United States)

    Meyer, Raquel M; O'Brien-Pallas, Linda; Doran, Diane; Streiner, David; Ferguson-Paré, Mary; Duffield, Christine

    2011-07-01

    To examine the influence of nurse manager span (number of direct report staff), time in staff contact, transformational leadership practices and operational hours on nurse supervision satisfaction. Increasing role complexity has intensified the boundary spanning functions of managers. Because work demands and scope vary by management position, time in staff contact rather than span may better explain managers' capacity to support staff. A descriptive, correlational design was used to collect cross-sectional survey and prospective work log and administrative data from a convenience sample of 558 nurses in 51 clinical areas and 31 front-line nurse managers from four acute care hospitals in 2007-2008. Data were analysed using hierarchical linear modelling. Span, but not time in staff contact, interacted with leadership and operational hours to explain supervision satisfaction. With compressed operational hours, supervision satisfaction was lower with highly transformational leadership in combination with wider spans. With extended operational hours, supervision satisfaction was higher with highly transformational leadership, and this effect was more pronounced under wider spans. Operational hours, which influence the manager's daily span (average number of direct report staff working per weekday), should be factored into the design of front-line management positions. © 2011 The Authors. Journal compilation © 2011 Blackwell Publishing Ltd.

  18. The importance of adult life-span perspective in explaining variations in political ideology.

    Science.gov (United States)

    Sedek, Grzegorz; Kossowska, Malgorzata; Rydzewska, Klara

    2014-06-01

    As a comment on Hibbing et al.'s paper, we discuss the evolution of political and social views from more liberal to more conservative over the span of adulthood. We show that Hibbing et al.'s theoretical model creates a false prediction from this developmental perspective, as increased conservatism in the adult life-span trajectory is accompanied by the avoidance of negative bias.

  19. Fecal indicator organism modeling and microbial source tracking in environmental waters: Chapter 3.4.6

    Science.gov (United States)

    Nevers, Meredith; Byappanahalli, Muruleedhara; Phanikumar, Mantha S.; Whitman, Richard L.

    2016-01-01

    Mathematical models have been widely applied to surface waters to estimate rates of settling, resuspension, flow, dispersion, and advection in order to calculate movement of particles that influence water quality. Of particular interest are the movement, survival, and persistence of microbial pathogens or their surrogates, which may contaminate recreational water, drinking water, or shellfish. Most models devoted to microbial water quality have been focused on fecal indicator organisms (FIO), which act as a surrogate for pathogens and viruses. Process-based modeling and statistical modeling have been used to track contamination events to source and to predict future events. The use of these two types of models require different levels of expertise and input; process-based models rely on theoretical physical constructs to explain present conditions and biological distribution while data-based, statistical models use extant paired data to do the same. The selection of the appropriate model and interpretation of results is critical to proper use of these tools in microbial source tracking. Integration of the modeling approaches could provide insight for tracking and predicting contamination events in real time. A review of modeling efforts reveals that process-based modeling has great promise for microbial source tracking efforts; further, combining the understanding of physical processes influencing FIO contamination developed with process-based models and molecular characterization of the population by gene-based (i.e., biological) or chemical markers may be an effective approach for locating sources and remediating contamination in order to protect human health better.

  20. Modelling Nd-isotopes with a coarse resolution ocean circulation model: Sensitivities to model parameters and source/sink distributions

    International Nuclear Information System (INIS)

    Rempfer, Johannes; Stocker, Thomas F.; Joos, Fortunat; Dutay, Jean-Claude; Siddall, Mark

    2011-01-01

    The neodymium (Nd) isotopic composition (εNd) of seawater is a quasi-conservative tracer of water mass mixing and is assumed to hold great potential for paleo-oceanographic studies. Here we present a comprehensive approach for the simulation of the two neodymium isotopes 143Nd and 144Nd using the Bern3D model, a low resolution ocean model. The high computational efficiency of the Bern3D model in conjunction with our comprehensive approach allows us to systematically and extensively explore the sensitivity of Nd concentrations and εNd to the parametrisation of sources and sinks. Previous studies have been restricted in doing so either by the chosen approach or by computational costs. Our study thus presents the most comprehensive survey of the marine Nd cycle to date. Our model simulates both Nd concentrations as well as εNd in good agreement with observations. εNd co-varies with salinity, thus underlining its potential as a water mass proxy. Results confirm that the continental margins are required as a Nd source to simulate Nd concentrations and εNd consistent with observations. We estimate this source to be slightly smaller than reported in previous studies and find that, above a certain magnitude, it affects εNd only to a small extent. On the other hand, the parametrisation of reversible scavenging considerably affects the ability of the model to simulate both Nd concentrations and εNd. Furthermore, despite their small contribution, we find dust and rivers to be important components of the Nd cycle. In additional experiments, we systematically varied the diapycnal diffusivity as well as the Atlantic-to-Pacific freshwater flux to explore the sensitivity of Nd concentrations and their isotopic signature to the strength and geometry of the overturning circulation. These experiments reveal that Nd concentrations and εNd are comparatively little affected by variations in diapycnal diffusivity and the Atlantic-to-Pacific freshwater flux
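
    For reference, εNd as used throughout is the standard parts-per-10⁴ deviation of the measured 143Nd/144Nd ratio from the chondritic uniform reservoir (CHUR); the definition below is the conventional one from the literature rather than stated in the record:

    ```latex
    \varepsilon_{\mathrm{Nd}} =
    \left(
      \frac{\left({}^{143}\mathrm{Nd}/{}^{144}\mathrm{Nd}\right)_{\mathrm{sample}}}
           {\left({}^{143}\mathrm{Nd}/{}^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}}
      - 1
    \right) \times 10^{4}
    ```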

  1. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    Science.gov (United States)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is chosen automatically via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.

  2. An incentive-based source separation model for sustainable municipal solid waste management in China.

    Science.gov (United States)

    Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin

    2015-05-01

    Municipal solid waste (MSW) management (MSWM) is most important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and this model was tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises for promoting source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY tonne(-1) (2.4 Euros tonne(-1)), compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimum interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model, but they had the least ability to make any change to the current recycling system. The strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.

  3. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  4. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, have the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
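
    One canonical way to combine direct survey estimates with auxiliary Big Data covariates, in line with the small area estimation literature the article draws on, is the Fay-Herriot area-level model; this choice of illustration is ours, as the article does not commit to a specific model. Its EBLUP shrinks each direct estimate toward a covariate-based synthetic estimate:

    ```latex
    \hat{\theta}_i = \hat{\gamma}_i\, y_i + \left(1 - \hat{\gamma}_i\right) \mathbf{x}_i^{\top}\hat{\boldsymbol{\beta}},
    \qquad
    \hat{\gamma}_i = \frac{\hat{\sigma}_v^{2}}{\hat{\sigma}_v^{2} + \psi_i}
    ```

    Here y_i is the direct estimate for area i, x_i holds the auxiliary covariates (e.g., Big Data proxies), ψ_i is the known sampling variance, and σ_v² is the between-area model variance, so areas with noisier direct estimates borrow more strength from the covariates.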

  5. Major models and data sources for residential and commercial sector energy conservation analysis. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-09-01

    Major models and data sources that can be used for energy-conservation analysis in the residential and commercial sectors are reviewed, to introduce the information that is or can be made available to DOE in support of its efforts to analyze and quantify policy and program requirements. Models and data sources examined in the residential sector are: ORNL Residential Energy Model; BECOM; NEPOOL; MATH/CHRDS; NIECS; Energy Consumption Data Base: Household Sector; Patterns of Energy Use by Electrical Appliances Data Base; Annual Housing Survey; 1970 Census of Housing; AIA Research Corporation Data Base; RECS; Solar Market Development Model; and ORNL Buildings Energy Use Data Book. Models and data sources examined in the commercial sector are: ORNL Commercial Sector Model of Energy Demand; BECOM; NEPOOL; Energy Consumption Data Base: Commercial Sector; F.W. Dodge Data Base; NFIB Energy Report for Small Businesses; ADL Commercial Sector Energy Use Data Base; AIA Research Corporation Data Base; Nonresidential Buildings Surveys of Energy Consumption; General Electric Co: Commercial Sector Data Base; The BOMA Commercial Sector Data Base; The Tishman-Syska and Hennessy Data Base; The NEMA Commercial Sector Data Base; ORNL Buildings Energy Use Data Book; and Solar Market Development Model. The purpose; basis for model structure; policy variables and parameters; level of regional, sectoral, and fuels detail; outputs; input requirements; sources of data; computer accessibility and requirements; and a bibliography are provided for each model and data source.

  6. Low dose γ-irradiation influence on Drosophila life span in different genetic backgrounds

    International Nuclear Information System (INIS)

    Moskalev, A.

    2007-01-01

    Complete text of publication follows. The main goal of this work was to study, in Drosophila melanogaster, the contribution of DNA damage sensing and repair, apoptosis, and heat shock defence to life span and physical activity alteration after gamma-irradiation at a low dose rate. In our experiments, the strains were exposed to chronic gamma-irradiation from a 226Ra source (50 R/h) at a dose rate of 0.17 cGy/h during the pre-imago development stages only. The absorbed radiation dose per generation (from embryo to imago, 12 days) was 60 cGy. Life span was estimated in adult males and females separately. We compared the life span of apoptotic (p53, DIAP-1, dApaf-1, Dcp-1, reaper, grim and hid), heat shock defence (HSP70, HSP23, HSF), DNA damage sensing (ATR) and repair (XPF, XPC, PCNA, DSB repair helicase homologs) mutants after chronic irradiation with that of controls. On the basis of our investigation we have concluded: 1) Low dose irradiation alters life span depending on genetic background (mutant alleles, heterozygosity level and sex); 2) The age dynamics of physical activity positively correlates with life span; 3) Longevity potential forms at early development stages; 4) DNA damage sensing, DNA repair, heat shock defence and apoptosis, as aging-preventing mechanisms, play a crucial role in radiation-induced life span hormesis.

  7. Martian methane plume models for defining Mars rover methane source search strategies

    Science.gov (United States)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.

  8. Development of Realistic Head Models for Electromagnetic Source Imaging of the Human Brain

    National Research Council Canada - National Science Library

    Akalin, Z

    2001-01-01

    In this work, a methodology is developed to solve the forward problem of electromagnetic source imaging using realistic head models. For this purpose, first, segmentation of the 3-dimensional MR head...

  9. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving

  10. Effects of Host-rock Fracturing on Elastic-deformation Source Models of Volcano Deflation.

    Science.gov (United States)

    Holohan, Eoghan P; Sudhaus, Henriette; Walter, Thomas R; Schöpfer, Martin P J; Walsh, John J

    2017-09-08

    Volcanoes commonly inflate or deflate during episodes of unrest or eruption. Continuum mechanics models that assume linear elastic deformation of the Earth's crust are routinely used to invert the observed ground motions. The source(s) of deformation in such models are generally interpreted in terms of magma bodies or pathways, and thus form a basis for hazard assessment and mitigation. Using discontinuum mechanics models, we show how host-rock fracturing (i.e. non-elastic deformation) during drainage of a magma body can progressively change the shape and depth of an elastic-deformation source. We argue that this effect explains the marked spatio-temporal changes in source model attributes inferred for the March-April 2007 eruption of Piton de la Fournaise volcano, La Reunion. We find that pronounced deflation-related host-rock fracturing can: (1) yield inclined source model geometries for a horizontal magma body; (2) cause significant upward migration of an elastic-deformation source, leading to underestimation of the true magma body depth and potentially to a misinterpretation of ascending magma; and (3) at least partly explain underestimation by elastic-deformation sources of changes in sub-surface magma volume.

  11. The eye-voice span during reading aloud

    Directory of Open Access Journals (Sweden)

    Jochen Laubrock

    2015-09-01

    Full Text Available Although eye movements during reading are modulated by cognitive processing demands, they also reflect visual sampling of the input, and possibly preparation of output for speech or the inner voice. By simultaneously recording eye movements and the voice during reading aloud, we obtained an output measure that constrains the length of time spent on cognitive processing. Here we investigate the dynamics of the eye-voice span (EVS, the distance between eye and voice. We show that the EVS is regulated immediately during fixation of a word by either increasing fixation duration or programming a regressive eye movement against the reading direction. EVS size at the beginning of a fixation was positively correlated with the likelihood of regressions and refixations. Regression probability was further increased if the EVS was still large at the end of a fixation: if adjustment of fixation duration did not sufficiently reduce the EVS during a fixation, then a regression rather than a refixation followed with high probability. We further show that the EVS can help understand cognitive influences on fixation duration during reading: in mixed model analyses, the EVS was a stronger predictor of fixation durations than either word frequency or word length. The EVS modulated the influence of several other predictors on single fixation durations. For example, word-N frequency effects were larger with a large EVS, especially when word N-1 frequency was low. Finally, a comparison of single fixation durations during oral and silent reading showed that reading is governed by similar principles in both reading modes, although EVS maintenance and articulatory processing also cause some differences. In summary, the eye-voice span is regulated by adjusting fixation duration and/or by programming a regressive eye movement when the eye-voice span gets too large. Overall, the EVS appears to be directly related to updating of the working memory buffer during reading.
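
    As an illustration of the mixed-model analyses described, below is a minimal statsmodels sketch with synthetic data and hypothetical column names (subject, fixdur, evs, freq, length); it is not the authors' actual model specification.

    ```python
    # Sketch of a linear mixed model predicting single-fixation durations from
    # the eye-voice span (EVS), word frequency, and word length, with random
    # intercepts per subject; all data below are synthetic placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 400
    data = pd.DataFrame({
        "subject": rng.integers(0, 20, n),       # 20 hypothetical readers
        "evs": rng.normal(10.0, 3.0, n),         # eye-voice span (characters)
        "freq": rng.normal(3.0, 1.0, n),         # log word frequency
        "length": rng.integers(2, 12, n),        # word length (letters)
    })
    data["fixdur"] = (250 + 8 * data["evs"] - 15 * data["freq"]
                      + 3 * data["length"] + rng.normal(0, 30, n))  # ms

    model = smf.mixedlm("fixdur ~ evs * freq + length", data,
                        groups=data["subject"])   # random intercept per subject
    print(model.fit().summary())
    ```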

  12. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  13. Added-value joint source modelling of seismic and geodetic data

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g., the precise definition of the source location from geodetic data and the sensitivity of seismic data to moment release at larger depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited amount of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling opens the door for Bayesian inferences of the source
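
    The rigorous data weighting described corresponds to a generalized least-squares misfit with full error covariances; the notation below is ours, not the authors':

    ```latex
    \chi^{2}(\mathbf{m}) =
    \mathbf{r}_{\mathrm{geod}}^{\top}\,\boldsymbol{\Sigma}_{\mathrm{geod}}^{-1}\,\mathbf{r}_{\mathrm{geod}}
    + \mathbf{r}_{\mathrm{seis}}^{\top}\,\boldsymbol{\Sigma}_{\mathrm{seis}}^{-1}\,\mathbf{r}_{\mathrm{seis}},
    \qquad
    \mathbf{r} = \mathbf{d} - \mathbf{g}(\mathbf{m})
    ```

    where Σ_geod is the full variance-covariance matrix capturing correlated geodetic (e.g., InSAR) noise, and Σ_seis carries the signal-to-noise-based seismic weights.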

  14. A Method of Auxiliary Sources Approach for Modelling the Impact of Ground Planes on Antenna

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2006-01-01

    The Method of Auxiliary Sources (MAS) is employed to model the impact of finite ground planes on the radiation from antennas. Two different antenna test cases are shown and the calculated results agree well with reference measurements.

  15. Energy models for commercial energy prediction and substitution of renewable energy sources

    International Nuclear Information System (INIS)

    Iniyan, S.; Suganthi, L.; Samuel, Anand A.

    2006-01-01

    In this paper, three models are presented: the Modified Econometric Mathematical (MEM) model, the Mathematical Programming Energy-Economy-Environment (MPEEE) model, and the Optimal Renewable Energy Mathematical (OREM) model. The actual demand for coal, oil and electricity is predicted using the MEM model based on economic, technological and environmental factors. The results were used in the MPEEE model, which determines the optimum allocation of commercial energy sources based on environmental limitations. The gap between the actual energy demand from the MEM model and the optimal energy use from the MPEEE model has to be met by renewable energy sources. The study develops an OREM model that would facilitate effective utilization of renewable energy sources in India, based on cost, efficiency, social acceptance, reliability, potential and demand. The economic variations in solar energy systems and the inclusion of an environmental constraint are also analyzed with the OREM model. The OREM model will help policy makers in the formulation and implementation of strategies concerning renewable energy sources in India for the next two decades

  16. Advance features in the SPAN and SPAN/XRF gamma ray and X ray spectrum analysis software

    International Nuclear Information System (INIS)

    Wang Liyu

    1998-01-01

    This paper describes the advanced techniques (integral peak background, experimental peak shape, and complex peak shape) that have been used successfully in the software packages SPAN and SPAN/XRF to process gamma ray and X ray spectra from HPGe and Si(Li) detectors. The main features of SPAN and SPAN/XRF are also described. The software runs on a PC and has convenient graphical capabilities and a powerful user interface. (author)

  17. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanner available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
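
    As an illustration of the spectrum-derivation step, the sketch below models a measured central-axis PDD as a weighted sum of precomputed monoenergetic depth-dose curves and fits the weights with a Levenberg-Marquardt least-squares solver; the basis curves and "measured" data are synthetic placeholders, not the authors' measurements, and squaring the parameters is our device for keeping the weights non-negative.

    ```python
    # Sketch of deriving spectral weights from a central-axis PDD: the PDD is
    # modeled as a weighted sum of monoenergetic basis curves and the weights
    # are fitted with Levenberg-Marquardt least squares (synthetic data).
    import numpy as np
    from scipy.optimize import least_squares

    depths = np.linspace(0.5, 20.0, 40)                       # depth in cm
    pdd_basis = np.array([np.exp(-mu * depths)                # toy basis curves
                          for mu in (0.20, 0.15, 0.10)])
    pdd_meas = 0.2 * pdd_basis[0] + 0.5 * pdd_basis[1] + 0.3 * pdd_basis[2]

    def residuals(u):
        w = u ** 2                            # squared params keep weights >= 0
        return w @ pdd_basis - pdd_meas

    fit = least_squares(residuals, x0=np.sqrt([0.3, 0.3, 0.3]), method="lm")
    weights = fit.x ** 2
    print("spectral weights:", weights / weights.sum())       # ~ (0.2, 0.5, 0.3)
    ```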

  18. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and of the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.

  19. Reading Ability and Memory Span: Long-Term Memory Contributions to Span for Good and Poor Readers.

    Science.gov (United States)

    McDougall, Sine J. P.; Donohoe, Rachael

    2002-01-01

    Investigates the extent to which differences in memory span for good and poor readers can be explained by differences in a long-term memory component to span as well as by differences in short-term memory processes. Discusses the nature of the interrelationships between memory span, reading and measures of phonological awareness. (SG)

  20. Source Release Modeling for the Idaho National Engineering and Environmental Laboratory's Subsurface Disposal Area

    International Nuclear Information System (INIS)

    Becker, B.H.

    2002-01-01

    A source release model was developed to determine the release of contaminants into the shallow subsurface as part of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) evaluation at the Idaho National Engineering and Environmental Laboratory's (INEEL) Subsurface Disposal Area (SDA). The output of the source release model is used as input to the subsurface transport and biotic uptake models. The model separates the waste into areas that match the actual disposal units, which allows quantitative evaluation of each unit's relative contribution to the total risk and evaluation of selective remediation of the disposal units within the SDA.

  1. Receptor modeling studies for the characterization of PM10 pollution sources in Belgrade

    Directory of Open Access Journals (Sweden)

    Mijić Zoran

    2012-01-01

    Full Text Available The objective of this study is to determine the major sources and potential source regions of PM10 over Belgrade, Serbia. The PM10 samples were collected from July 2003 to December 2006 in a highly urbanized area of Belgrade, and concentrations of Al, V, Cr, Mn, Fe, Ni, Cu, Zn, Cd and Pb were analyzed by atomic absorption spectrometry. The analysis of seasonal variations of PM10 mass and some element concentrations showed relatively higher concentrations in winter, which underlines the importance of local emission sources. The Unmix model was used for source apportionment purposes, and four main source profiles (fossil fuel combustion; traffic exhaust/regional transport from industrial centers; traffic-related particles/site-specific sources; and mineral/crustal matter) were identified. Among the resolved factors, fossil fuel combustion was the highest contributor (34%), followed by traffic/regional industry (26%). Conditional probability function (CPF) results identified possible directions of local sources. The potential source contribution function (PSCF) and concentration weighted trajectory (CWT) receptor models were used to identify the spatial source distribution and the contribution of regionally transported aerosols. [Project of the Ministry of Science of the Republic of Serbia, No. III43007 and No. III41011]
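
    The conditional probability function mentioned above is simple to compute: per wind sector, the fraction of samples whose concentration exceeds a high threshold. A minimal sketch follows, with synthetic wind-direction and PM10 arrays standing in for the Belgrade measurements.

```python
# Sketch of the conditional probability function (CPF) used to point at
# local source directions. Wind and PM10 data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
wind_dir = rng.uniform(0, 360, 1000)           # degrees
pm10 = rng.lognormal(3.5, 0.4, 1000)           # ug/m3, synthetic

threshold = np.percentile(pm10, 75)            # "high" = upper quartile
sector_width = 30.0
edges = np.arange(0, 360 + sector_width, sector_width)

for lo, hi in zip(edges[:-1], edges[1:]):
    in_sector = (wind_dir >= lo) & (wind_dir < hi)
    n = in_sector.sum()                        # all samples in this sector
    m = (in_sector & (pm10 >= threshold)).sum()  # high-concentration samples
    cpf = m / n if n else float("nan")
    print(f"{lo:3.0f}-{hi:3.0f} deg: CPF = {cpf:.2f}")
```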

  2. Modeling generalized interline power-flow controller (GIPFC using 48-pulse voltage source converters

    Directory of Open Access Journals (Sweden)

    Amir Ghorbani

    2018-05-01

    Full Text Available The generalized interline power-flow controller (GIPFC) is one of the voltage-source converter (VSC)-based flexible AC transmission system (FACTS) controllers that can independently regulate the power flow over each transmission line of a multiline system. This paper presents the modeling and performance analysis of a GIPFC based on 48-pulse voltage-source converters. The paper deals with a cascaded multilevel converter model, namely a 48-pulse (three-level) voltage source converter; the converter described here is a harmonic-neutralized, 48-pulse GTO converter. The GIPFC controller is based on d-q orthogonal coordinates. The algorithm is verified using simulations in the MATLAB/Simulink environment. Comparisons between the unified power flow controller (UPFC) and the GIPFC are also included. Keywords: Generalized interline power-flow controller (GIPFC), Voltage source converter (VSC), 48-pulse GTO converter
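
    Since the controller works in d-q orthogonal coordinates, a minimal Park (abc to d-q) transformation may help fix ideas. The amplitude-invariant convention is assumed here, and the 50 Hz balanced test signal is illustrative only, not taken from the paper.

```python
# Minimal abc -> d-q (Park) transformation of the kind a GIPFC controller
# uses to regulate power flow; amplitude-invariant form assumed.
import numpy as np

def abc_to_dq(v_a, v_b, v_c, theta):
    """Project three-phase quantities onto a rotating d-q frame."""
    two_thirds = 2.0 / 3.0
    v_d = two_thirds * (v_a * np.cos(theta)
                        + v_b * np.cos(theta - 2 * np.pi / 3)
                        + v_c * np.cos(theta + 2 * np.pi / 3))
    v_q = -two_thirds * (v_a * np.sin(theta)
                         + v_b * np.sin(theta - 2 * np.pi / 3)
                         + v_c * np.sin(theta + 2 * np.pi / 3))
    return v_d, v_q

# Balanced set rotating with theta: the d axis captures the full amplitude.
omega, t = 2 * np.pi * 50, 0.01
theta = omega * t
v_a = np.cos(omega * t)
v_b = np.cos(omega * t - 2 * np.pi / 3)
v_c = np.cos(omega * t + 2 * np.pi / 3)
print(abc_to_dq(v_a, v_b, v_c, theta))  # approximately (1.0, 0.0)
```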

  3. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    Science.gov (United States)

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources along with collimated laser illumination demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more-accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors show some discrepancies because a circular rather than linear approximation to the filament source was used in the modeling.

  4. Achieving strategic renewal: the multi-level influences of top and middle managers’ boundary-spanning

    NARCIS (Netherlands)

    L. Glaser (Lotte); S.P.L. Fourné (Sebastian); T. Elfring (Tom)

    2015-01-01

    textabstractDrawing on corporate entrepreneurship (CE) and social network research, this study focuses on strategic renewal as a form of CE and examines the impact of boundary-spanning at top and middle management levels on business units’ exploratory innovation. Analyses of multi-source and

  5. Achieving strategic renewal: the multi-level influences of top and middle managers’ boundary-spanning

    NARCIS (Netherlands)

    Glaser, L.; Fourne, S.P.L.; Elfring, T.

    2015-01-01

    Drawing on corporate entrepreneurship (CE) and social network research, this study focuses on strategic renewal as a form of CE and examines the impact of boundary-spanning at top and middle management levels on business units’ exploratory innovation. Analyses of multi-source and multi-level data,

  6. Seismic response computations for a long span bridge

    International Nuclear Information System (INIS)

    McCallen, D.B.

    1994-01-01

    The authors are performing large-scale numerical computations to simulate the earthquake response of a major long-span bridge that crosses the San Francisco Bay. The overall objective of the study is to estimate the response of the bridge to potential large-magnitude earthquakes generated on the nearby San Andreas and Hayward earthquake faults. Generation of a realistic model of the bridge system is complicated by the existence of large pile group foundations that extend deep into soft, saturated clay soils, and by the numerous expansion joints that segment the overall bridge structure. In the current study, advanced, nonlinear, finite element technology is being applied to rigorously model the detailed behavior of the bridge system and to shed light on the influence of the foundations and joints of the bridge

  7. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    Full Text Available This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints on the parameters of the model are included, enforcing the number of simultaneously active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.
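
    A toy version of the model-fitting idea might look as follows, with a single 2-D Gaussian lobe standing in for the true generative SRP response and a synthetic power map in place of real array data.

```python
# Sketch: fit a generative model (one Gaussian lobe, a stand-in for the
# real SRP response) to an acoustic power map and read off the source
# position. Map data are synthetic.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 4, 40), np.linspace(0, 3, 30))
true_pos, width = (2.6, 1.2), 0.35
srp = np.exp(-((x - true_pos[0])**2 + (y - true_pos[1])**2) / (2 * width**2))
srp += rng.normal(0, 0.05, srp.shape)            # unmodelled noise / clutter

def residuals(p):
    x0, y0, s, a = p
    model = a * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * s**2))
    return (model - srp).ravel()

fit = least_squares(residuals, x0=[2.0, 1.5, 0.5, 1.0])
print("estimated source position:", fit.x[:2].round(2))
```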

  8. The Analytical Repository Source-Term (AREST) model: Description and documentation

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Apted, M.J.; Engel, D.W.; Altenhofen, M.K.; Strachan, D.M.; Reid, C.R.; Windisch, C.F.; Erikson, R.L.; Johnson, K.I.

    1987-10-01

    The geologic repository system consists of several components, one of which is the engineered barrier system. The engineered barrier system interfaces with natural barriers that constitute the setting of the repository. A model that simulates the releases from the engineered barrier system into the natural barriers of the geosphere, called a source-term model, is an important component of any model for assessing the overall performance of the geologic repository system. The Analytical Repository Source-Term (AREST) model being developed is one such model. This report describes the current state of development of the AREST model and the code in which the model is implemented. The AREST model consists of three component models and five process models that describe the post-emplacement environment of a waste package. All of these components are combined within a probabilistic framework. The component models are a waste package containment (WPC) model that simulates the corrosion and degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. 167 refs., 40 figs., 12 tabs

  9. Life span study report 9, part 3

    International Nuclear Information System (INIS)

    Wakabayashi, Toshiro; Kato, Hiroo; Ikeda, Takayoshi; Schull, W.J.

    1983-04-01

    The incidence of malignant tumors in the RERF Life Span Study (LSS) sample in Nagasaki as revealed by the Nagasaki Tumor Registry (Registry) has been investigated for the period 1959-78. No exposure status bias in data collection has been revealed. Neither method of diagnosis, reporting hospitals, nor the frequency of doubtful cases differ by exposure dose. Thus, the effect of a bias, if one exists, must be small and should not affect the interpretation of the results obtained in the present analysis. The risk of radiogenic cancer definitely increases with radiation dose for leukemia, cancer of the breast, lung, stomach, and thyroid, and suggestively so for cancer of the colon and urinary tract and multiple myeloma. However, there is no increase as yet for cancer of the esophagus, liver, gall bladder, uterus, ovary, and salivary gland, or for malignant lymphoma. For fatal cancers, these results strengthen those of the recent analysis of mortality based on death certificates on the same LSS cohort. In general, the relative risks based on incidence (that is, on Registry data) are either the same or slightly higher than those based on mortality for the same years; however, the absolute risk estimates (excess cancer per million person-year per rad) are far higher. (author)

  10. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when...... are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...... in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm...
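
    The iteratively reweighted least squares step can be sketched generically. Below, G and d are random placeholders for the true monopole Green's-function matrix and the CHAMP field data, and the 1/|r| weights give the usual L1-type robust solution.

```python
# Minimal IRLS of the sort used to estimate the monopole source values.
# G and d are placeholders, not real geomagnetic data.
import numpy as np

def irls(G, d, n_iter=10, eps=1e-4):
    """Robust (L1-like) solution of G m ~ d by reweighted least squares."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]
    for _ in range(n_iter):
        r = d - G @ m
        w = 1.0 / np.maximum(np.abs(r), eps)    # downweight large residuals
        sw = np.sqrt(w)
        m = np.linalg.lstsq(sw[:, None] * G, sw * d, rcond=None)[0]
    return m

rng = np.random.default_rng(1)
G = rng.normal(size=(200, 20))
m_true = rng.normal(size=20)
d = G @ m_true + rng.normal(0, 0.1, 200)
d[::25] += 5.0                                  # a few gross outliers
m_hat = irls(G, d)
print("max coefficient error:", np.abs(m_hat - m_true).max().round(3))
```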

  11. Study on Cloud Security Based on Trust Spanning Tree Protocol

    Science.gov (United States)

    Lai, Yingxu; Liu, Zenghui; Pan, Qiuyue; Liu, Jing

    2015-09-01

    Attacks executed on the Spanning Tree Protocol (STP) expose the weakness of link layer protocols and put the higher layers in jeopardy. Although the problems have been studied for many years and various solutions have been proposed, many security issues remain. To enhance the security and credibility of layer-2 networks, we propose a trust-based spanning tree protocol aiming at achieving higher credibility of LAN switches with a simple and lightweight authentication mechanism. If correctly implemented in each trusted switch, the authentication of trust-based STP can guarantee the credibility of the topology information announced to the other switches in the LAN. To verify the enforcement of the trusted protocol, we present a new trust evaluation method for the STP using a specification-based state model. We implement a prototype of trust-based STP to investigate its practicality. Experiments show that the trusted protocol can achieve its security goals and effectively avoid STP attacks with low computation overhead and good convergence performance.

  12. A Quantitative Study on the Correlation between Grade Span Configuration of Sixth Grade Students in Private Florida Schools and Academic Achievement on Standardized Achievement Scores

    Science.gov (United States)

    Rantin, Deborah

    2017-01-01

    The applied dissertation was designed to investigate the three models of grade span configurations of sixth grade and the effects grade span configuration has on results of the standardized achievement scores of sixth grade students in private, Florida schools. Studies that have been conducted on sixth grade students and grade span configuration…

  13. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    Science.gov (United States)

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA is coded in the Python language and is largely based on a simplified formulation of the very popular and well-recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be managed entirely in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that processing times are satisfactory and that defining sources and receptors and retrieving output are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
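
    For orientation, the kernel that area-source models of this kind evaluate and integrate over source polygons is the standard ground-level Gaussian plume solution. The sketch below uses crude power-law dispersion coefficients as placeholders, not CAREA's or AERMOD's actual parameterization.

```python
# Textbook ground-level Gaussian plume concentration (full reflection).
# The sigma parameterizations are rough placeholders.
import numpy as np

def plume_conc(q, u, x, y, h):
    """Ground-level concentration (g/m3) at downwind x, crosswind y (m).

    q: emission rate (g/s), u: wind speed (m/s), h: effective source height (m).
    """
    sig_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # crude dispersion coefficients
    sig_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    return (q / (np.pi * u * sig_y * sig_z)
            * np.exp(-y**2 / (2 * sig_y**2))
            * np.exp(-h**2 / (2 * sig_z**2)))

print(plume_conc(q=10.0, u=3.0, x=500.0, y=0.0, h=20.0))
```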

  14. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the variability of source water total organic carbon (TOC) concentrations can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or of unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at the locations with the most anthropogenic influence on their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
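
    A minimal local (linear) polynomial regression, the core of the approach, can be written in a few lines; the temperature-TOC data below are synthetic stand-ins for the case-study records.

```python
# Sketch: local linear regression with a Gaussian kernel, the building
# block of local polynomial TOC prediction. Data are synthetic.
import numpy as np

def local_linear(x_train, y_train, x0, bandwidth):
    """Weighted linear fit around x0; returns the fitted value at x0."""
    w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x_train), x_train - x0])
    WX = X * w[:, None]
    beta = np.linalg.lstsq(WX.T @ X, WX.T @ y_train, rcond=None)[0]
    return beta[0]                              # intercept = estimate at x0

rng = np.random.default_rng(2)
temp = np.sort(rng.uniform(0, 25, 150))         # e.g. air temperature (degC)
toc = 3 + 0.1 * temp + np.sin(temp / 3) + rng.normal(0, 0.2, 150)
print(local_linear(temp, toc, x0=12.0, bandwidth=2.0))
```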

  15. Inter-comparison of receptor models for PM source apportionment: Case study in an industrial area

    Science.gov (United States)

    Viana, M.; Pandolfi, M.; Minguillón, M. C.; Querol, X.; Alastuey, A.; Monfort, E.; Celades, I.

    2008-05-01

    Receptor modelling techniques are used to identify and quantify the contributions from emission sources to the levels and major and trace components of ambient particulate matter (PM). A wide variety of receptor models are currently available, and consequently the comparability between models should be evaluated if source apportionment data are to be used as input in health effects studies or mitigation plans. Three of the most widespread receptor models (principal component analysis, PCA; positive matrix factorization, PMF; chemical mass balance, CMB) were applied to a single PM10 data set (n = 328 samples, 2002-2005) obtained from an industrial area in NE Spain dedicated to ceramic production. Sensitivity and temporal trend analyses (using the Mann-Kendall test) were applied. Results evidenced the good overall performance of the three models (r2 > 0.83 and slope > 0.91 between modelled and measured PM10 mass), with good agreement regarding source identification and high correlations between input (CMB) and output (PCA, PMF) source profiles. Larger differences were obtained regarding the quantification of source contributions (up to a factor of 4 in some cases). The combined application of different types of receptor models would overcome the limitations of each model by constructing a more robust solution based on their strengths. The authors suggest the combined use of factor analysis techniques (PCA, PMF) to identify and interpret emission sources and to obtain a first quantification of their contributions to the PM mass, followed by the application of CMB. Further research is needed to ensure that source apportionment methods are robust enough for application to PM health effects assessments.
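
    As a flavor of the factor-analytic step, the sketch below runs plain PCA (via scikit-learn) on a standardized species-by-sample matrix; the composition data are randomly generated rather than the 2002-2005 PM10 data set.

```python
# Sketch of the factor-analytic step: PCA on a species-by-sample matrix.
# Composition data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# 328 samples x 10 elements, mimicking the shape of the PM10 data set
X = rng.lognormal(mean=0.0, sigma=0.5, size=(328, 10))

Z = StandardScaler().fit_transform(X)           # z-score each species
pca = PCA(n_components=4)                       # four candidate factors
scores = pca.fit_transform(Z)                   # per-sample factor scores
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
print("loadings of factor 1:", pca.components_[0].round(2))
```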

  16. Electrical description of a magnetic pole enhanced inductively coupled plasma source: Refinement of the transformer model by reverse electromagnetic modeling

    International Nuclear Information System (INIS)

    Meziani, T.; Colpo, P.; Rossi, F.

    2006-01-01

    The magnetic pole enhanced inductively coupled source (MaPE-ICP) is an innovative low-pressure plasma source that allows for high plasma density and high plasma uniformity, as well as large-area plasma generation. This article presents an electrical characterization of this source, and the experimental measurements are compared to the results obtained after modeling the source by the equivalent circuit of the transformer. In particular, the method applied consists in performing a reverse electromagnetic modeling of the source by providing the measured plasma parameters such as plasma density and electron temperature as an input, and computing the total impedance seen at the primary of the transformer. The impedance results given by the model are compared to the experimental results. This approach allows for a more comprehensive refinement of the electrical model in order to obtain a better fitting of the results. The electrical characteristics of the system, and in particular the total impedance, were measured at the inductive coil antenna (primary of the transformer). The source was modeled electrically by a finite element method, treating the plasma as a conductive load and taking into account the complex plasma conductivity, the value of which was calculated from the electron density and electron temperature measurements carried out previously. The electrical characterization of the inductive excitation source itself versus frequency showed that the source cannot be treated as purely inductive and that the effect of parasitic capacitances must be taken into account in the model. Finally, considerations on the effect of the magnetic core addition on the capacitive component of the coupling are made

  17. Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models

    International Nuclear Information System (INIS)

    Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander

    2014-01-01

    In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we will combine Gaussian beams with an explicit high frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient

  18. Structure-borne low-frequency noise from multi-span bridges: A prediction method and spatial distribution

    Science.gov (United States)

    Song, X. D.; Wu, D. J.; Li, Q.; Botteldooren, D.

    2016-04-01

    Structure-borne noise from railway bridges at far-field points is an important indicator in environmental noise assessment. However, studies that predict structure-borne noise tend to model only single-span bridges, thus ignoring the sound pressure radiating from adjacent spans. To simulate the noise radiating from multi-span bridges induced by moving vehicles, the vibrations of a multi-span bridge are first obtained from a three-dimensional (3D) vehicle-track-bridge dynamic interaction simulation using the mode superposition method. A procedure based on the 2.5-dimensional (2.5D) boundary element method (BEM) is then presented to promote the efficiency of acoustical computation compared with the 3D BEM. The simulated results obtained from both the single-span and multi-span bridge models are compared with the measured results. The sound predictions calculated from the single-span model are accurate only for a minority of near-field points. In contrast, the sound pressures calculated from the multi-span bridge model match the measured results in both the time and frequency domains for all of the near-field and far-field points. The number of bridge spans required in the noise simulation is then recommended related to the distance between the track center and the field points of interest. The spatial distribution of multi-span structure-borne noise is also studied. The variation in sound pressure levels is insignificant along the length of the bridge, which validates the finding that the sound test section can be selected at an arbitrary plane perpendicular to the multi-span bridge.

  19. Solving the forward problem in EEG source analysis by spherical and fdm head modeling: a comparative analysis - biomed 2009

    NARCIS (Netherlands)

    Vatta, F.; Meneghini, F.; Esposito, F.; Mininel, S.; Di Salle, F.

    2009-01-01

    Neural source localization techniques based on electroencephalography (EEG) use scalp potential data to infer the location of underlying neural activity. This procedure entails modeling the sources of EEG activity and modeling the head volume conduction process to link the modeled sources to the

  20. Introducing a new open source GIS user interface for the SWAT model

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  1. Model description for calculating the source term of the Angra 1 environmental control system

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Amaral Neto, J.D.; Salles, M.R.

    1988-01-01

    This work presents the model used to evaluate the source term released from the Angra 1 Nuclear Power Plant in case of an accident. An application of the model to the case of a fuel assembly drop accident inside the fuel handling building during reactor refueling is then presented. (author) [pt

  2. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
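
    The surrogate idea is easy to sketch: train a cheap regressor to mimic an expensive simulator, then query the regressor inside the optimizer. Below, an analytic function stands in for the DNAPL transport simulator, and a scikit-learn SVR (one of the two surrogates compared) is fitted to sampled runs; the KELM variant is omitted since it has no stock scikit-learn implementation.

```python
# Sketch: SVR surrogate for an "expensive" simulator. The simulator here
# is a stand-in analytic function, not a real groundwater model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)

def simulator(params):
    """Placeholder for a costly groundwater transport run."""
    x, y = params[:, 0], params[:, 1]
    return np.sin(x) * np.exp(-0.1 * y) + 0.05 * x * y

train_x = rng.uniform(0, 5, size=(200, 2))      # sampled source parameters
train_y = simulator(train_x)                    # "observed" system response

surrogate = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
surrogate.fit(train_x, train_y)

test_x = rng.uniform(0, 5, size=(50, 2))
err = np.abs(surrogate.predict(test_x) - simulator(test_x))
print("mean surrogate error:", err.mean().round(3))
```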

  3. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  4. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Science.gov (United States)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
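
    One ingredient of such a model, the arbitrarily broken power-law source count distribution, can be sampled by inverse-CDF methods. The break flux and slopes below are illustrative placeholders, not the paper's fitted values.

```python
# Sampling source fluxes from a two-slope (broken) power-law dN/dS,
# continuous at the break; parameters are illustrative.
import numpy as np

def sample_broken_powerlaw(n, s_min, s_break, s_max, a1, a2, rng):
    """Inverse-CDF sampling of dN/dS ~ S^-a1 (S < break), S^-a2 (above)."""
    # Segment integrals of the (continuity-matched) density, a != 1 assumed
    seg1 = (s_break**(1 - a1) - s_min**(1 - a1)) / (1 - a1)
    seg2 = s_break**(a2 - a1) * (s_max**(1 - a2) - s_break**(1 - a2)) / (1 - a2)
    p1 = seg1 / (seg1 + seg2)                   # probability of lower segment
    u = rng.uniform(size=n)                     # position within a segment
    low = rng.uniform(size=n) < p1              # segment choice
    s = np.empty(n)
    s[low] = (s_min**(1 - a1)
              + u[low] * (s_break**(1 - a1) - s_min**(1 - a1)))**(1 / (1 - a1))
    s[~low] = (s_break**(1 - a2)
               + u[~low] * (s_max**(1 - a2) - s_break**(1 - a2)))**(1 / (1 - a2))
    return s

rng = np.random.default_rng(4)
fluxes = sample_broken_powerlaw(10000, 1e-3, 1.0, 100.0, 1.6, 2.5, rng)
print("median flux (Jy):", np.median(fluxes))
```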

  5. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.

    2013-12-24

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving the elastodynamic equations while imposing the slip velocity of a kinematic source model as a boundary condition on the fault plane. This is achieved using a 3-D finite difference method in which the rupture kinematics are modelled with the staggered-grid-split-node fault representation method of Dalguer & Day. Dynamic parameters are then estimated from the calculated stress-slip curves and averaged over the fault plane. Our results indicate that fracture energy, static, dynamic and apparent stress drops tend to increase with magnitude. The epistemic uncertainty due to uncertainties in kinematic inversions remains small (ϕ ∼ 0.1 in log10 units), showing that kinematic source models provide robust information to analyse the distribution of average dynamic source parameters. The proposed scaling relations may be useful to constrain friction law parameters in spontaneous dynamic rupture calculations for earthquake source studies, and physics-based near-source ground-motion prediction for seismic hazard and risk mitigation.

  6. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
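
    The AIC selection step reduces to comparing penalized misfits. A minimal Gaussian-error sketch follows, with invented residual sums of squares and parameter counts standing in for real W-phase inversion output.

```python
# Sketch: choose between single- and double-source fits with AIC.
# RSS values and parameter counts are invented placeholders.
import numpy as np

def aic(n_obs, rss, n_params):
    """Gaussian-error AIC: 2k + n ln(RSS/n)."""
    return 2 * n_params + n_obs * np.log(rss / n_obs)

n_obs = 600                                      # e.g. waveform samples
aic_single = aic(n_obs, rss=41.8, n_params=6)    # one point source
aic_double = aic(n_obs, rss=33.5, n_params=12)   # two point sources

best = "double" if aic_double < aic_single else "single"
print(f"single: {aic_single:.1f}, double: {aic_double:.1f} -> prefer {best}")
```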

  7. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and vigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
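
    A toy one-tracer, two-end-member version of the BMC idea can be written directly in NumPy. The δ18O values, uncertainties and acceptance rule below are invented for illustration and are far simpler than the published model.

```python
# Toy Bayesian Monte Carlo mixing: draw candidate fractions and end-member
# values, keep draws consistent with the measured mixture (ABC-style).
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
# End-member d18O (permil): mean and spread (the uncertainty we propagate)
snow = rng.normal(-22.0, 0.8, n)
ice = rng.normal(-17.5, 0.6, n)
f = rng.uniform(0.0, 1.0, n)                    # flat prior on snow fraction

mixed = f * snow + (1 - f) * ice                # predicted mixture
observed, sigma = -19.0, 0.3                    # measured bulk sample
keep = np.abs(mixed - observed) < 2 * sigma     # crude acceptance window

post = f[keep]                                  # posterior draws of fraction
print(f"snow fraction: {post.mean():.2f} +/- {post.std():.2f}")
```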

  8. The continental source of glyoxal estimated by the synergistic use of spaceborne measurements and inverse modelling

    Directory of Open Access Journals (Sweden)

    A. Richter

    2009-11-01

    Full Text Available Tropospheric glyoxal and formaldehyde columns retrieved from the SCIAMACHY satellite instrument in 2005 are used with the IMAGESv2 global chemistry-transport model and its adjoint in a two-compound inversion scheme designed to estimate the continental source of glyoxal. The formaldehyde observations provide an important constraint on the production of glyoxal from isoprene in the model, since the degradation of isoprene constitutes an important source of both glyoxal and formaldehyde. Current modelling studies largely underestimate the observed glyoxal satellite columns, pointing to the existence of an additional land source of glyoxal of biogenic origin. We include an extra glyoxal source in the model and explore its possible distribution and magnitude through two inversion experiments. In the first case, the additional source is represented as a direct glyoxal emission, and in the second, as secondary formation through the oxidation of an unspecified glyoxal precursor. Besides this extra source, the inversion scheme optimizes the primary glyoxal and formaldehyde emissions, as well as their secondary production from other identified non-methane volatile organic precursors of anthropogenic, pyrogenic and biogenic origin.

    In the first inversion experiment, the additional direct source, estimated at 36 Tg/yr, represents 38% of the global continental source, whereas the contribution of isoprene is equally important (30%), the remainder being accounted for by anthropogenic (20%) and pyrogenic fluxes. The inversion succeeds in reducing the model's underestimation of the glyoxal columns, but it leads to a severe overestimation of glyoxal surface concentrations in comparison with in situ measurements. In the second scenario, the inferred total global continental glyoxal source is estimated at 108 Tg/yr, almost two times higher than the global a priori source. The extra secondary source is the largest contribution to the global glyoxal

  9. Effect of winglets on a first-generation jet transport wing. 6: Stability characteristics for a full-span model at subsonic speeds. [conducted in Langley 8 foot transonic pressure tunnel

    Science.gov (United States)

    Flechner, S. G.

    1979-01-01

    A wind tunnel investigation to identify changes in the stability and control characteristics of a model KC-135A due to the addition of winglets is presented. Static longitudinal and lateral-directional aerodynamic characteristics were determined for the model with and without winglets. Variations in the aerodynamic characteristics at various Mach numbers, angles of attack, and angles of sideslip are discussed. The effects of the winglets on the drag and lift coefficients are evaluated, and the low-speed and high-speed characteristics of the model are reported.

  10. Modelling surface energy fluxes over a Dehesa ecosystem using a two-source energy balance model.

    Science.gov (United States)

    Andreu, Ana; Kustas, William. P.; Anderson, Martha C.; Carrara, Arnaud; Patrocinio Gonzalez-Dugo, Maria

    2013-04-01

    The Dehesa is the most widespread agroforestry land-use system in Europe, covering more than 3 million hectares in the Iberian Peninsula and Greece (Grove and Rackham, 2001; Papanastasis, 2004). It is an agro-silvo-pastoral ecosystem consisting of widely spaced oak trees (mostly Quercus ilex L.) combined with crops, pasture and Mediterranean shrubs, and it is recognized as an example of sustainable land use and for its importance in the rural economy (Diaz et al., 1997; Plieninger and Wilbrand, 2001). The ecosystem is influenced by a Mediterranean climate, with recurrent and severe droughts. Over the last decades the Dehesa has faced multiple environmental threats derived from intensive agricultural use and socio-economic changes, which have caused environmental degradation of the area, namely reduction in tree density and stocking rates, changes in soil properties and hydrological processes, and an increase in soil erosion (Coelho et al. 2004; Schnabel and Ferreira, 2004; Montoya 1998; Pulido and Díaz, 2005). Understanding the hydrological, atmospheric and physiological processes that affect the functioning of the ecosystem will improve the management and conservation of the Dehesa. One of the key metrics in assessing ecosystem health, particularly in this water-limited environment, is the capability of monitoring evapotranspiration (ET). Making large-area assessments requires the use of remote sensing. Thermal-based energy balance techniques that distinguish soil/substrate and vegetation contributions to the radiative temperature and radiation/turbulent fluxes have proven to be reliable in such semi-arid, sparse-canopy-cover landscapes. In particular, the two-source energy balance (TSEB) model of Norman et al. (1995) and Kustas and Norman (1999) has been shown to be robust for a wide range of partially vegetated landscapes. The TSEB formulation is evaluated at a flux tower site located in central Spain (Majadas del Tietar, Caceres). Its application in this environment is

  11. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Institute of Scientific and Technical Information of China (English)

    Hanwen; ZHANG; Xinli; MOU; Hui; XIE; Hong; LU; Xingyun; YAN

    2014-01-01

    Through a series of explorations based on the pressure-state-response (PSR) framework model, and taking into account the specific agro-environmental issues present in Chongqing, we build an agricultural non-point source pollution evaluation index system suitable for Chongqing. The system covers three major categories - agricultural system pressure, agro-environmental state and human response - and comprises 3 criterion-level indicators and 19 individual indicators. The analysis shows that pressure and response tend to increase or decrease roughly linearly, whereas the state and composite indices show large and similar fluctuations, mainly reflecting the interplay between the elimination of pressures and the growing impact of agricultural non-point source pollution.

  12. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    Science.gov (United States)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as the slip distribution and rupture history permits estimation of the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparing the signals predicted using both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  13. Neutron activation analysis: Modelling studies to improve the neutron flux of Americium-Beryllium source

    Energy Technology Data Exchange (ETDEWEB)

    Didi, Abdessamad; Dadouch, Ahmed; Tajmouati, Jaouad; Bekkouri, Hassane [Advanced Technology and Integration System, Dept. of Physics, Faculty of Science Dhar Mehraz, University Sidi Mohamed Ben Abdellah, Fez (Morocco); Jai, Otman [Laboratory of Radiation and Nuclear Systems, Dept. of Physics, Faculty of Sciences, Tetouan (Morocco)

    2017-06-15

    Americium–beryllium (Am-Be) is an (α, n) neutron-emitting source used in various research fields such as chemistry, physics, geology, archaeology, medicine, and environmental monitoring, as well as in the forensic sciences. It is a mobile neutron source (20 Ci activity), yielding a small, water-moderated thermal neutron flux. The aim of this study is to develop a model to increase the thermal neutron flux of a source such as Am-Be. The study achieved multiple advantageous results: primarily, it will help us perform neutron activation analysis; next, it will give us the opportunity to produce radio-elements with short half-lives. Single-source and multisource (5 sources) Am-Be experiments were performed within an irradiation facility with a paraffin moderator. The resulting models increase the thermal neutron flux compared to the traditional method with a water moderator.

  14. An Equivalent Source Method for Modelling the Lithospheric Magnetic Field Using Satellite and Airborne Magnetic Data

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three component vector field... The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model... Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution, investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available...

  15. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger

    2004-01-01

    Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels... considering reverberation time. However, for the three other parameters evaluated (sound pressure level, clarity index and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity...

  16. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE

    OpenAIRE

    Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel

    2004-01-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the out...

  17. Modeled Sources, Transport, and Accumulation of Dissolved Solids in Water Resources of the Southwestern United States.

    Science.gov (United States)

    Anning, David W

    2011-10-01

    Information on important source areas for dissolved solids in streams of the southwestern United States, the relative share of deliveries of dissolved solids to streams from natural and human sources, and the potential for salt accumulation in soil or groundwater was developed using a SPAtially Referenced Regressions On Watershed attributes model. Predicted area-normalized reach-catchment delivery rates of dissolved solids to streams ranged from Salton Sea accounting unit.

  18. Time and resource limits on working memory: cross-age consistency in counting span performance.

    Science.gov (United States)

    Ransdell, Sarah; Hecht, Steven

    2003-12-01

    This longitudinal study separated resource demand effects from those of the retention interval in a counting span task among 100 children tested in grade 2 and again in grades 3 and 4. A last-card-large counting span condition had a memory load equivalent to a last-card-small condition, but the last-card-large condition required holding the count over a longer retention interval. In all three waves of assessment, the last-card-large condition was found to be less accurate than the last-card-small. A model predicting reading comprehension showed that age was a significant predictor when entered first, accounting for 26% of the variance, but counting span accounted for a further 22% of the variance. Span at Wave 1 accounted for significant unique variance at Wave 2 and at Wave 3. Results were similar for math calculation, with age accounting for 31% of the variance and counting span accounting for a further 34% of the variance. Span at Wave 1 explained unique variance in math at Wave 2 and at Wave 3.
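
    The hierarchical-regression logic (age entered first, counting span second) can be illustrated with ordinary least squares on synthetic scores; the coefficients below are made up and do not reproduce the study's data.

```python
# Sketch: incremental variance explained by adding span after age.
# All scores are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n = 100
age = rng.normal(9.0, 1.0, n)                   # years, synthetic
span = 0.5 * age + rng.normal(0, 1.0, n)        # correlated with age
reading = 0.4 * age + 0.6 * span + rng.normal(0, 1.0, n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_age = r_squared(age[:, None], reading)
r2_both = r_squared(np.column_stack([age, span]), reading)
print(f"age alone: {r2_age:.2f}; span adds: {r2_both - r2_age:.2f}")
```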

  19. Quantification of source-term profiles from near-field geochemical models

    International Nuclear Information System (INIS)

    McKinley, I.G.

    1985-01-01

    A geochemical model of the near-field is described which quantitatively treats the processes of engineered barrier degradation, buffering of aqueous chemistry by solid phases, nuclide solubilization and transport through the near-field and release to the far-field. The radionuclide source-terms derived from this model are compared with those from a simpler model used for repository safety analysis. 10 refs., 2 figs., 2 tabs

  20. Indoor Positioning Using Nonparametric Belief Propagation Based on Spanning Trees

    Directory of Open Access Journals (Sweden)

    Savic Vladimir

    2010-01-01

    Full Text Available Nonparametric belief propagation (NBP) is one of the best-known methods for cooperative localization in sensor networks. It is capable of providing information about location estimation with appropriate uncertainty and of accommodating non-Gaussian distance measurement errors. However, the accuracy of NBP is questionable in loopy networks. Therefore, in this paper, we propose a novel approach, NBP based on spanning trees (NBP-ST), created by the breadth-first search (BFS) method. In addition, we propose a reliable indoor model based on measurements obtained in our lab. According to our simulation results, NBP-ST performs better than NBP in terms of accuracy and communication cost in networks with high connectivity (i.e., highly loopy networks). Furthermore, the computational and communication costs are nearly constant with respect to the transmission radius. However, the drawbacks of the proposed method are a slightly higher computational cost and poor performance in sparsely connected networks.
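
    Constructing the BFS spanning tree that NBP-ST runs message passing on is straightforward; a minimal sketch on a made-up loopy sensor graph follows.

```python
# Minimal breadth-first-search spanning tree of the kind NBP-ST builds
# before message passing; the sensor graph below is made up.
from collections import deque

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
         3: [1, 4], 4: [2, 3]}                  # loopy network

def bfs_spanning_tree(graph, root):
    """Return tree edges; every loop reachable from root is broken."""
    visited, tree, queue = {root}, [], deque([root])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in visited:
                visited.add(v)
                tree.append((u, v))
                queue.append(v)
    return tree

print(bfs_spanning_tree(graph, root=0))  # [(0, 1), (0, 2), (1, 3), (2, 4)]
```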

  1. Certification of model spectrometric alpha sources (MSAS) and problems of the MSAS system improvement

    International Nuclear Information System (INIS)

    Belyatskij, A.F.; Gejdel'man, A.M.; Egorov, Yu.S.; Nedovesov, V.G.; Chechev, V.P.

    1984-01-01

    Results of the certification of industrially produced standard spectrometric alpha sources (SSAS) are presented. Methods for certification by the main radiation-physical parameters - the intrinsic half-width of the α-lines, the activity of the radionuclides in the source, the energies of the emitted α-particles and the relative intensities of the different-energy α-particle groups - are analysed. As a route to improving the SSAS system, a set of model measures of α-radiation is considered, together with a collection of interlinked data units on the physical, engineering and design characteristics of SSAS, on the methods for obtaining and determining them, and on the instruments used.

  2. Assessing the contribution of binaural cues for apparent source width perception via a functional model

    DEFF Research Database (Denmark)

    Käsbach, Johannes; Hahmann, Manuel; May, Tobias

    2016-01-01

    In echoic conditions, sound sources are not perceived as point sources but appear to be expanded. The expansion in the horizontal dimension is referred to as apparent source width (ASW). To elicit this perception, the auditory system has access to fluctuations of binaural cues, the interaural time...... a statistical representation of ITDs and ILDs based on percentiles integrated over time and frequency. The model’s performance was evaluated against psychoacoustic data obtained with noise, speech and music signals in loudspeakerbased experiments. A robust model prediction of ASW was achieved using a cross...

  3. Rate equation modelling of the optically pumped spin-exchange source

    International Nuclear Information System (INIS)

    Stenger, J.; Rith, K.

    1995-01-01

    Sources for spin polarized hydrogen or deuterium, polarized via spin-exchange of a laser optically pumped alkali metal, can be modelled by rate equations. The rate equations for this type of source, operated either with hydrogen or deuterium, are given explicitly with the intention of providing a useful tool for further source optimization and understanding. Laser optical pumping of alkali metal, spin-exchange collisions of hydrogen or deuterium atoms with each other and with alkali metal atoms are included, as well as depolarization due to flow and wall collisions. (orig.)

  4. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pacsal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling
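
    MixSIAR itself is a hierarchical Bayesian package written in R; as a minimal, non-Bayesian illustration of the underlying unmixing idea, the sketch below recovers source proportions from tracer signatures with a non-negative least-squares solve. All tracer values and source labels are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical mean tracer signatures (rows: tracers, columns: sources).
sources = np.array([[12.0,  3.5, 30.0],
                    [ 0.8,  2.1,  0.3],
                    [55.0, 40.0, 75.0]])
mixture = np.array([20.0, 1.2, 58.0])   # tracer signature of the sediment mix

# Append a row of ones to push the proportions toward summing to one.
A = np.vstack([sources, np.ones(sources.shape[1])])
b = np.append(mixture, 1.0)
props, _ = nnls(A, b)
props /= props.sum()                    # renormalize onto the simplex
print(dict(zip(["cultivated", "pasture", "channel bank"], props.round(3))))
```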

  5. SPARROW models used to understand nutrient sources in the Mississippi/Atchafalaya River Basin

    Science.gov (United States)

    Robertson, Dale M.; Saad, David A.

    2013-01-01

    Nitrogen (N) and phosphorus (P) loading from the Mississippi/Atchafalaya River Basin (MARB) has been linked to hypoxia in the Gulf of Mexico. To describe where and from what sources those loads originate, SPAtially Referenced Regression On Watershed attributes (SPARROW) models were constructed for the MARB using geospatial datasets for 2002, including inputs from wastewater treatment plants (WWTPs), and calibration sites throughout the MARB. Previous studies found that highest N and P yields were from the north-central part of the MARB (Corn Belt). Based on the MARB SPARROW models, highest N yields were still from the Corn Belt but centered over Iowa and Indiana, and highest P yields were widely distributed throughout the center of the MARB. Similar to that found in other studies, agricultural inputs were found to be the largest N and P sources throughout most of the MARB: farm fertilizers were the largest N source, whereas farm fertilizers, manure, and urban inputs were dominant P sources. The MARB models enable individual N and P sources to be defined at scales ranging from SPARROW catchments (∼50 km2) to the entire area of the MARB. Inputs of P from WWTPs and urban areas were more important than found in most other studies. Information from this study will help to reduce nutrient loading from the MARB by providing managers with a description of where each of the sources of N and P are most important, thus providing a basis for prioritizing management actions and ultimately reducing the extent of Gulf hypoxia.

  6. The European legal framework regarding e-commerce

    NARCIS (Netherlands)

    Schaub, M.Y.

    2004-01-01

    The year 2000 is a memorable year in the history of e-commerce. This is the year of the so-called 'dot.com shake-out'. The year 2000 is also the year the European Union issued its e-commerce directive. The directive means to regulate but also facilitate e-commerce in the internal market, by laying

  7. SPAN: A Network Providing Integrated, End-to-End, Sensor-to-Database Solutions for Environmental Sciences

    Science.gov (United States)

    Benzel, T.; Cho, Y. H.; Deschon, A.; Gullapalli, S.; Silva, F.

    2009-12-01

    In recent years, advances in sensor network technology have shown great promise to revolutionize environmental data collection. Still, widespread adoption of these systems by domain experts has been lacking, and they have remained the purview of the engineers who design them. While there are currently many data logging options for basic data collection in the field, scientists are often required to visit the deployment sites to retrieve their data and manually import it into spreadsheets. Some advanced commercial software systems do allow scientists to collect data remotely, but most of these systems only allow point-to-point access and require proprietary hardware. Furthermore, these commercial solutions preclude the use of sensors from other manufacturers or integration with internet-based database repositories and compute engines. Therefore, scientists often must download and manually reformat their data before uploading it to the repositories if they wish to share their data. We present an open-source, low-cost, extensible, turnkey solution called the Sensor Processing and Acquisition Network (SPAN) which provides a robust and flexible sensor network service. At the deployment site, SPAN leverages low-power generic embedded processors to integrate a variety of commercially available sensor hardware into the network of environmental observation systems. By bringing intelligence close to the sensed phenomena, we can remotely control configuration and re-use, establish rules to trigger sensor activity, manage power requirements, and control the two-way flow of sensed data as well as control information to the sensors. Key features of our design include (1) adoption of a hardware-agnostic architecture: our solutions are compatible with several programmable platforms, sensor systems, communication devices and protocols; (2) information standardization: our system supports several popular communication protocols and data formats; and (3) extensible data support: our

  8. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing capability of computers, it has become possible to treat a patient more precisely. However, longer simulation times are needed to reduce the uncertainty of the results. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual sources on the 201 channels and compared measurements with simulations using virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than those with the original source, and there was no statistically significant difference in the simulated results.
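
    A minimal sketch of the virtual-source idea, assuming the phase-space information for one channel has been reduced to simple energy and radial-position histograms: new particles are drawn directly from those stored distributions instead of being re-simulated through the source capsule and collimators. All histogram values here are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical phase-space histograms for one channel: the two 60Co lines
# and a radial profile over the scoring plane (placeholder numbers).
energy_bins = np.array([1.17, 1.33])                  # MeV
energy_prob = np.array([0.5, 0.5])
radius_edges = np.linspace(0.0, 0.4, 9)               # cm
radius_prob = np.diff(radius_edges**2)                # ~uniform over the disc
radius_prob /= radius_prob.sum()

def sample_virtual_source(n):
    """Draw n photons from the stored phase-space distributions."""
    energy = rng.choice(energy_bins, size=n, p=energy_prob)
    shell = rng.choice(len(radius_prob), size=n, p=radius_prob)
    radius = rng.uniform(radius_edges[shell], radius_edges[shell + 1])
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return energy, radius * np.cos(phi), radius * np.sin(phi)

E, x, y = sample_virtual_source(100_000)
print(E.mean(), x.std(), y.std())
```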

  9. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    International Nuclear Information System (INIS)

    Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai

    2016-01-01

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing capability of computers, it has become possible to treat a patient more precisely. However, longer simulation times are needed to reduce the uncertainty of the results. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual sources on the 201 channels and compared measurements with simulations using virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than those with the original source, and there was no statistically significant difference in the simulated results

  10. On vortex-airfoil interaction noise including span-end effects, with application to open-rotor aeroacoustics

    Science.gov (United States)

    Roger, Michel; Schram, Christophe; Moreau, Stéphane

    2014-01-01

    A linear analytical model is developed for the chopping of a cylindrical vortex by a flat-plate airfoil, with or without a span-end effect. The major interest is the contribution of the tip-vortex produced by an upstream rotating blade in the rotor-rotor interaction noise mechanism of counter-rotating open rotors. Therefore the interaction is primarily addressed in an annular strip of limited spanwise extent bounding the impinged blade segment, and the unwrapped strip is described in Cartesian coordinates. The study also addresses the interaction of a propeller wake with a downstream wing or empennage. Cylindrical vortices are considered, for which the velocity field is expanded in two-dimensional gusts in the reference frame of the airfoil. For each gust the response of the airfoil is derived, first ignoring the effect of the span end, assimilating the airfoil to a rigid flat plate, with or without sweep. The corresponding unsteady lift acts as a distribution of acoustic dipoles, and the radiated sound is obtained from a radiation integral over the actual extent of the airfoil. In the case of tip-vortex interaction noise in CRORs the acoustic signature is determined for vortex trajectories passing beyond, exactly at and below the tip radius of the impinged blade segment, in a reference frame attached to the segment. In a second step the same problem is readdressed accounting for the effect of span end on the aerodynamic response of a blade tip. This is achieved through a composite two-directional Schwarzschild's technique. The modifications of the distributed unsteady lift and of the radiated sound are discussed. The chained source and radiation models provide physical insight into the mechanism of vortex chopping by a blade tip in free field. They allow assessing the acoustic benefit of clipping the rear rotor in a counter-rotating open-rotor architecture.

  11. Assessment of source-receptor relationships of aerosols: An integrated forward and backward modeling approach

    Science.gov (United States)

    Kulkarni, Sarika

    This dissertation presents a scientific framework that facilitates enhanced understanding of aerosol source-receptor (S/R) relationships and their impact on local, regional and global air quality by employing a complementary suite of modeling methods. The receptor-oriented Positive Matrix Factorization (PMF) technique is combined with the Potential Source Contribution Function (PSCF), a trajectory ensemble model, to characterize sources influencing the aerosols measured at Gosan, Korea during spring 2001. It is found that the episodic dust events originating from desert regions in East Asia (EA), which mix with pollution along the transit path, have a significant and pervasive impact on the air quality of Gosan. The intercontinental and hemispheric transport of aerosols is analyzed by a series of emission perturbation simulations with the Sulfur Transport and dEposition Model (STEM), a regional scale Chemical Transport Model (CTM), evaluated with observations from the 2008 NASA ARCTAS field campaign. This modeling study shows that pollution transport from regions outside North America (NA) contributed ~30% and ~20% to NA sulfate and BC surface concentrations, respectively. This study also identifies aerosols transported from the Europe, NA and EA regions as significant contributors to springtime Arctic sulfate and BC. Trajectory ensemble models are combined with source-region-tagged tracer model output to identify the source regions and possible instances of quasi-Lagrangian sampled air masses during the 2006 NASA INTEX-B field campaign. The impact of specific emission sectors from Asia during the INTEX-B period is studied with the STEM model, identifying the residential sector as a potential target for emission reduction to combat global warming. The output from the STEM model, constrained with satellite-derived aerosol optical depth and ground-based measurements of single scattering albedo via an optimal interpolation assimilation scheme, is combined with the PMF technique to
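
    PMF, used above for the Gosan analysis, is closely related to non-negative matrix factorization; PMF additionally weights residuals by measurement uncertainty. A rough sketch of the unweighted analogue on synthetic data, with invented dimensions:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
true_profiles = rng.random((3, 8))         # 3 sources x 8 chemical species
true_strengths = rng.random((50, 3))       # 50 ambient samples
X = true_strengths @ true_profiles + 0.01 * rng.random((50, 8))

# X ≈ G F with G, F >= 0: G holds per-sample source contributions,
# F the source profiles ("fingerprints").
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)
F = model.components_
print(G.shape, F.shape, f"residual = {model.reconstruction_err_:.3f}")
```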

  12. Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide

    International Nuclear Information System (INIS)

    March-Leuba, J.

    1995-08-01

    A mathematical model is designed that simulates the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are being designed, updated, and employed; its primary design goal, to aid in the development of safety and control features. During the simulations the model is also found to aid in making design decisions for thermal-hydraulic systems. Model components, empirical correlations, and model parameters are discussed; sample procedures are also given. Modifications are cited, and significant development and application efforts are noted focusing on examination of instrumentation required during and after accidents to ensure adequate monitoring during transient conditions

  13. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    Science.gov (United States)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity for modeling near-field photon migration in low-albedo media. In this model, the collimated light of the standard DA is approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further introduced into the image reconstruction of a Laminar Optical Tomography system.
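
    A minimal sketch of the VS superposition, assuming an infinite homogeneous medium so that each virtual source contributes the standard DA point-source Green's function φ(r) = S·exp(−μ_eff·r)/(4πDr). The source depths and weights below are placeholders, not the paper's fitted 2VS-DA parameters.

```python
import numpy as np

mu_a, mu_s_prime = 0.1, 10.0            # cm^-1, illustrative optical properties
D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient
mu_eff = np.sqrt(mu_a / D)              # effective attenuation coefficient

# Two virtual isotropic point sources along the incident (z) axis; depths
# and weights are the tunable VS parameters (placeholder values here).
vs_depth = np.array([0.05, 0.3])        # cm
vs_weight = np.array([0.7, 0.3])

def fluence(x, z):
    """DA fluence at lateral offset x and depth z from the superposed VS."""
    r = np.sqrt(x**2 + (z - vs_depth[:, None])**2)  # distances to each VS
    return np.sum(vs_weight[:, None] * np.exp(-mu_eff * r)
                  / (4.0 * np.pi * D * r), axis=0)

print(fluence(np.array([0.1, 0.5, 1.0]), z=0.2))    # near-field sample points
```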

  14. Point, surface and volumetric heat sources in the thermal modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing technique suitable for producing high-precision metal parts. However, distortions and residual stresses arise within products during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the build process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of the SLM process parameters, which requires reliable thermal modelling of the SLM process. Consequently, a key question arises: how should the laser source be represented appropriately? Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, first a semi-analytical thermal modelling approach is described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history predicted by the model. The present work provides guidelines on the appropriate representation of the laser source in thermal modelling of the SLM process.
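
    The paper's semi-analytical model is not reproduced here, but the classical Rosenthal solution for a moving point heat source is the usual analytical baseline that the point-source option generalizes; a sketch with illustrative, steel-like parameters:

```python
import numpy as np

# Classical Rosenthal thick-plate solution for a moving point heat source;
# a common analytical baseline for SLM thermal models. Values illustrative.
k, alpha = 20.0, 5e-6          # W/(m K), m^2/s  (steel-like properties)
Q, v, T0 = 200.0, 0.5, 293.0   # absorbed power (W), scan speed (m/s), K

def rosenthal(xi, y, z):
    """Steady temperature in the frame moving with the source.
    xi is the coordinate along the scan direction (xi > 0 ahead of the beam)."""
    R = np.sqrt(xi**2 + y**2 + z**2)
    return T0 + Q / (2.0 * np.pi * k * R) * np.exp(-v * (xi + R) / (2.0 * alpha))

xi = np.array([-200e-6, -50e-6, 20e-6])   # m, behind and ahead of the beam
print(rosenthal(xi, y=0.0, z=10e-6))      # trailing tail stays hot far longer
```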

  15. Structural covariance networks across the life span, from 6 to 94 years of age.

    Science.gov (United States)

    DuPre, Elizabeth; Spreng, R Nathan

    2017-10-01

    Structural covariance examines covariation of gray matter morphology between brain regions and across individuals. Despite significant interest in the influence of age on structural covariance patterns, no study to date has provided a complete life span perspective, bridging childhood with early, middle, and late adulthood, on the development of structural covariance networks. Here, we investigate the life span trajectories of structural covariance in six canonical neurocognitive networks: default, dorsal attention, frontoparietal control, somatomotor, ventral attention, and visual. By combining data from five open-access data sources, we examine the structural covariance trajectories of these networks from 6 to 94 years of age in a sample of 1,580 participants. Using partial least squares, we show that structural covariance patterns across the life span exhibit two significant, age-dependent trends. The first trend is a stable pattern whose integrity declines over the life span. The second trend is an inverted U that differentiates young adulthood from other age groups. Hub regions, including posterior cingulate cortex and anterior insula, appear particularly influential in the expression of this second age-dependent trend. Overall, our results suggest that structural covariance provides a reliable definition of neurocognitive networks across the life span and reveal both shared and network-specific trajectories.

  16. Splitting of beams caused by multiple connections along the beam span

    NARCIS (Netherlands)

    Leijten, A.J.M.; Salenikovich, A.

    2014-01-01

    In the past, splitting of beams caused by connections perpendicular to the grain has drawn attention. Models have mainly been developed considering a single mid-span connection. Some semi-empirical models assume the splitting capacity to be proportional to the number of connections when sufficiently

  17. Strength analysis of copper gas pipeline span

    OpenAIRE

    Ianevski, Philipp

    2016-01-01

    The purpose of the study was to analyze the stresses in a gas pipeline span. The analysis followed the approach used for piping systems located inside buildings. The strength of the gas pipeline is calculated using information on the thickness of the pipe walls and by choosing a suitable material and inner and outer diameters for the pipeline. Data for this thesis were collected through various internet sources and different books. From the study and research, the final results were reached and calculations were ...

  18. How long will my mouse live? Machine learning approaches for prediction of mouse life span.

    Science.gov (United States)

    Swindell, William R; Harper, James M; Miller, Richard A

    2008-09-01

    Prediction of individual life span based on characteristics evaluated at middle-age represents a challenging objective for aging research. In this study, we used machine learning algorithms to construct models that predict life span in a stock of genetically heterogeneous mice. Life-span prediction accuracy of 22 algorithms was evaluated using a cross-validation approach, in which models were trained and tested with distinct subsets of data. Using a combination of body weight and T-cell subset measures evaluated before 2 years of age, we show that the life-span quartile to which an individual mouse belongs can be predicted with an accuracy of 35.3% (+/-0.10%). This result provides a new benchmark for the development of life-span-predictive models, but improvement can be expected through identification of new predictor variables and development of computational approaches. Future work in this direction can provide tools for aging research and will shed light on associations between phenotypic traits and longevity.
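
    A sketch of the cross-validated quartile-prediction setup on synthetic data, using one illustrative classifier rather than the paper's 22 algorithms; the predictor names mirror those mentioned above, but the numbers are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
# Columns stand in for mid-life body weight and two T-cell subset measures.
X = rng.normal(size=(n, 3))
lifespan = 800 + 60 * X[:, 0] - 40 * X[:, 1] + rng.normal(0, 120, n)  # days
y = np.digitize(lifespan, np.quantile(lifespan, [0.25, 0.5, 0.75]))  # 0..3

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # train/test on distinct subsets
print(f"life-span quartile accuracy: {scores.mean():.1%} (chance = 25%)")
```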

  19. A photovoltaic source I/U model suitable for hardware in the loop application

    Directory of Open Access Journals (Sweden)

    Stala Robert

    2017-12-01

    Full Text Available This paper presents a novel, low-complexity method of simulating PV source characteristics suitable for real-time modeling and hardware implementation. Applying a suitable model of the PV source, as well as models of all the other PV system components, in real-time hardware gives a safe, fast and low-cost method of testing PV systems. The paper demonstrates the concept of the PV array model and the hardware implementation in FPGAs of a system which combines two PV arrays. The obtained results confirm that the proposed model is of low complexity and is suitable for hardware-in-the-loop (HIL) tests of complex PV system control, with various arrays operating under different conditions.
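
    A minimal sketch of a low-complexity PV I/U characteristic of the kind that suits a lookup-table hardware implementation, assuming an ideal single-diode model with series and shunt resistances neglected; all parameter values are illustrative, not the paper's.

```python
import numpy as np

# Illustrative parameters for a 60-cell module string at 25 °C.
I_ph, I_0, n_ideal, Ns = 8.0, 1e-9, 1.3, 60    # A, A, ideality factor, cells
V_t = 0.02585                                   # thermal voltage (V)

def pv_current(v):
    """Ideal single-diode I(V): photocurrent minus the diode term."""
    return I_ph - I_0 * (np.exp(v / (Ns * n_ideal * V_t)) - 1.0)

v = np.linspace(0.0, 50.0, 2000)     # pre-computed I/U lookup table
i = np.clip(pv_current(v), 0.0, None)
p = v * i
v_oc = v[np.argmax(i <= 0.0)]        # first voltage where current hits zero
print(f"V_oc ≈ {v_oc:.1f} V, MPP ≈ {p.max():.0f} W at {v[p.argmax()]:.1f} V")
```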

  20. Modeling of Acoustic Field for a Parametric Focusing Source Using the Spheroidal Beam Equation

    Directory of Open Access Journals (Sweden)

    Yu Lili

    2015-09-01

    Full Text Available A theoretical model of the acoustic field for a parametric focusing source on a concave spherical surface is proposed. In this model, the source boundary conditions of the Spheroidal Beam Equation (SBE) for difference-frequency wave excitation were studied. Propagation curves and beam patterns for the difference-frequency component of the acoustic field are compared with those obtained for the Khokhlov-Zabolotskaya-Kuznetsov (KZK) model. The results demonstrate that the focused parametric SBE model is valid for large aperture angles in a strongly focused acoustic field. It is also found that the focused parametric SBE model exhibits high directivity and good focusing ability as the downshift ratio decreases and the half-aperture angle increases.

  1. Source rock contributions to the Lower Cretaceous heavy oil accumulations in Alberta: a basin modeling study

    Science.gov (United States)

    Berbesi, Luiyin Alejandro; di Primio, Rolando; Anka, Zahie; Horsfield, Brian; Higley, Debra K.

    2012-01-01

    The origin of the immense oil sand deposits in Lower Cretaceous reservoirs of the Western Canada sedimentary basin is still a matter of debate, specifically with respect to the original in-place volumes and contributing source rocks. In this study, the contributions from the main source rocks were addressed using a three-dimensional petroleum system model calibrated to well data. A sensitivity analysis of source rock definition was performed in the case of the two main contributors, which are the Lower Jurassic Gordondale Member of the Fernie Group and the Upper Devonian–Lower Mississippian Exshaw Formation. This sensitivity analysis included variations of assigned total organic carbon and hydrogen index for both source intervals, and in the case of the Exshaw Formation, variations of thickness in areas beneath the Rocky Mountains were also considered. All of the modeled source rocks reached the early or main oil generation stages by 60 Ma, before the onset of the Laramide orogeny. Reconstructed oil accumulations were initially modest because of limited trapping efficiency. This was improved by defining lateral stratigraphic seals within the carrier system. An additional sealing effect by biodegraded oil may have hindered the migration of petroleum in the northern areas, but not to the east of Athabasca. In the latter case, the main trapping controls are dominantly stratigraphic and structural. Our model, based on available data, identifies the Gordondale source rock as the contributor of more than 54% of the oil in the Athabasca and Peace River accumulations, followed by minor amounts from Exshaw (15%) and other Devonian to Lower Jurassic source rocks. The proposed strong contribution of petroleum from the Exshaw Formation source rock to the Athabasca oil sands is only reproduced by assuming 25 m (82 ft) of mature Exshaw in the kitchen areas, with original total organic carbon of 9% or more.

  2. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  3. A numerical model of the mirror electron cyclotron resonance MECR source

    International Nuclear Information System (INIS)

    Hellblom, G.

    1986-03-01

    Results from numerical modeling of a new type of ion source are presented. The plasma in this source is produced by electron cyclotron resonance in a strong conversion magnetic field. Experiments have shown that a well-defined plasma column, extended along the magnetic field (z-axis) can be produced. The electron temperature and the densities of the various plasma particles have been found to have a strong z-position dependence. With the numerical model, a simulation of the evolution of the composition of the plasma as a function of z is made. A qualitative agreement with experimental data can be obtained for certain parameter regimes. (author)

  4. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    1997-01-01

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modeling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  5. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (USA). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  6. X-ray spectral models of Galactic bulge sources - the emission-line factor

    International Nuclear Information System (INIS)

    Vrtilek, S.D.; Swank, J.H.; Kallman, T.R.

    1988-01-01

    Current difficulties in finding unique and physically meaningful models for the X-ray spectra of Galactic bulge sources are exacerbated by the presence of strong, variable emission and absorption features that are not resolved by the instruments observing them. Nine Einstein solid state spectrometer (SSS) observations of five Galactic bulge sources are presented for which relatively high resolution objective grating spectrometer (OGS) data have been published. It is found that in every case the goodness of fit of simple models to SSS data is greatly improved by adding line features identified in the OGS that cannot be resolved by the SSS but nevertheless strongly influence the spectra observed by SSS. 32 references

  7. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems—such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green’s functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point-source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.

  8. Life Span Extension and Neuronal Cell Protection by Drosophila Nicotinamidase

    Science.gov (United States)

    Balan, Vitaly; Miller, Gregory S.; Kaplun, Ludmila; Balan, Karina; Chong, Zhao-Zhong; Li, Faqi; Kaplun, Alexander; VanBerkum, Mark F. A.; Arking, Robert; Freeman, D. Carl; Maiese, Kenneth; Tzivion, Guri

    2008-01-01

    The life span of model organisms can be modulated by environmental conditions that influence cellular metabolism, oxidation, or DNA integrity. The yeast nicotinamidase gene pnc1 was identified as a key transcriptional target and mediator of calorie restriction and stress-induced life span extension. PNC1 is thought to exert its effect on yeast life span by modulating cellular nicotinamide and NAD levels, resulting in increased activity of Sir2 family class III histone deacetylases. In Caenorhabditis elegans, knockdown of a pnc1 homolog was shown recently to shorten the worm life span, whereas its overexpression increased survival under conditions of oxidative stress. The function and regulation of nicotinamidases in higher organisms has not been determined. Here, we report the identification and biochemical characterization of the Drosophila nicotinamidase, D-NAAM, and demonstrate that its overexpression significantly increases median and maximal fly life span. The life span extension was reversed in Sir2 mutant flies, suggesting Sir2 dependence. Testing for physiological effectors of D-NAAM in Drosophila S2 cells, we identified oxidative stress as a primary regulator, both at the transcription level and protein activity. In contrast to the yeast model, stress factors such as high osmolarity and heat shock, calorie restriction, or inhibitors of TOR and phosphatidylinositol 3-kinase pathways do not appear to regulate D-NAAM in S2 cells. Interestingly, the expression of D-NAAM in human neuronal cells conferred protection from oxidative stress-induced cell death in a sirtuin-dependent manner. Together, our findings establish a life-span-extending ability of nicotinamidase in flies and offer a role for nicotinamide-modulating genes in oxidative stress regulated pathways influencing longevity and neuronal cell survival. PMID:18678867

  9. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    Energy Technology Data Exchange (ETDEWEB)

    Y. Chen

    2001-12-19

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes that pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporated into uranyl minerals, the model not only predicts a lower Np source term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach to source term evaluation not only eliminates over-conservatism in the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long

  10. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    International Nuclear Information System (INIS)

    Y. Chen

    2001-01-01

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes that pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporated into uranyl minerals, the model not only predicts a lower Np source term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach to source term evaluation not only eliminates over-conservatism in the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long

  11. A modified receptor model for source apportionment of heavy metal pollution in soil.

    Science.gov (United States)

    Huang, Ying; Deng, Meihua; Wu, Shaofu; Japenga, Jan; Li, Tingqiang; Yang, Xiaoe; He, Zhenli

    2018-07-15

    Source apportionment is a crucial step toward reducing heavy metal pollution in soil. Existing methods are generally based on receptor models. However, overestimation or underestimation occurs when they are applied to heavy metal source apportionment in soil. Therefore, a modified model (PCA-MLRD) was developed, which is based on principal component analysis (PCA) and multiple linear regression with distance (MLRD). This model was applied to a case study conducted in a peri-urban area in southeast China where soils were contaminated by arsenic (As), cadmium (Cd), mercury (Hg) and lead (Pb). Compared with existing models, PCA-MLRD is able to identify specific sources and quantify the extent of influence for each emission. The zinc (Zn)-Pb mine was identified as the most important anthropogenic emission, affecting approximately half of the area for Pb and As accumulation and approximately one third for Cd. Overall, the influence extent of the anthropogenic emissions decreased in the order of mine (3 km) > dyeing mill (2 km) ≈ industrial hub (2 km) > fluorescent factory (1.5 km) > road (0.5 km). Although the algorithm still needs to be improved, the PCA-MLRD model has the potential to become a useful tool for heavy metal source apportionment in soil. Copyright © 2018 Elsevier B.V. All rights reserved.
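
    The paper does not publish PCA-MLRD as code; the sketch below only illustrates the general two-step idea, PCA to extract pollution factors followed by regression of factor scores on distance to a candidate emission, using synthetic survey data and a hypothetical mine location.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_sites = 200
# Hypothetical soil survey: site coordinates (km) and metal concentrations.
coords = rng.uniform(0, 10, size=(n_sites, 2))
mine = np.array([2.0, 8.0])                       # candidate emission location
d_mine = np.linalg.norm(coords - mine, axis=1)
metals = np.column_stack([
    50 / (1 + d_mine) + rng.normal(0, 1, n_sites),    # Pb, mine-dominated
    5 / (1 + d_mine) + rng.normal(0, 0.2, n_sites),   # Cd
    rng.normal(20, 2, n_sites),                       # background element
])

scores = PCA(n_components=2).fit_transform(metals)    # step 1: PCA factors
# Step 2: regress each factor on a function of distance to the candidate
# source; a strong inverse-distance fit links the factor to that emission.
inv_d = 1 / (1 + d_mine[:, None])
reg = LinearRegression().fit(inv_d, scores[:, 0])
print(f"R^2 of factor 1 vs inverse distance to mine: "
      f"{reg.score(inv_d, scores[:, 0]):.2f}")
```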

  12. Cell sources for in vitro human liver cell culture models

    Science.gov (United States)

    Freyer, Nora; Damm, Georg; Seehofer, Daniel; Knöspel, Fanny

    2016-01-01

    In vitro liver cell culture models are gaining increasing importance in pharmacological and toxicological research. The source of cells used is critical for the relevance and the predictive value of such models. Primary human hepatocytes (PHH) are currently considered to be the gold standard for hepatic in vitro culture models, since they directly reflect the specific metabolism and functionality of the human liver; however, the scarcity and difficult logistics of PHH have driven researchers to explore alternative cell sources, including liver cell lines and pluripotent stem cells. Liver cell lines generated from hepatomas or by genetic manipulation are widely used due to their good availability, but they are generally altered in certain metabolic functions. For the past few years, adult and pluripotent stem cells have been attracting increasing attention, due to their ability to proliferate and to differentiate into hepatocyte-like cells in vitro. However, controlling the differentiation of these cells is still a challenge. This review gives an overview of the major human cell sources under investigation for in vitro liver cell culture models, including primary human liver cells, liver cell lines, and stem cells. The promises and challenges of different cell types are discussed with a focus on the complex 2D and 3D culture approaches under investigation for improving liver cell functionality in vitro. Finally, the specific application options of individual cell sources in pharmacological research or disease modeling are described. PMID:27385595

  13. Electrode spanning with partial tripolar stimulation mode in cochlear implants.

    Science.gov (United States)

    Wu, Ching-Chih; Luo, Xin

    2014-12-01

    The perceptual effects of electrode spanning (i.e., the use of nonadjacent return electrodes) in partial tripolar (pTP) mode were tested on a main electrode EL8 in five cochlear implant (CI) users. Current focusing was controlled by σ (the ratio of current returned within the cochlea), and current steering was controlled by α (the ratio of current returned to the basal electrode). Experiment 1 tested whether asymmetric spanning with α = 0.5 can create additional channels around standard pTP stimuli. It was found that in general, apical spanning (i.e., returning current to EL6 rather than EL7) elicited a pitch between those of standard pTP stimuli on main electrodes EL8 and EL9, while basal spanning (i.e., returning current to EL10 rather than EL9) elicited a pitch between those of standard pTP stimuli on main electrodes EL7 and EL8. The pitch increase caused by apical spanning was more salient than the pitch decrease caused by basal spanning. To replace the standard pTP channel on the main electrode EL8 when EL7 or EL9 is defective, experiment 2 tested asymmetrically spanned pTP stimuli with various α, and experiment 3 tested symmetrically spanned pTP stimuli with various σ. The results showed that pitch increased with decreasing α in asymmetric spanning, or with increasing σ in symmetric spanning. Apical spanning with α around 0.69 and basal spanning with α around 0.38 may both elicit a similar pitch as the standard pTP stimulus. With the same σ, the symmetrically spanned pTP stimulus was higher in pitch than the standard pTP stimulus. A smaller σ was thus required for symmetric spanning to match the pitch of the standard pTP stimulus. In summary, electrode spanning is an effective field-shaping technique that is useful for adding spectral channels and handling defective electrodes with CIs.
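
    Following the definitions of σ and α above, the return-current split of a pTP stimulus is simple arithmetic; a sketch, with electrode labels mirroring the examples in the abstract:

```python
def ptp_currents(i_main, sigma, alpha):
    """Return-current split for a partial tripolar (pTP) stimulus.
    sigma: fraction of current returned within the cochlea;
    alpha: fraction of the intracochlear return sent to the basal electrode.
    The remaining (1 - sigma) returns to the extracochlear ground."""
    return {"main": i_main,
            "basal return": -sigma * alpha * i_main,
            "apical return": -sigma * (1.0 - alpha) * i_main,
            "extracochlear": -(1.0 - sigma) * i_main}

# Standard symmetric pTP on main electrode EL8 (returns on EL7 and EL9);
# in spanning configurations the same fractions flow to non-adjacent
# electrodes (e.g., EL6 or EL10) instead.
print(ptp_currents(1.0, sigma=0.75, alpha=0.5))
```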

  14. Revealing transboundary and local air pollutant sources affecting Metro Manila through receptor modeling studies

    International Nuclear Information System (INIS)

    Pabroa, Preciosa Corazon B.; Bautista VII, Angel T.; Santos, Flora L.; Racho, Joseph Michael D.

    2011-01-01

    Ambient fine particulate matter (PM2.5) levels at the Metro Manila air sampling stations of the Philippine Nuclear Research Institute were found to be above the WHO guideline value of 10 μg/m3, indicating, in general, very poor air quality in the area. The elemental components of the fine particulate matter were obtained using energy-dispersive X-ray fluorescence spectrometry. Positive matrix factorization (PMF), a receptor modelling tool, was used to identify and apportion air pollution sources. Locations of probable transboundary air pollutants were evaluated using the HYSPLIT (Hybrid Single Particle Lagrangian Integrated Trajectory) model, while locations of probable local air pollutant sources were determined using the conditional probability function (CPF). Air pollutant sources can be either natural or anthropogenic. This study has shown natural air pollutant sources, such as the eruptions of Bulusan volcano in 2006 and of Anatahan volcano in 2005, to have impacted the region. Fine soil was shown to have originated from China's Mu Us Desert some time in 2004. Smoke in the fine fraction in 2006 shows indications of coming from forest fires in Sumatra and Borneo. Fine particulate Pb in Valenzuela was shown to be coming from the surrounding area. Many more significant air pollution impacts can be evaluated by identifying probable air pollutant sources through elemental fingerprints and locating these sources with HYSPLIT and CPF. (author)
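
    The conditional probability function (CPF) used above is straightforward to compute: for each wind-direction sector, it is the fraction of samples in that sector whose concentration exceeds a chosen percentile threshold. A sketch on synthetic data; the 30° sector width and 75th percentile are common choices, not values from the study:

```python
import numpy as np

def cpf(wind_dir_deg, conc, sector_width=30.0, percentile=75):
    """For each wind sector, the fraction of samples in that sector whose
    concentration exceeds the given percentile of all concentrations."""
    threshold = np.percentile(conc, percentile)
    edges = np.arange(0.0, 360.0 + sector_width, sector_width)
    sector = np.digitize(wind_dir_deg % 360.0, edges) - 1
    out = {}
    for s in range(len(edges) - 1):
        n_total = np.sum(sector == s)
        n_high = np.sum((sector == s) & (conc > threshold))
        out[f"{edges[s]:.0f}-{edges[s + 1]:.0f} deg"] = (
            n_high / n_total if n_total else np.nan)
    return out

rng = np.random.default_rng(7)
wd = rng.uniform(0, 360, 500)                       # wind directions
pb = np.where((wd > 90) & (wd < 150), 3.0, 1.0) * rng.lognormal(0, 0.3, 500)
print(cpf(wd, pb))   # the 90-150 deg sectors show the highest CPF values
```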

  15. Comparing predictive models of glioblastoma multiforme built using multi-institutional and local data sources.

    Science.gov (United States)

    Singleton, Kyle W; Hsu, William; Bui, Alex A T

    2012-01-01

    The growing amount of electronic data collected from patient care and clinical trials is motivating the creation of national repositories where multiple institutions share data about their patient cohorts. Such efforts aim to provide sufficient sample sizes for data mining and predictive modeling, ultimately improving treatment recommendations and patient outcome prediction. While these repositories offer the potential to improve our understanding of a disease, potential issues need to be addressed to ensure that multi-site data and resultant predictive models are useful to non-contributing institutions. In this paper we examine the challenges of utilizing National Cancer Institute datasets for modeling glioblastoma multiforme. We created several types of prognostic models and compared their results against models generated using data solely from our institution. While overall model performance between the data sources was similar, different variables were selected during model generation, suggesting that mapping data resources between models is not a straightforward issue.

  16. Source-term development for a contaminant plume for use by multimedia risk assessment models

    International Nuclear Information System (INIS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    1999-01-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments for use at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool

  17. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    International Nuclear Information System (INIS)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P.

    2012-09-01

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)
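
    As a toy illustration of the BBN idea behind RASTEP, the sketch below inverts a single plant observation into a posterior over accident classes, each of which would map to a pre-calculated source term. All probabilities are invented for illustration; the real networks encode many coupled plant states.

```python
# Toy two-node network: accident class -> containment-sprays observation.
# Numbers are illustrative, not RASTEP's plant-specific probabilities.
p_class = {"early_release": 0.1, "late_release": 0.3, "intact": 0.6}
p_sprays_given_class = {          # P(sprays observed working | accident class)
    "early_release": 0.2, "late_release": 0.6, "intact": 0.9}

def posterior(sprays_working: bool):
    """P(accident class | sprays observation) by direct Bayes inversion;
    each class would then map to a pre-calculated source term."""
    joint = {c: p_class[c] * (p_sprays_given_class[c] if sprays_working
                              else 1.0 - p_sprays_given_class[c])
             for c in p_class}
    z = sum(joint.values())
    return {c: v / z for c, v in joint.items()}

print(posterior(sprays_working=False))
# -> early_release becomes much more probable, shifting the predicted
#    source-term mix toward larger, earlier releases.
```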

  18. Using Bayesian Belief Network (BBN) modelling for Rapid Source Term Prediction. RASTEP Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Knochenhauer, M.; Swaling, V.H.; Alfheim, P. [Scandpower AB, Sundbyberg (Sweden)

    2012-09-15

    The project is connected to the development of RASTEP, a computerized source term prediction tool aimed at providing a basis for improving off-site emergency management. RASTEP uses Bayesian belief networks (BBN) to model severe accident progression in a nuclear power plant in combination with pre-calculated source terms (i.e., amount, timing, and pathway of released radio-nuclides). The output is a set of possible source terms with associated probabilities. In the NKS project, a number of complex issues associated with the integration of probabilistic and deterministic analyses are addressed. This includes issues related to the method for estimating source terms, signal validation, and sensitivity analysis. One major task within Phase 1 of the project addressed the problem of how to make the source term module flexible enough to give reliable and valid output throughout the accident scenario. Of the alternatives evaluated, it is recommended that RASTEP is connected to a fast running source term prediction code, e.g., MARS, with a possibility of updating source terms based on real-time observations. (Author)

  19. Assessing the impact of different sources of topographic data on 1-D hydraulic modelling of floods

    Science.gov (United States)

    Ali, A. Md; Solomatine, D. P.; Di Baldassarre, G.

    2015-01-01

    Topographic data, such as digital elevation models (DEMs), are essential input in flood inundation modelling. DEMs can be derived from several sources, either through remote sensing techniques (spaceborne or airborne imagery) or from traditional methods (ground survey). The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), the Shuttle Radar Topography Mission (SRTM), light detection and ranging (lidar), and topographic contour maps are some of the most commonly used sources of data for DEMs. These DEMs are characterized by different precision and accuracy. On the one hand, the spatial resolution of low-cost DEMs from satellite imagery, such as ASTER and SRTM, is rather coarse (around 30 to 90 m). On the other hand, the lidar technique is able to produce high-resolution DEMs (at around 1 m), but at a much higher cost. Lastly, contour mapping based on ground survey is time consuming, particularly for larger scales, and may not be possible for some remote areas. The use of these different sources of DEM obviously affects the results of flood inundation models. This paper shows and compares a number of 1-D hydraulic models developed using HEC-RAS as model code and the aforementioned sources of DEM as geometric input. To test model selection, the outcomes of the 1-D models were also compared, in terms of flood water levels, to the results of 2-D models (LISFLOOD-FP). The study was carried out on a reach of the Johor River, in Malaysia. The effect of the different sources of DEMs (and different resolutions) was investigated by considering the performance of the hydraulic models in simulating flood water levels as well as inundation maps. The outcomes of our study show that the use of different DEMs has serious implications for the results of hydraulic models. The outcomes also indicate that the loss of model accuracy due to re-sampling the highest resolution DEM (i.e. lidar 1 m) to lower resolution is much less than the loss of model accuracy due

  20. A modeling study of saltwater intrusion in the Andarax delta area using multiple data sources

    DEFF Research Database (Denmark)

    Antonsson, Arni Valur; Engesgaard, Peter Knudegaard; Jorreto, Sara

    In groundwater model development, construction of the conceptual model is one of the (initial and) critical aspects that determines the model's reliability and applicability in terms of e.g. system (hydrogeological) understanding, groundwater quality predictions, and general use in water resources...... The validity of a conceptual model is determined by different factors, where both data quantity and quality are of crucial importance. Often, when dealing with saltwater intrusion, data is limited. Therefore, using different sources (and types) of data can be beneficial and increase......

  1. Unified Impedance Model of Grid-Connected Voltage-Source Converters

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Harnefors, Lennart; Blaabjerg, Frede

    2018-01-01

    This paper proposes a unified impedance model of grid-connected voltage-source converters for analyzing dynamic influences of the Phase-Locked Loop (PLL) and current control. The mathematical relations between the impedance models in the different domains are first explicitly revealed by means of complex transfer functions and complex space vectors. A stationary (αβ-) frame impedance model is then proposed, which not only predicts the stability impact of the PLL, but also reveals its frequency coupling effect explicitly. Furthermore, the impedance shaping effect of the PLL on the current control...... results and theoretical analysis confirm the effectiveness of the stationary-frame impedance model....

  2. Monte Carlo modeling of neutron imaging at the SINQ spallation source

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.; Lehmann, E.H.; Pitcher, E.J.; McKinney, G.W.

    2003-01-01

    Modeling of the Swiss Spallation Neutron Source (SINQ) has been used to demonstrate the neutron radiography capability of the newly released MPI-version of the MCNPX Monte Carlo code. A detailed MCNPX model was developed of SINQ and its associated neutron transmission radiography (NEUTRA) facility. Preliminary validation of the model was performed by comparing the calculated and measured neutron fluxes in the NEUTRA beam line, and a simulated radiography image was generated for a sample consisting of steel tubes containing different materials. This paper describes the SINQ facility, provides details of the MCNPX model, and presents preliminary results of the neutron imaging. (authors)

  3. A Monte Carlo multiple source model applied to radiosurgery narrow photon beams

    International Nuclear Information System (INIS)

    Chaves, A.; Lopes, M.C.; Alves, C.C.; Oliveira, C.; Peralta, L.; Rodrigues, P.; Trindade, A.

    2004-01-01

    Monte Carlo (MC) methods are nowadays often used in the field of radiotherapy. Through successive steps, radiation fields are simulated, producing source Phase Space Data (PSD) that enable a dose calculation with good accuracy. Narrow photon beams used in radiosurgery can also be simulated by MC codes. However, the poor efficiency in simulating these narrow photon beams produces PSD whose quality prevents calculating dose with the required accuracy. To overcome this difficulty, a multiple source model was developed that enhances the quality of the reconstructed PSD while also reducing computation time and storage requirements. This multiple source model was based on the full MC simulation, performed with the MC code MCNP4C, of the Siemens Mevatron KD2 (6 MV mode) linear accelerator head and additional collimators. The full simulation allowed the characterization of the particles coming from the accelerator head and from the additional collimators that shape the narrow photon beams used in radiosurgery treatments. Eight relevant photon virtual sources were identified from the full characterization analysis. Spatial and energy distributions were stored in histograms for the virtual sources representing the accelerator head components and the additional collimators. The photon directions were calculated for virtual sources representing the accelerator head components whereas, for the virtual sources representing the additional collimators, they were recorded into histograms. All these histograms were included in the DPM MC code and, using a sampling procedure that reconstructed the PSDs, dose distributions were calculated in a water phantom divided into 20 000 voxels of 1×1×5 mm³. The model accurately calculates dose distributions in the water phantom for all the additional collimators; for depth dose curves, associated errors at 2σ were lower than 2.5% until a depth of 202.5 mm for all the additional collimators and for profiles at various depths, deviations between measured
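
    The histogram-sampling step of such a multiple source model can be sketched as follows; the binned energy and radial distributions below are invented stand-ins for the stored virtual-source histograms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose a full head simulation gave us, for one virtual source, binned
# energy and radial-position distributions (all numbers illustrative).
e_edges = np.linspace(0.0, 6.0, 61)                 # MeV bins
e_counts = np.exp(-0.5 * ((e_edges[:-1] - 1.5) / 1.0) ** 2)
r_edges = np.linspace(0.0, 2.0, 41)                 # cm bins
r_counts = np.exp(-r_edges[:-1] / 0.5)

def sample_hist(edges, counts, n):
    """Draw n samples from a binned distribution: pick a bin by weight,
    then a uniform position inside it (how PSDs are rebuilt from histograms)."""
    p = counts / counts.sum()
    idx = rng.choice(len(counts), size=n, p=p)
    return rng.uniform(edges[idx], edges[idx + 1])

energies = sample_hist(e_edges, e_counts, 100000)
radii = sample_hist(r_edges, r_counts, 100000)
print(f"mean energy {energies.mean():.2f} MeV, mean radius {radii.mean():.2f} cm")
```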

  4. A linear ion optics model for extraction from a plasma ion source

    International Nuclear Information System (INIS)

    Dietrich, J.

    1987-01-01

    A linear ion optics model for ion extraction from a plasma ion source is presented, based on the paraxial equations which account for lens effects, space charge and finite source ion temperature. This model is applied to three- and four-electrode extraction systems with circular apertures. The results are compared with experimental data and numerical calculations in the literature. It is shown that the improved calculations of space charge effects and lens effects allow better agreement to be obtained than in earlier linear optics models. A principal result is that the model presented here describes the dependence of the optimum perveance on the aspect ratio in a manner similar to the nonlinear optics theory. (orig.)
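
    A paraxial extraction column can be sketched with 2×2 transfer matrices, as below; the focal lengths and drift lengths are hypothetical, and the space-charge and ion-temperature terms that distinguish the paper's model are omitted.

```python
import numpy as np

def drift(d):
    """Paraxial drift of length d (m)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Thin aperture lens of focal length f (m); negative f diverges."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Hypothetical three-electrode extraction column as a matrix product
# (rightmost element acts first on the ray):
M = drift(0.05) @ thin_lens(-0.08) @ drift(0.01) @ thin_lens(0.03)

# Propagate an edge ray (radius r0, divergence r0') through the column.
r0 = np.array([2e-3, 0.0])          # 2 mm radius, zero initial divergence
r_out, rp_out = M @ r0
print(f"exit radius {r_out * 1e3:.2f} mm, divergence {rp_out * 1e3:.2f} mrad")
```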

  5. The SSI TOOLBOX Source Term Model SOSIM - Screening for important radionuclides and parameter sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Avila Moreno, R.; Barrdahl, R.; Haegg, C.

    1995-05-01

    The main objective of the present study was to carry out a screening and a sensitivity analysis of the SSI TOOLBOX source term model SOSIM. This model is part of the SSI TOOLBOX for radiological impact assessment of the Swedish disposal concept for high-level waste, KBS-3. The outputs of interest for this purpose were: the total released fraction, the time of total release, the time and value of the maximum release rate, and the dose rates after direct releases to the biosphere. The source term equations were derived, and simple equations and methods for their calculation were proposed. A literature survey was performed in order to determine a characteristic variation range and a nominal value for each model parameter. In order to reduce the model uncertainties, the authors recommend a change in the initial boundary condition for the solution of the diffusion equation for highly soluble nuclides. 13 refs.
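
    One-at-a-time parameter screening of a source term model can be sketched as below; the release-fraction expression and parameter ranges are placeholders, not the SOSIM equations.

```python
import numpy as np

def released_fraction(t, k_diss, f_inst):
    """Toy source term: an instantaneous-release fraction plus first-order
    congruent dissolution (a placeholder, not the SOSIM model)."""
    return f_inst + (1.0 - f_inst) * (1.0 - np.exp(-k_diss * t))

nominal = {"k_diss": 1e-5, "f_inst": 0.03}      # per-year rate, fraction
ranges = {"k_diss": (1e-6, 1e-4), "f_inst": (0.01, 0.10)}
t = 1e4                                          # years

base = released_fraction(t, **nominal)
# Vary one parameter at a time across its range, holding the rest nominal.
for name, (lo_v, hi_v) in ranges.items():
    out = []
    for v in (lo_v, hi_v):
        p = dict(nominal)
        p[name] = v
        out.append(released_fraction(t, **p))
    print(f"{name}: output spans {out[0]:.3f} .. {out[1]:.3f} (nominal {base:.3f})")
```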

  6. Consistent modelling of wind turbine noise propagation from source to receiver

    DEFF Research Database (Denmark)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound ... propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine....
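
    A crude flavour of the ground-characteristics effect can be given with a two-ray propagation sketch; the heights, frequency, and reflection coefficients are assumed, and none of the LES flow modelling from the paper enters.

```python
import numpy as np

c = 343.0                        # speed of sound, m/s
f = 500.0                        # single frequency for illustration, Hz
hs, hr = 90.0, 2.0               # hub (source) and receiver heights, m

def level_re_free_field(r, refl_coeff):
    """Two-ray model: direct + ground-reflected path, with a real
    reflection coefficient standing in for the ground impedance."""
    d_dir = np.hypot(r, hs - hr)
    d_ref = np.hypot(r, hs + hr)
    k = 2 * np.pi * f / c
    p = np.exp(1j * k * d_dir) / d_dir + refl_coeff * np.exp(1j * k * d_ref) / d_ref
    return 20 * np.log10(np.abs(p) * d_dir)   # dB relative to free field

for r in (200.0, 1000.0, 2000.0):
    hard = level_re_free_field(r, 0.95)       # acoustically hard ground
    soft = level_re_free_field(r, 0.4)        # grassy / absorptive ground
    print(f"{r:6.0f} m: hard {hard:+.1f} dB, soft {soft:+.1f} dB re free field")
```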

  7. Beam-based model of broad-band impedance of the Diamond Light Source

    Science.gov (United States)

    Smaluk, Victor; Martin, Ian; Fielder, Richard; Bartolini, Riccardo

    2015-06-01

    In an electron storage ring, the interaction between a single-bunch beam and the vacuum chamber impedance affects beam parameters that can be measured rather precisely, which makes it possible to develop beam-based numerical models of the longitudinal and transverse impedances. At the Diamond Light Source (DLS), the model parameters were obtained from a set of measured data including the current-dependent shift of betatron tunes and synchronous phase, chromatic damping rates, and bunch lengthening. A MATLAB code for multiparticle tracking has been developed. The tracking results and analytical estimations are quite consistent with the measured data. Since Diamond has the shortest natural bunch length among all light sources in standard operation, these studies of collective effects with short bunches are relevant to many facilities, including the next generation of light sources.
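
    The beam-based idea — inferring impedance parameters from current-dependent observables — can be sketched with a linear fit of tune versus bunch current. The numbers below are synthetic, and the machine-specific conversion from the fitted slope to an effective impedance is omitted.

```python
import numpy as np

# Hypothetical measured vertical tune vs single-bunch current (illustrative).
current_mA = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 1.6, 2.0])
tune = 0.2740 - 1.8e-4 * current_mA \
       + np.random.default_rng(1).normal(0, 5e-6, current_mA.size)

# A linear fit gives dQ/dI, from which an effective transverse impedance
# can be inferred once the machine constants are folded in.
slope, intercept = np.polyfit(current_mA, tune, 1)
print(f"dQ/dI = {slope:.2e} per mA (zero-current tune {intercept:.4f})")
```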

  8. Beam-based model of broad-band impedance of the Diamond Light Source

    Directory of Open Access Journals (Sweden)

    Victor Smaluk

    2015-06-01

    In an electron storage ring, the interaction between a single-bunch beam and the vacuum chamber impedance affects beam parameters that can be measured rather precisely, which makes it possible to develop beam-based numerical models of the longitudinal and transverse impedances. At the Diamond Light Source (DLS), the model parameters were obtained from a set of measured data including the current-dependent shift of betatron tunes and synchronous phase, chromatic damping rates, and bunch lengthening. A MATLAB code for multiparticle tracking has been developed. The tracking results and analytical estimations are quite consistent with the measured data. Since Diamond has the shortest natural bunch length among all light sources in standard operation, these studies of collective effects with short bunches are relevant to many facilities, including the next generation of light sources.

  9. Paternal smoking habits affect the reproductive life span of daughters

    DEFF Research Database (Denmark)

    Fukuda, Misao; Fukuda, Kiyomi; Shimizu, Takashi

    2011-01-01

    The present study assessed whether the smoking habits of fathers around the time of conception affected the period in which daughters experienced menstrual cycles (i.e., the reproductive life span). The study revealed that the smoking habits of the father shortened the daughters' reproductive life span compared with daughters whose fathers did not smoke....

  10. 23 CFR 650.809 - Movable span bridges.

    Science.gov (United States)

    2010-04-01

    23 CFR 650.809 (2010-04-01): Movable span bridges. Federal Highway Administration, Department of Transportation; Engineering and Traffic Operations; Bridges, Structures, and Hydraulics; Navigational Clearances for Bridges. § 650.809 Movable span bridges. A fixed bridge...

  11. Developmental Dyslexia: The Visual Attention Span Deficit Hypothesis

    Science.gov (United States)

    Bosse, Marie-Line; Tainturier, Marie Josephe; Valdois, Sylviane

    2007-01-01

    The visual attention (VA) span is defined as the number of distinct visual elements that can be processed in parallel in a multi-element array. Both recent empirical data and theoretical accounts suggest that a VA span deficit might contribute to developmental dyslexia, independently of a phonological disorder. In this study, this hypothesis was…

  12. Increasing Endurance by Building Fluency: Precision Teaching Attention Span.

    Science.gov (United States)

    Binder, Carl; And Others

    1990-01-01

    Precision teaching techniques can be used to chart students' attention span or endurance. Individual differences in attention span can then be better understood and dealt with effectively. The effects of performance duration on performance level, on error rates, and on learning rates are discussed. Implications for classroom practice are noted.…

  13. On the number of spanning trees in random regular graphs

    DEFF Research Database (Denmark)

    Greenhill, Catherine; Kwan, Matthew; Wind, David Kofoed

    2014-01-01

    Let d >= 3 be a fixed integer. We give an asymptotic formula for the expected number of spanning trees in a uniformly random d-regular graph with n vertices. (The asymptotics are as n -> infinity, restricted to even n if d is odd.) We also obtain the asymptotic distribution of the number of spanning...
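
    For small graphs, the expected count can be checked empirically via Kirchhoff's matrix-tree theorem: any cofactor of the graph Laplacian equals the number of spanning trees. A minimal sketch using networkx (graph sizes and trial counts are arbitrary):

```python
import networkx as nx
import numpy as np

def spanning_trees(g):
    """Count spanning trees via the matrix-tree theorem: the determinant
    of the Laplacian with one row and column deleted."""
    lap = nx.laplacian_matrix(g).toarray().astype(float)
    sign, logdet = np.linalg.slogdet(lap[1:, 1:])
    return sign * np.exp(logdet)

d, n = 3, 20
counts = [spanning_trees(nx.random_regular_graph(d, n, seed=s)) for s in range(50)]
print(f"mean number of spanning trees over 50 random {d}-regular graphs "
      f"on {n} vertices: {np.mean(counts):.3e}")
```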

  14. Boundary Spanning in Higher Education: How Universities Can Enable Success

    Science.gov (United States)

    Skolaski, Jennifer Pauline

    2012-01-01

    Purpose: The purpose of this research is to better understand the identity and work of academic and extension staff who have boundary spanning responsibilities. The results will help universities, especially public land-grant universities with an outreach mission, to create stronger policies and systems to support boundary spanning staff members…

  15. Experimental validation of a kilovoltage x-ray source model for computing imaging dose

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick, E-mail: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada); Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Tambasco, Mauro, E-mail: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)

    2014-04-15

    Purpose: To introduce and validate a kilovoltage (kV) x-ray source model and characterization method to compute absorbed dose accrued from kV x-rays. Methods: The authors propose a simplified virtual point source model and characterization method for a kV x-ray source. The source is modeled by: (1) characterizing the spatial spectral and fluence distributions of the photons at a plane at the isocenter, and (2) creating a virtual point source from which photons are generated to yield the derived spatial spectral and fluence distribution at the isocenter of an imaging system. The spatial photon distribution is determined by in-air relative dose measurements along the transverse (x) and radial (y) directions. The spectrum is characterized using transverse-axis half-value layer measurements and the nominal peak potential (kVp). This source modeling approach is used to characterize a Varian® On-Board Imager (OBI®) for four default cone-beam CT beam qualities: beams using a half bowtie filter (HBT) with 110 and 125 kVp, and a full bowtie filter (FBT) with 100 and 125 kVp. The source model and characterization method were validated by comparing dose computed by the authors' in-house software (kVDoseCalc) to relative dose measurements in a homogeneous and a heterogeneous block phantom comprised of tissue-, bone-, and lung-equivalent materials. Results: The characterized beam qualities and spatial photon distributions are comparable to reported values in the literature. Agreement between computed and measured percent depth-dose curves is ≤2% in the homogeneous block phantom and ≤2.5% in the heterogeneous block phantom. Transverse-axis profiles taken at depths of 2 and 6 cm in the homogeneous block phantom show an agreement within 4%. All transverse-axis dose profiles in water and in bone- and lung-equivalent materials for beams using a HBT agree within 5%. Measured profiles of FBT beams in bone- and lung-equivalent materials were higher than their
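
    The half-value layer part of such a characterization can be illustrated with a toy polychromatic beam: given assumed bin weights and aluminium attenuation coefficients, the HVL is the thickness at which transmission falls to one half. All numerical values below are rough assumptions, not reference data.

```python
import numpy as np
from scipy.optimize import brentq

# Toy polychromatic beam: a few energy bins with weights, and approximate
# aluminium linear attenuation coefficients (assumed values, 1/cm).
E_keV = np.array([40.0, 60.0, 80.0, 100.0])
w = np.array([0.2, 0.4, 0.3, 0.1])
mu_al = np.array([1.50, 0.75, 0.55, 0.45])

def transmission(t_cm):
    """Fraction of beam intensity passing t_cm of Al (Beer-Lambert per bin)."""
    return np.sum(w * np.exp(-mu_al * t_cm)) / w.sum()

# Half-value layer: the root of transmission(t) - 0.5 on a bracketing interval.
hvl = brentq(lambda t: transmission(t) - 0.5, 0.0, 10.0)
print(f"HVL = {hvl:.2f} cm Al  (= {hvl * 10:.1f} mm Al)")
```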

  16. Testing and intercomparison of model predictions of radionuclide migration from a hypothetical area source

    International Nuclear Information System (INIS)

    O'Brien, R.S.; Yu, C.; Zeevaert, T.; Olyslaegers, G.; Amado, V.; Setlow, L.W.; Waggitt, P.W.

    2008-01-01

    This work was carried out as part of the International Atomic Energy Agency's EMRAS program. One aim of the work was to develop scenarios for testing computer models designed to simulate radionuclide migration in the environment, and to use these scenarios for testing the models and comparing predictions from different models. This paper presents the results of the development and testing of a hypothetical area source of NORM waste/residue using two complex computer models and one screening model. There are significant differences between the complex models in the methods used to model groundwater flow. The hypothetical source was used because of its relative simplicity and because of difficulties encountered in finding comprehensive, well-validated data sets for real sites. The source consisted of a simple repository of uniform thickness, with 1 Bq g⁻¹ of uranium-238 (²³⁸U) (in secular equilibrium with its decay products) distributed uniformly throughout the waste. This approximates real situations such as engineered repositories, waste rock piles, tailings piles and landfills. Specification of the site also included the physical layout, vertical stratigraphic details, the soil type of each layer of material, precipitation and runoff details, groundwater flow parameters, and meteorological data. Calculations were carried out with and without a cover layer of clean soil above the waste, for people working and living at different locations relative to the waste. The predictions of the two complex models showed several differences which need more detailed examination. The scenario is available for testing by other modelers. It can also be used as a planning tool for remediation work or for repository design, by changing the scenario parameters and running the models for a range of different inputs. Further development will include applying models to real scenarios and integrating environmental impact assessment methods with the safety assessment tools currently
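
    A deliberately simple transport sketch in the same spirit — advection with retardation and radioactive decay, no dispersion or geochemistry. All parameter values are assumptions for illustration, not scenario data.

```python
import numpy as np

# Toy 1-D groundwater transport from an area source.
v = 10.0                      # pore-water velocity, m/yr (assumed)
R = 50.0                      # retardation factor for uranium (assumed)
lam = np.log(2) / 4.468e9     # U-238 decay constant, 1/yr

x = np.array([10.0, 100.0, 1000.0])   # distances to hypothetical wells, m

# Retarded travel time and the fraction surviving decay over that time.
travel_time = R * x / v
surviving = np.exp(-lam * travel_time)
for xi, ti, fi in zip(x, travel_time, surviving):
    print(f"x = {xi:6.0f} m: arrival after {ti:,.0f} yr, surviving fraction {fi:.6f}")
```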

  17. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    OpenAIRE

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2010-01-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth & Pope with Durbin's method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous ...
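
    The velocity part of such a Langevin model reduces, in its simplest homogeneous form, to an Ornstein-Uhlenbeck process. The sketch below integrates one velocity component with Euler-Maruyama and checks the stationary variance; the timescale and variance are arbitrary, and none of the near-wall or elliptic-relaxation machinery of the paper is included.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ornstein-Uhlenbeck process with Lagrangian timescale T_L and variance sigma^2:
#   du = -(u / T_L) dt + sqrt(2 sigma^2 / T_L) dW
T_L, sigma = 0.5, 1.0
dt, n_steps, n_particles = 0.01, 2000, 5000

u = rng.normal(0.0, sigma, n_particles)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    u += -u / T_L * dt + np.sqrt(2 * sigma**2 / T_L) * dW

print(f"stationary check: var(u) = {u.var():.3f} (target {sigma**2:.3f})")
```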

  18. Process performance and modelling of anaerobic digestion using source-sorted organic household waste

    DEFF Research Database (Denmark)

    Khoshnevisan, Benyamin; Tsapekos, Panagiotis; Alvarado-Morales, Merlin

    2018-01-01

    Three distinctive start-up strategies for biogas reactors fed with the source-sorted organic fraction of municipal solid waste were investigated to reveal the most reliable procedure for rapid process stabilization. Moreover, the experimental results were compared with mathematical modeling outputs. ... The combination of experimental work and modelling/simulation succeeded in optimizing the start-up process for anaerobic digestion of biopulp under mesophilic conditions.
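
    A minimal kinetic sketch of start-up methane production, standing in for a full anaerobic digestion model; the ultimate yield B0 and rate constant k are illustrative, not the paper's fitted values.

```python
import numpy as np

# First-order kinetics of cumulative methane yield during start-up:
#   B(t) = B0 * (1 - exp(-k t))
B0 = 480.0      # ultimate methane potential, mL CH4 / g VS (assumed)
k = 0.25        # first-order rate constant, 1/day (assumed)

t = np.arange(0, 41)                      # days
B = B0 * (1.0 - np.exp(-k * t))           # cumulative yield curve
print("yield at day 10 / 20 / 40 (mL/gVS):",
      [round(float(B[d])) for d in (10, 20, 40)])
```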

  19. Nanogels as imaging agents for modalities spanning the electromagnetic spectrum.

    Science.gov (United States)

    Chan, Minnie; Almutairi, Adah

    2016-01-21

    In the past few decades, advances in imaging equipment and protocols have expanded the role of imaging in in vivo diagnosis and disease management, especially in cancer. Traditional imaging agents have rapid clearance and low specificity for disease detection. To improve accuracy in disease identification, localization and assessment, novel nanomaterials are frequently explored as imaging agents to achieve high detection specificity and sensitivity. Promising materials for this purpose are hydrogel nanoparticles, whose high hydrophilicity, biocompatibility, and tunable size in the nanometer range make them ideal for imaging. These nanogels (10 to 200 nm) can circumvent uptake by the reticuloendothelial system, allowing longer circulation times than small molecules. In addition, their size and surface properties can be further tailored to optimize their pharmacokinetics for imaging of a particular disease. Herein, we provide a comprehensive review of nanogels as imaging agents in various modalities with sources of signal spanning the electromagnetic spectrum, including MRI, NIR, UV-vis, and PET. Many materials and formulation methods will be reviewed to highlight the versatility of nanogels as imaging agents.

  20. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions-of-interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
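
    The iterative idea can be sketched with a toy scatter model in which scatter is a broad, low-amplitude blur of the primary signal; the kernel form and amplitude are assumptions, not the paper's physics model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy projection: a primary signal plus scatter modelled as a wide blur.
primary = np.zeros((128, 128))
primary[32:96, 32:96] = 1.0

def scatter_of(img):
    """Assumed scatter model: 30% amplitude, very broad Gaussian kernel."""
    return 0.3 * gaussian_filter(img, sigma=20)

measured = primary + scatter_of(primary)

# Iterative correction: estimate scatter from the current primary estimate
# and subtract it from the measurement, repeating until convergence.
est = measured.copy()
for it in range(5):
    est = measured - scatter_of(np.clip(est, 0, None))
    err = np.abs(est - primary).max()
    print(f"iteration {it + 1}: max error {err:.4f}")
```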