WorldWideScience

Sample records for equivalent source approach

  1. Combining virtual observatory and equivalent source dipole approaches to describe the geomagnetic field with Swarm measurements

    Science.gov (United States)

    Saturnino, Diana; Langlais, Benoit; Amit, Hagay; Civet, François; Mandea, Mioara; Beucler, Éric

    2018-03-01

    A detailed description of the main geomagnetic field and of its temporal variations (i.e., the secular variation or SV) is crucial to understanding the geodynamo. Although the SV is known with high accuracy at ground magnetic observatory locations, the globally uneven distribution of the observatories hampers the determination of a detailed global pattern of the SV. Over the past two decades, satellites have provided global surveys of the geomagnetic field which have been used to derive global spherical harmonic (SH) models through some strict data selection schemes to minimise external field contributions. However, discrepancies remain between ground measurements and field predictions by these models; indeed the global models do not reproduce small spatial scales of the field temporal variations. To overcome this problem we propose to directly extract time series of the field and its temporal variation from satellite measurements as it is done at observatory locations. We follow a Virtual Observatory (VO) approach and define a global mesh of VOs at satellite altitude. For each VO and each given time interval we apply an Equivalent Source Dipole (ESD) technique to reduce all measurements to a unique location. Synthetic data are first used to validate the new VO-ESD approach. Then, we apply our scheme to data from the first two years of the Swarm mission. For the first time, a 2.5° resolution global mesh of VO time series is built. The VO-ESD derived time series are locally compared to ground observations as well as to satellite-based model predictions. Our approach is able to describe detailed temporal variations of the field at local scales. The VO-ESD time series are then used to derive global spherical harmonic models. For a simple SH parametrization the model describes well the secular trend of the magnetic field both at satellite altitude and at the surface. 
As more data become available, longer VO-ESD time series can be derived and consequently used to …

  2. The Source Equivalence Acceleration Method

    International Nuclear Information System (INIS)

    Everson, Matthew S.; Forget, Benoit

    2015-01-01

Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long-sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine-group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine-group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse-group and fine-group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361-group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power …

  3. The numerical simulation of heat transfer during a hybrid laser-MIG welding using equivalent heat source approach

    Science.gov (United States)

    Bendaoud, Issam; Matteï, Simone; Cicala, Eugen; Tomashchuk, Iryna; Andrzejewski, Henri; Sallamand, Pierre; Mathieu, Alexandre; Bouchaud, Fréderic

    2014-03-01

The present study is dedicated to the numerical simulation of an industrial case of hybrid laser-MIG welding of high-thickness duplex steel UR2507Cu with a Y-shaped chamfer geometry. It consists of simulating the heat transfer phenomena using an equivalent heat source approach implemented in the finite element software COMSOL Multiphysics. A numerical exploratory design method is used to identify the heat source parameters that minimise the difference between the numerical results and the experiment, namely the shape of the welded zone and the temperature evolution at different locations. The obtained results were found to be in good agreement with experiment, both for the melted zone shape and the thermal history.

  4. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    International Nuclear Information System (INIS)

    Yao Dezhong; He Bin

    2003-01-01

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping

  5. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yao Dezhong [School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu City, 610054, Sichuan Province (China); He Bin [The University of Illinois at Chicago, IL (United States)

    2003-11-07

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping.

  6. Preparation of water-equivalent radioactive solid sources

    International Nuclear Information System (INIS)

    Yamazaki, Ione M.; Koskinas, Marina F.; Dias, Mauro S.

    2011-01-01

    The development of water-equivalent solid sources in two geometries, cylindrical and flat without the need of irradiation in a strong gamma radiation source to obtain polymerization is described. These sources should have density similar to water and good uniformity. Therefore, the density and uniformity of the distribution of radioactive material in the resins were measured. The variation of these parameters in the cylindrical geometry was better than 2.0% for the density and 2.3% for the uniformity and for the flat geometry the values obtained were better than 2.0 % and better than 1.3%, respectively. These values are in good agreement with the literature. (author)

  7. Constraints on equivalent elastic source models from near-source data

    International Nuclear Information System (INIS)

    Stump, B.

    1993-01-01

A phenomenologically based seismic source model is important in quantifying the physical processes that affect the observed seismic radiation in the linear-elastic regime. Representations such as these were used to assess yield effects on seismic waves under a Threshold Test Ban Treaty and to help transfer seismic coupling experience from one test site to another. In a non-proliferation environment, these same characterizations find applications in understanding the generation of the different types of body and surface waves from nuclear explosions, single chemical explosions, arrays of chemical explosions used in mining, rock bursts and earthquakes. Seismologists typically begin with an equivalent elastic representation of the source which, when convolved with the propagation path effects, produces a seismogram. The Representation Theorem replaces the true source with an equivalent set of body forces, boundary conditions or initial conditions. An extension of this representation shows the equivalence of the body forces, boundary conditions and initial conditions and replaces the source with a set of force moments, the first-degree moment tensor for a point source representation. The difficulty with this formulation, which can completely describe the observed waveforms when the propagation path effects are known, is in the physical interpretation of the actual physical processes acting in the source volume. Observational data from within the source region, where processes are often nonlinear, linked to numerical models of the important physical processes in this region, are critical to a unique physical understanding of the equivalent elastic source function

  8. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when … are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm … in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm …
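The iteratively reweighted least squares step mentioned in this record can be sketched for a generic linear source-strength estimation problem. The design matrix, the L1-style weighting rule and the toy data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def irls(G, d, n_iter=20, eps=1e-4):
    """Robust fit of source strengths m in G @ m ~ d via iteratively
    reweighted least squares with L1-style weights 1/|residual|."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]      # ordinary LSQ start
    for _ in range(n_iter):
        r = d - G @ m                             # residuals
        w = 1.0 / np.maximum(np.abs(r), eps)      # downweight outliers
        W = np.diag(w)
        m = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
    return m

# Toy example: two "monopole" strengths, one grossly corrupted datum.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 2))
m_true = np.array([1.0, -2.0])
d = G @ m_true
d[0] += 10.0                                      # gross outlier
print(irls(G, d))                                 # close to [1, -2]
```

The reweighting drives the fit toward the clean data, which is why such schemes are popular when satellite tracks contain occasional disturbed measurements.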

  9. Equivalence of two non-commutative geometry approaches

    International Nuclear Information System (INIS)

    Guo Hanying; Wu Ke; Li Jianming.

    1994-10-01

We show that differential calculus on the discrete group Z_2 is equivalent to A. Connes' approach in the case of two discrete points. They are the same theory expressed in terms of different bases, and the discrete group Z_2 is the permutation group of two discrete points. (author). 11 refs

  10. Equivalent properties for perforated plates. An analytical approach

    International Nuclear Information System (INIS)

    Cepkauskas, M.M.; Yang Jianfeng

    2005-01-01

Structures that contain perforated plates have been a subject of interest in the nuclear industry. Steam generators, condensers and reactor internals utilize plates containing holes which act as flow holes or separate structures from flow by using a 'tube bank' design. The equivalent plate method has been beneficial in analyzing perforated plates. Details are found in various papers listed in the bibliography. In addition, the ASME code addresses perforated plates in Appendix A-8000, but is limited to a triangular hole pattern. Early work in this field utilized test data and analytical approaches. This paper is an examination of an analytical approach for determining equivalent plate mechanical and thermal properties. First, a patch of the real plate is identified that provides a model for the necessary physical behavior of the plate. The average strain of this patch is obtained by first applying a simplified one-dimensional mechanical load to the patch, determining stress as a function of position, converting the stress to strain and then integrating the strain over the patch length. This average strain is then equated to the average strain of an equivalent fictitious rectangular patch. This results in equivalent Young's modulus and Poisson's ratio for the equivalent plate in all three orthogonal directions. The corresponding equivalent shear modulus in all three directions is then determined. An orthotropic material stress-strain matrix relationship is provided for the fictitious properties. By equating the real average strain with the fictitious average strain in matrix form, a stress multiplier is found to convert average fictitious stress to average real stress. This same type of process is repeated for heat conduction coefficients and coefficients of thermal expansion. Results are provided for both a square and a triangular hole pattern. 
Reasonable results are obtained when comparing the effective Young's modulus and Poisson's ratio with ASME …

  11. Equivalence between the semiclassical and effective approaches to gravity

    International Nuclear Information System (INIS)

    Paszko, Ricardo; Accioly, Antonio

    2010-01-01

    Semiclassical and effective theories of gravitation are quite distinct from each other as far as the approximation scheme employed is concerned. In fact, while in the semiclassical approach gravity is a classical field and the particles and/or remaining fields are quantized, in the effective approach everything is quantized, including gravity, but the Feynman amplitude is expanded in terms of the momentum exchanged between the particles and/or fields. In this paper, we show that these approaches, despite being radically different, lead to equivalent results if one of the masses under consideration is much greater than all the other energies involved.

  12. Numerical fluid solutions for nonlocal electron transport in hot plasmas: Equivalent diffusion versus nonlocal source

    International Nuclear Information System (INIS)

    Colombant, Denis; Manheimer, Wallace

    2010-01-01

Flux limitation and preheat are important processes in electron transport occurring in laser-produced plasmas. The proper calculation of both of these has been a subject receiving much attention over the entire lifetime of the laser fusion project. Where nonlocal transport (instead of a simple single flux limit) has been modeled, it has always been with what we denote the equivalent diffusion solution, namely treating the transport as only a diffusion process. We introduce here a new approach called the nonlocal source solution and show that it is numerically viable for laser-produced plasmas. It turns out that the equivalent diffusion solution generally underestimates preheat. Furthermore, the advance of the temperature front, and especially the preheat, can be held up by artificial 'thermal barriers'. The nonlocal source method of solution, on the other hand, more accurately describes preheat and can stably calculate the solution for the temperature even if the heat flux is up the gradient.

  13. Practical application of equivalent linearization approaches to nonlinear piping systems

    International Nuclear Information System (INIS)

    Park, Y.J.; Hofmayer, C.H.

    1995-01-01

The use of mechanical energy absorbers as an alternative to conventional hydraulic and mechanical snubbers for piping supports has attracted wide interest among researchers and practitioners in the nuclear industry. The basic design concept of energy absorbers (EAs) is to dissipate the vibration energy of piping systems through nonlinear hysteretic actions of the EAs under design seismic loads. Therefore, some type of nonlinear analysis needs to be performed in the seismic design of piping systems with EA supports. The equivalent linearization approach (ELA) can be a practical analysis tool for this purpose, particularly when the response spectrum approach (RSA) is also incorporated in the analysis formulations. In this paper, the following ELA/RSA methods are presented and compared to each other regarding their practicality and numerical accuracy: the response spectrum approach using the square root of the sum of squares (SRSS) approximation (denoted RS in this paper); the classical ELA based on modal combinations and linear random vibration theory (denoted CELA in this paper); and the stochastic ELA based on direct solution of the response covariance matrix (denoted SELA in this paper). New algorithms to convert response spectra to equivalent power spectral density (PSD) functions are presented for both the CELA and SELA methods. The numerical accuracy of the three ELA methods is studied through a parametric error analysis. Finally, the practicality of the presented analysis is demonstrated in two application examples for piping systems with EA supports

  14. Applications of equivalent linearization approaches to nonlinear piping systems

    International Nuclear Information System (INIS)

    Park, Y.; Hofmayer, C.; Chokshi, N.

    1997-01-01

The piping systems in nuclear power plants, even with conventional snubber supports, are highly complex nonlinear structures under severe earthquake loadings, mainly due to various mechanical gaps in support structures. Some type of nonlinear analysis is necessary to accurately predict the piping responses under earthquake loadings. The application of equivalent linearization approaches (ELAs) to seismic analyses of nonlinear piping systems is presented. Two types of ELAs are studied, i.e., one based on the response spectrum method and the other based on linear random vibration theory. The test results of main steam and feedwater piping systems supported by snubbers and energy absorbers are used to evaluate the numerical accuracy and limitations

  15. Equivalence of two alternative approaches to Schroedinger equations

    International Nuclear Information System (INIS)

    Goenuel, B; Koeksal, K

    2006-01-01

A recently developed simple approach for the exact/approximate solution of Schroedinger equations with constant/position-dependent mass, in which the potential is considered as in perturbation theory, is shown to be equivalent to the one leading to the construction of exactly solvable potentials via the solution of second-order differential equations in terms of known special functions. The formalism of the former resolves the difficulties encountered in the latter in revealing explicitly the corrections to the unperturbed piece of the solutions, whereas the latter obviates the cumbersome procedures used in the calculations of the former

  16. Dynamic determination of equivalent CT source models for personalized dosimetry

    Directory of Open Access Journals (Sweden)

    Rosendahl Stephan

    2017-09-01

With improvements in CT technology, the need for reliable patient-specific dosimetry has increased in recent years. The accuracy of Monte Carlo simulations for absolute dose estimation depends on scanner-specific information on the X-ray spectra of the scanner as well as the form filter geometries and compositions. In this work a mobile measurement setup is developed which allows both the X-ray spectrum and the equivalent form filter of a specific scanner to be determined from just one helical scan in less than 2 minutes.

  17. Measurement of the equivalent fundamental-mode source strength

    International Nuclear Information System (INIS)

    Spriggs, G.D.; Busch, R.D.

    1997-01-01

The steady-state multiplication, M, of a subcritical system that is in equilibrium with an external/intrinsic source is defined as the total neutron-production rate divided by the external/intrinsic neutron source rate, S. The total neutron-production rate, in this context, is the sum of the fission-production rate plus the source rate. Because the system is in equilibrium, the total neutron-production rate is identically equal to the loss rate from the system due to absorption plus leakage. If the source S is distributed identically to the fission source distribution (i.e., in angle, energy, and space), then M will be related to the effective multiplication factor of the system, k_eff, as M = 1/(1 - k_eff)
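The relation at the end of this record can be checked numerically; a minimal sketch (the k_eff value is an arbitrary illustrative choice):

```python
def steady_state_multiplication(k_eff: float) -> float:
    """Multiplication M of a subcritical system driven by a source
    distributed like the fission source: M = 1 / (1 - k_eff)."""
    if not 0.0 <= k_eff < 1.0:
        raise ValueError("formula applies to subcritical systems only")
    return 1.0 / (1.0 - k_eff)

# Equivalently, per source neutron the fission chains contribute
# k_eff + k_eff**2 + ... = k_eff/(1 - k_eff) extra neutrons, so the
# total production per source neutron is 1/(1 - k_eff).
print(steady_state_multiplication(0.95))  # 20 (approximately)
```

As the system approaches criticality (k_eff → 1) the multiplication diverges, which is why source-driven measurements of M are a practical probe of subcriticality.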

  18. A bicategorical approach to Morita equivalence for Von Neumann algebras

    NARCIS (Netherlands)

    R.M. Brouwer (Rachel)

    2003-01-01

We relate Morita equivalence for von Neumann algebras to the "Connes fusion" tensor product between correspondences. In the purely algebraic setting, it is well known that rings are Morita equivalent if and only if they are equivalent objects in a bicategory whose 1-cells are …

  19. A bicategorical approach to Morita equivalence for von Neumann algebras

    International Nuclear Information System (INIS)

    Brouwer, R. M.

    2003-01-01

We relate Morita equivalence for von Neumann algebras to the "Connes fusion" tensor product between correspondences. In the purely algebraic setting, it is well known that rings are Morita equivalent if and only if they are equivalent objects in a bicategory whose 1-cells are bimodules. We present a similar result for von Neumann algebras. We show that von Neumann algebras form a bicategory, having Connes's correspondences as 1-morphisms, and (bounded) intertwiners as 2-morphisms. Further, we prove that two von Neumann algebras are Morita equivalent if and only if they are equivalent objects in the bicategory. The proofs make extensive use of the Tomita-Takesaki modular theory

  20. An equivalent fluid/equivalent medium approach for the numerical simulation of coastal landslides propagation: theory and case studies

    OpenAIRE

    P. Mazzanti; F. Bozzano

    2009-01-01

    Coastal and subaqueous landslides can be very dangerous phenomena since they are characterised by the additional risk of induced tsunamis, unlike their completely-subaerial counterparts. Numerical modelling of landslides propagation is a key step in forecasting the consequences of landslides. In this paper, a novel approach named Equivalent Fluid/Equivalent Medium (EFEM) has been developed. It adapts common numerical models and software that were originally designed for subaerial landslides i...

  1. Equivalent properties of single event burnout induced by different sources

    International Nuclear Information System (INIS)

    Yang Shiyu; Cao Zhou; Da Daoan; Xue Yuxiong

    2009-01-01

The experimental results of single event burnout induced by heavy ions and ²⁵²Cf fission fragments in power MOSFET devices have been investigated. It is concluded that the characteristics of single event burnout induced by ²⁵²Cf fission fragments are consistent with those induced by heavy ions. The power MOSFET in the 'turn-off' state is more susceptible to single event burnout than it is in the 'turn-on' state. The thresholds of the drain-source voltage for single event burnout induced by 173 MeV bromine ions and ²⁵²Cf fission fragments are close to each other, and the burnout cross section is sensitive to variation of the drain-source voltage above the threshold of single event burnout. In addition, the current waveforms of single event burnouts induced by different sources are similar. Different power MOSFET devices may have different probabilities for the occurrence of single event burnout. (authors)

  2. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    Science.gov (United States)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approximately 54% of that seen in the EnBS, while for snow courses the PBS RMSE was approximately 79% of the EnBS value. Sensitivity tests show relative insensitivity for both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity for the EnBS to the mean prior precipitation input, especially in the case where significant prior biases exist.
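A particle batch smoother step of the kind described above can be sketched generically: each particle's predicted observation trajectory is weighted by the likelihood of the whole batch of observations at once. The Gaussian error model and the toy numbers below are assumptions for illustration, not the authors' configuration:

```python
import math

def pbs_weights(particles, obs_batch, obs_std):
    """Particle batch smoother importance weights: each particle's
    predicted trajectory is scored against the full observation batch
    under iid Gaussian errors, then the weights are normalised."""
    log_w = []
    for traj in particles:
        ll = sum(-0.5 * ((z - h) / obs_std) ** 2
                 for z, h in zip(obs_batch, traj))
        log_w.append(ll)
    m = max(log_w)                       # subtract max for stability
    w = [math.exp(l - m) for l in log_w]
    s = sum(w)
    return [x / s for x in w]

# Toy: 3 particles each predicting a 2-observation trajectory;
# the observations lie near particle index 1.
particles = [[0.2, 0.3], [1.0, 1.1], [2.0, 2.2]]
obs = [1.05, 1.0]
w = pbs_weights(particles, obs, obs_std=0.2)
print(w)  # weight of particle 1 dominates
```

The posterior SWE estimate would then be the weight-averaged state of the particles, which is what lets the batch smoother correct prior biases over a whole season at once.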

  3. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    Science.gov (United States)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.

  4. On conjugate points and the Leitmann equivalent problem approach

    NARCIS (Netherlands)

    Wagener, F.O.O.

    2009-01-01

    This article extends the Leitmann equivalence method to a class of problems featuring conjugate points. The class is characterised by the requirement that the set of indifference points of a given problem forms a finite stratification.

  5. An Algebraic Approach to Knowledge Bases Informational Equivalence

    OpenAIRE

    Plotkin, B.; Plotkin, T.

    2003-01-01

    In this paper we study the notion of knowledge from the positions of universal algebra and algebraic logic. We consider first order knowledge which is based on first order logic. We define categories of knowledge and knowledge bases. These notions are defined for the fixed subject of knowledge. The key notion of informational equivalence of two knowledge bases is introduced. We use the idea of equivalence of categories in this definition. We prove that for finite models there is a clear way t...

  6. Analysis of Equivalent Circuits for Cells: A Fractional Calculus Approach

    Directory of Open Access Journals (Sweden)

    Bernal-Alvarado J.

    2012-07-01

Fractional-order systems are considered by many mathematicians to be the systems of the twenty-first century. The reason is that nature has proved to be best described in terms of systems composed of fractional-order derivatives. This emerging area of research is slowly gaining more strength in engineering, biochemistry, medicine, biophysics, among others. This paper presents a frequency-domain analysis of equivalent circuits for cellular systems described by equations of integer and fractional order; it also carries out a time-domain analysis in order to display the memory capacity of fractional systems. It presents equivalent models based on fractional differential equations, along with simulations comparing integer and fractional order.
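One widely used fractional-order equivalent circuit for biological tissue is the Cole model, in which the ideal capacitor is replaced by a constant-phase element so that the impedance involves a fractional power of jω. The model form is standard, but the parameter values below are illustrative assumptions:

```python
def cole_impedance(omega, R_inf, R0, tau, alpha):
    """Cole-type impedance Z(w) = R_inf + (R0 - R_inf)/(1 + (j*w*tau)**alpha).
    alpha = 1 recovers the ordinary integer-order RC relaxation;
    0 < alpha < 1 models the fractional (constant-phase) behaviour."""
    return R_inf + (R0 - R_inf) / (1.0 + (1j * omega * tau) ** alpha)

# Limiting behaviour: Z -> R0 at very low frequency, Z -> R_inf at
# very high frequency, regardless of the fractional order alpha.
print(abs(cole_impedance(1e-6, R_inf=10.0, R0=100.0, tau=1e-3, alpha=0.8)))
print(abs(cole_impedance(1e9,  R_inf=10.0, R0=100.0, tau=1e-3, alpha=0.8)))
```

Sweeping ω between these limits with α < 1 produces the depressed semicircle in the complex-impedance plane that integer-order RC circuits cannot reproduce.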

  7. Approaches to the treatment of zero equivalence in a bilingual ...

    African Journals Online (AJOL)

    Then follows a detailed discussion of lemmata expressing pragmatic meaning in the SL, lemmata with lexico-grammatical, grammatical and lexical differences between the SL and the TL as well as lemmata with a number of SL senses included under one sense in the ESD. In the ESD, the problem of zero equivalence is ...

  8. Equivalent circuit of a coaxial-line-based nozzleless microwave 915 MHz plasma source

    International Nuclear Information System (INIS)

    Miotk, R; Jasiński, M; Mizeraczyk, J

    2016-01-01

This paper presents a new concept of an equivalent circuit of a microwave plasma source (MPS) used for gas treatment. The novelty of the presented investigation is the use of the Weissfloch circuit as an equivalent of the area of waveguide discontinuity in the MPS that results from the inserted coaxial-line structure. Furthermore, it is in this area that the microwave discharge is generated. Verification of the proposed method was carried out. The proposed equivalent circuit enabled calculating the MPS tuning characteristics and comparing them with those measured experimentally. This process allowed us to determine the impedance Z_P of the plasma in the MPS. (paper)

  9. The equivalent energy method: an engineering approach to fracture

    International Nuclear Information System (INIS)

    Witt, F.J.

    1981-01-01

The equivalent energy method for elastic-plastic fracture evaluations was developed around 1970 for determining realistic engineering estimates of the maximum load-displacement or stress-strain conditions for fracture of flawed structures. The basic principles were summarized, but the supporting experimental data, most of which were obtained after the method was proposed, have never been collated. This paper restates the original bases more explicitly and presents the validating data in graphical form. Extensive references are given. The volumetric energy ratio, a modelling parameter encompassing both size and temperature, is the fundamental parameter of the equivalent energy method. It is demonstrated that, in an engineering sense, the volumetric energy ratio is a unique material characteristic for a steel, much like a material property except that size must be taken into account. With this as a proposition, the basic formula of the equivalent energy method is derived. Sufficient information is presented so that investigators and analysts may judge the viability and applicability of the method to their areas of interest. (author)

  10. The vibrational source strength descriptor using power input from equivalent forces: a simulation study

    DEFF Research Database (Denmark)

    Laugesen, Søren; Ohlrich, Mogens

    1994-01-01

    Simple, yet reliable methods for the approximate determination of the vibratory power supplied by the internal excitation forces of a given vibrational source are of great interest. One such method, which relies on the application of a number of "equivalent forces" and measurements of the mean squared velocity on either the source or the receiving structure, is studied in this paper by means of computer simulations. The study considers a simple system of two flexural beams coupled via a pair of springs. The investigation shows that a relatively small number of equivalent forces suffice…

  11. Fatigue Equivalent Stress State Approach Validation in Non-conservative Criteria: a Comparative Study

    Directory of Open Access Journals (Sweden)

    Kévin Martial Tsapi Tchoupou

    This paper is concerned with fatigue prediction models for estimating the multiaxial fatigue limit. An equivalent loading approach with zero out-of-phase angles, intended for fatigue limit evaluation under multiaxial loading, is used. Based on experimental data found in the literature, the equivalent stress is validated in the Crossland and Sines criteria and its predictions are compared to those of existing multiaxial fatigue criteria; results over 87 experimental items show that the equivalent stress approach is very efficient.

  12. On the equivalence of generalized least-squares approaches to the evaluation of measurement comparisons

    Science.gov (United States)

    Koo, A.; Clare, J. F.

    2012-06-01

    Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.
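The GLS machinery referred to in this abstract can be illustrated on a toy comparison. The link pattern, data and covariance below are invented for illustration (they are not from any CIPM comparison); the model is the common one in which each measurement is an artefact value plus a per-laboratory bias.

```python
# Minimal GLS sketch for a measurement comparison: three labs each measure
# two overlapping artefacts; the model is y = X b + e with covariance V.
import numpy as np

# Columns of X: [artefact1, artefact2, bias_lab2, bias_lab3]
# (lab 1's bias is fixed to zero to make the system identifiable).
X = np.array([
    [1, 0, 0, 0],  # lab 1 measures artefact 1
    [0, 1, 0, 0],  # lab 1 measures artefact 2
    [1, 0, 1, 0],  # lab 2 measures artefact 1
    [0, 1, 1, 0],  # lab 2 measures artefact 2
    [1, 0, 0, 1],  # lab 3 measures artefact 1
    [0, 1, 0, 1],  # lab 3 measures artefact 2
], dtype=float)
y = np.array([10.0, 20.0, 10.2, 20.2, 9.9, 19.9])  # invented measurements
V = np.eye(6) * 0.01          # measurement covariance (uncorrelated here)

# GLS estimate: b = (X^T V^-1 X)^-1 X^T V^-1 y, with parameter covariance cov_b
Vi = np.linalg.inv(V)
cov_b = np.linalg.inv(X.T @ Vi @ X)
b = cov_b @ X.T @ Vi @ y      # [artefact1, artefact2, bias_lab2, bias_lab3]
```

The "linking" condition mentioned in the abstract corresponds here to X having full column rank: every artefact is reachable from every lab through a chain of overlapping measurements.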

  13. Array of piezoelectric energy harvesting by the equivalent impedance approach

    International Nuclear Information System (INIS)

    Lien, I C; Shu, Y C

    2012-01-01

    This article proposes to use the idea of equivalent impedance to investigate the electrical response of an array of piezoelectric oscillators endowed with distinct energy harvesting circuits. Three interface electronics systems are considered, including standard AC/DC and parallel/series-SSHI (synchronized switch harvesting on inductor) circuits. Various forms of equivalent load impedance are analytically obtained for the different interfaces. The steady-state response of an array system is then shown to be determined by the matrix formulation of generalized Ohm's law, whose impedance matrix is explicitly expressed in terms of the load impedance. A model problem is proposed for evaluating the power harvesting ability under various conditions. It is shown first that harvested power is increased dramatically for the case of small deviation in the system parameters. On the other hand, if the deviation in mass is relatively large, the behaviour changes from a power-boosting mode to a wideband mode. In particular, the parallel-SSHI array system exhibits much more significant bandwidth improvement than the other two cases. Surprisingly, the series-SSHI array system shows the worst electrical response. This observation is contrary to our previous finding that an SSHI technique outperforms the standard technique in the case of a single piezoelectric energy harvester, and the explanation is under investigation. (fast track communication)
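The matrix "generalized Ohm's law" step described above can be sketched numerically. All impedance and source values below are invented for illustration; the paper's SSHI interface electronics are not modeled, only the generic solve-and-evaluate pattern.

```python
# Steady-state phasor currents of a two-oscillator array sharing an
# equivalent load impedance: solve V = Z @ I, then evaluate load power.
import numpy as np

omega = 2 * np.pi * 100.0            # drive frequency (rad/s), assumed
Z_load = 50.0 - 1j / (omega * 1e-6)  # equivalent load impedance (R-C), assumed

# Impedance matrix: branch impedances on the diagonal, the shared load
# impedance coupling the branches off-diagonal (illustrative values).
Z = np.array([[120.0 + 30.0j + Z_load, Z_load],
              [Z_load, 110.0 - 20.0j + Z_load]])
V = np.array([1.0, 0.9 * np.exp(1j * 0.1)])   # source voltage phasors

I = np.linalg.solve(Z, V)                     # steady-state branch currents
P_load = 0.5 * Z_load.real * abs(I.sum())**2  # average power in the shared load
```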

  14. An equivalent fluid/equivalent medium approach for the numerical simulation of coastal landslides propagation: theory and case studies

    Directory of Open Access Journals (Sweden)

    P. Mazzanti

    2009-11-01

    Coastal and subaqueous landslides can be very dangerous phenomena since, unlike their completely subaerial counterparts, they carry the additional risk of induced tsunamis. Numerical modelling of landslide propagation is a key step in forecasting the consequences of landslides. In this paper, a novel approach named Equivalent Fluid/Equivalent Medium (EFEM) has been developed. It adapts common numerical models and software originally designed for subaerial landslides to simulate the propagation of combined subaerial-subaqueous and completely subaqueous landslides. Drag and buoyancy forces, the loss of energy at the landslide-water impact and peculiar mechanisms like hydroplaning can be suitably simulated by this approach; furthermore, the change in properties of the landslide mass encountered at the transition from the subaerial to the submerged environment can be taken into account. The approach has been tested by modelling two documented coastal landslides (a debris flow and a rock slide) at Lake Albano using the DAN-W code. The results achieved from the back-analyses demonstrate the efficacy of the approach in simulating the propagation of different types of coastal landslides.

  15. An Equivalent Source Method for Modelling the Lithospheric Magnetic Field Using Satellite and Airborne Magnetic Data

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three component vector field … for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model … Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available…

  16. Water-equivalent solid sources prepared by means of two distinct methods

    International Nuclear Information System (INIS)

    Koskinas, Marina F.; Yamazaki, Ione M.; Potiens Junior, Ademar

    2014-01-01

    The Nuclear Metrology Laboratory at IPEN is developing radioactive water-equivalent solid sources prepared from an aqueous solution of acrylamide, using two distinct polymerization methods. One is polymerization by a high dose of 60Co irradiation; in the other, the solid polyacrylamide matrix is obtained from an aqueous solution composed of acrylamide, catalyzers and an aliquot of a radionuclide. The sources have been prepared in cylindrical geometry. In this paper, the study of the distribution of radioactive material in the solid sources prepared by both methods is presented. (author)

  17. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in each iteration, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
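As a rough illustration of the iterative re-weighting idea that CMOSS builds on, the sketch below implements a plain FOCUSS-style iteration on a made-up 5-sensor, 20-source toy problem. The neighbor-weight refinement of CMOSS and the realistic head model are not reproduced here; only the basic weight-from-previous-solution loop is shown.

```python
# Schematic FOCUSS-style iteration for an underdetermined problem L x = b.
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((5, 20))     # toy "lead field": 5 sensors, 20 sources
x_true = np.zeros(20)
x_true[3], x_true[11] = 2.0, -1.5    # two active sources
b = L @ x_true                       # noiseless sensor data

x = np.ones(20)                      # non-informative initial estimate
for _ in range(50):
    W = np.diag(np.abs(x) + 1e-12)   # weight built from the previous solution
    LW = L @ W
    # regularized minimum-norm solution of (L W) z = b, then x = W z
    z = LW.T @ np.linalg.solve(LW @ LW.T + 1e-10 * np.eye(5), b)
    x = W @ z

# The re-weighting typically concentrates energy on a few sources,
# yielding a sparse solution that still fits the data.
residual = np.linalg.norm(L @ x - b)
```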

  18. Water equivalency evaluation of PRESAGE® dosimeters for dosimetry of Cs-137 and Ir-192 brachytherapy sources

    Science.gov (United States)

    Gorjiara, Tina; Hill, Robin; Kuncic, Zdenka; Baldock, Clive

    2010-11-01

    A major challenge in brachytherapy dosimetry is the measurement of steep dose gradients. This can be achieved with a high spatial resolution three-dimensional (3D) dosimeter. PRESAGE® is a polyurethane-based dosimeter suitable for 3D dosimetry. Since an ideal dosimeter is radiologically water equivalent, we have investigated the relative dose response of three different PRESAGE® formulations, two with a lower chloride and bromide content than the original one, for Cs-137 and Ir-192 brachytherapy sources. Doses were calculated using the EGSnrc Monte Carlo package. Our results indicate that PRESAGE® dosimeters are suitable for relative dose measurements of Cs-137 and Ir-192 brachytherapy sources and that the lower halogen content PRESAGE® dosimeters are more water equivalent than the original formulation.

  19. Energy and exergy prices of various energy sources along with their CO2 equivalents

    International Nuclear Information System (INIS)

    Caliskan, Hakan; Hepbasli, Arif

    2010-01-01

    Various types of energy sources are used in the residential and industrial sectors, and choosing among them is important. When an energy source is selected, its CO2 equivalent and its energy and exergy prices must be known for a sustainable future and for establishing energy policies. These prices are based on their energy values. Exergy analysis has recently been applied to a wide range of energy-related systems; thus, obtaining exergy values has become more meaningful for long-term planning. In this study, energy and exergy prices of various energy sources, along with their CO2 equivalents, are calculated and compared for residential and industrial applications in Turkey. The energy sources considered include coal, diesel oil, electricity, fuel oil, liquid petroleum gas (LPG), natural gas, heat pumps and geothermal energy, and their prices were obtained over a period of 18 months, from January 2008 to June 2009. For both the residential and industrial sectors, minimum energy and exergy prices were found for ground source heat pumps, while maximum energy and exergy prices belong to LPG.

  20. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    According to the relevant technical standards (e.g., IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost, a tool should be used that is developed independently of the code generator. For this purpose, ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  1. Evaluation of the Inductive Coupling between Equivalent Emission Sources of Components

    Directory of Open Access Journals (Sweden)

    Moisés Ferber

    2012-01-01

    The electromagnetic interference between electronic systems or between their components influences the overall performance. It is thus important to model these interferences in order to optimize the positions of the components of an electronic system. In this paper, a methodology to construct an equivalent model of magnetic field sources is proposed. It is based on the multipole expansion, and it represents the radiated emission of generic structures in a spherical reference frame. Experimental results for different kinds of sources are presented, illustrating our method.

  2. Lexicographic Approaches to Sense Disambiguation in Monolingual Dictionaries and Equivalent Differentiation in Bilingual Dictionaries

    Directory of Open Access Journals (Sweden)

    Marjeta Vrbinc

    2011-05-01

    The article discusses methods of sense disambiguation in monolingual dictionaries and equivalent differentiation in bilingual dictionaries. In current dictionaries, sense disambiguation and equivalent differentiation are presented in the form of specifiers or glosses, collocators or indications of context, (domain) labels, and metalinguistic and encyclopaedic information. Each method is presented and illustrated by actual samples of dictionary articles taken from monolingual and bilingual dictionaries. The last part of the article is devoted to equivalent differentiation in bilingual decoding dictionaries. In bilingual dictionaries, equivalent differentiation is often needed to describe the lack of agreement between the source language (SL) and target language (TL). The article concludes by stating that equivalent differentiation should be written in the native language of the target audience, and that sense indicators in a monolingual learner's dictionary should be words that the users are most familiar with.

  3. Atmospheric polychlorinated biphenyls in Indian cities: Levels, emission sources and toxicity equivalents

    International Nuclear Information System (INIS)

    Chakraborty, Paromita; Zhang, Gan; Eckhardt, Sabine; Li, Jun; Breivik, Knut; Lam, Paul K.S.; Tanabe, Shinsuke; Jones, Kevin C.

    2013-01-01

    Atmospheric concentrations of polychlorinated biphenyls (PCBs) were measured on a diurnal basis by active air sampling from December 2006 to February 2007 in seven major cities from the northern (New Delhi and Agra), eastern (Kolkata), western (Mumbai and Goa) and southern (Chennai and Bangalore) parts of India. The average concentration of Σ25 PCBs in the Indian atmosphere was 4460 (±2200) pg/m³, with a dominance of congeners with 4-7 chlorine atoms. Model results (HYSPLIT, FLEXPART) indicate that the source areas are likely confined to local or regional proximity. Results from the FLEXPART model show that existing emission inventories cannot explain the high concentrations observed for PCB-28. Electronic waste, ship breaking activities and dumped solid waste are identified as possible sources of PCBs in India. Σ25 PCB concentrations for each city showed significant linear correlations with toxicity equivalence (TEQ) and neurotoxic equivalence (NEQ) values. Highlights: •Unlike the decreasing trend of PCBs in the United States and European countries, high levels of PCBs remain in the Indian atmosphere. •Existing emission inventories cannot explain the high PCB concentrations in the Indian atmosphere. •Electronic waste recycling, ship dismantling and open burning of municipal solid waste are implicated as potential sources. -- Measurement of atmospheric polychlorinated biphenyls in seven major Indian cities

  4. Recent equivalent source methods for quantifying airborne and structureborne sound transfer

    NARCIS (Netherlands)

    Verheij, J.W.

    1992-01-01

    Characteristically, noise reduction in technical products like road vehicles, ships, aircraft and machines is complicated by the multitude of primary sources and transfer paths. Examples of well-known approaches for transfer path investigations are selective shielding, mechanical uncoupling, …

  5. Convergence rates in constrained Tikhonov regularization: equivalence of projected source conditions and variational inequalities

    International Nuclear Information System (INIS)

    Flemming, Jens; Hofmann, Bernd

    2011-01-01

    In this paper, we highlight the role of variational inequalities in obtaining convergence rates in Tikhonov regularization of nonlinear ill-posed problems with convex penalty functionals under convexity constraints in Banach spaces. Variational inequalities are able to cover solution smoothness and the structure of nonlinearity in a uniform manner, not only for unconstrained but, as we indicate, also for constrained Tikhonov regularization. In this context, we extend the concept of projected source conditions, already known in Hilbert spaces, to Banach spaces, and we show in the main theorem that such projected source conditions are to some extent equivalent to certain variational inequalities. The derived variational inequalities immediately yield convergence rates measured by Bregman distances.
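For readers unfamiliar with the unconstrained Hilbert-space setting that this work generalizes, a minimal linear Tikhonov sketch may help. The matrix, data and regularization parameter below are illustrative choices; the paper's setting (nonlinear operators, convex constraints, Banach spaces) is far beyond this example.

```python
# Tikhonov regularization of an ill-conditioned linear problem A x = y:
# minimize ||A x - y||^2 + alpha ||x||^2, solved via the normal equations.
import numpy as np

n = 8
# Hilbert matrix: a classic ill-conditioned test operator.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
y = A @ x_true                       # noiseless data for simplicity

alpha = 1e-8                         # regularization parameter (assumed)
x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
residual = np.linalg.norm(A @ x_alpha - y)
```

The regularization trades a small data misfit for stability; convergence-rate theory, such as the source conditions discussed above, quantifies how fast x_alpha approaches x_true as alpha and the data noise go to zero.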

  6. An equivalent circuit approach to the modelling of the dynamics of dye sensitized solar cells

    DEFF Research Database (Denmark)

    Bay, L.; West, K.

    2005-01-01

    A model that can be used to interpret the response of a dye-sensitized photo electrode to intensity-modulated light (intensity modulated voltage spectroscopy, IMVS, and intensity modulated photo-current spectroscopy, IMPS) is presented. The model is based on an equivalent circuit approach involving…

  7. An Abstract Approach to Process Equivalence and a Coinduction Principle for Traces

    DEFF Research Database (Denmark)

    Klin, Bartek

    2004-01-01

    An abstract coalgebraic approach to well-structured relations on processes is presented, based on notions of tests and test suites. Preorders and equivalences on processes are modelled as coalgebras for behaviour endofunctors lifted to a category of test suites. The general framework is specialized…

  8. On the equivalent static loads approach for dynamic response structural optimization

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2014-01-01

    The equivalent static loads algorithm is an increasingly popular approach to solving dynamic response structural optimization problems. The algorithm is based on solving a sequence of related static response structural optimization problems with the same objective and constraint functions as the original problem. The optimization theoretical foundation of the algorithm is mainly developed in Park and Kang (J Optim Theory Appl 118(1):191-200, 2003). In that article it is shown, for a certain class of problems, that if the equivalent static loads algorithm terminates then the KKT conditions…

  9. Modeling and simulation of equivalent circuits in description of biological systems - a fractional calculus approach

    Directory of Open Access Journals (Sweden)

    José Francisco Gómez Aguilar

    2012-07-01

    Using the fractional calculus approach, we present the Laplace analysis of an equivalent electrical circuit for a multilayered system, which includes distributed elements of the Cole model type. The Bode plots are obtained from the numerical simulation of the corresponding transfer functions, using arbitrary electrical parameters in order to illustrate the methodology. A numerical Laplace transform is used for the simulation of the fractional differential equations. From the results shown in the analysis, we obtain the formula for the equivalent electrical circuit of a simple spectrum, such as that generated by a real sample of blood tissue, and the corresponding Nyquist diagrams. In addition to maintaining consistency in the adjusted electrical parameters, the advantage of using fractional differential equations in the study of impedance spectra is made clear in the analysis used to determine a compact formula for the equivalent electrical circuit, which includes the Cole model and a simple RC model as special cases.
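The Cole model named above as a special case can be sketched numerically. The single-dispersion form and all parameter values below are illustrative assumptions, not the paper's multilayer circuit; the fractional order alpha is what distinguishes the Cole element from an ideal RC.

```python
# Cole-model impedance Z(w) = R_inf + (R0 - R_inf) / (1 + (j w tau)^alpha);
# alpha = 1 recovers the ideal-RC semicircle in the Nyquist plane.
import numpy as np

def cole_impedance(omega, R0=1000.0, R_inf=100.0, tau=1e-3, alpha=0.8):
    """Complex impedance of a single Cole dispersion (illustrative values)."""
    return R_inf + (R0 - R_inf) / (1.0 + (1j * omega * tau) ** alpha)

omegas = np.logspace(0, 6, 7)        # angular frequencies to evaluate
Z = cole_impedance(omegas)
# Low-frequency limit tends to R0; high-frequency limit tends to R_inf.
```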

  10. Fundamental-mode sources in approach to critical experiments

    International Nuclear Information System (INIS)

    Goda, J.; Busch, R.

    2000-01-01

    An equivalent fundamental-mode source is an imaginary source that is distributed identically in space, energy, and angle to the fundamental-mode fission source. Therefore, it produces the same neutron multiplication as the fundamental-mode fission source. Even if two source distributions produce the same number of spontaneous fission neutrons, they will not necessarily contribute equally toward the multiplication of a given system, so a method of comparing the relative importance of source distributions is needed. A factor, denoted g* and defined as the ratio of the fixed-source multiplication to the fundamental-mode multiplication, is used to convert a given source strength to its equivalent fundamental-mode source strength. This factor is of interest to criticality safety as it relates to the 1/M method of approach to critical. Ideally, a plot of 1/M versus k_eff is linear. However, since 1/M = (1 - k_eff)/g*, the plot will be linear only if g* is constant with k_eff. When g* increases with k_eff, the 1/M plot is said to be conservative because the critical mass is underestimated. However, it is possible for g* to decrease with k_eff, yielding a nonconservative 1/M plot. A better understanding of g* would help predict whether a given approach to critical will be conservative or nonconservative. The equivalent fundamental-mode source strength g*S can be predicted by experiment. The experimental method was tested on the XIX-1 core of the Fast Critical Assembly at the Japan Atomic Energy Research Institute. The results showed a 30% difference between measured and calculated values. However, the XIX-1 reactor had significant intermediate-energy neutrons, whose presence may have made the cross-section set used for the predicted values less than ideal for the system.
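The 1/M relation quoted in this abstract is simple to evaluate numerically. The g* behaviours below are hypothetical illustrations, not measured values from the XIX-1 experiment.

```python
# Evaluate 1/M = (1 - k_eff) / g* for a constant and a k_eff-dependent g*.
def inverse_multiplication(k_eff, g_star):
    """Inverse multiplication for a fixed source of importance factor g*."""
    return (1.0 - k_eff) / g_star

k_values = [0.5, 0.7, 0.9, 0.99]

# Constant g*: the 1/M plot is exactly linear in k_eff (the ideal case).
const = [inverse_multiplication(k, 1.0) for k in k_values]

# A hypothetical g* that increases with k_eff: the 1/M curve departs from
# the linear case (per the abstract, this is the conservative situation).
rising = [inverse_multiplication(k, 1.0 + 0.5 * k) for k in k_values]
```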

  11. Limitations of the toxic equivalency factor (TEF) approach for risk assessment of halogenated aromatic hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Safe, S. [Texas A and M Univ., College Station, TX (United States). Dept. of Veterinary Physiology and Pharmacology

    1995-12-31

    2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) and related halogenated aromatic hydrocarbons (HAHs) are present as complex mixtures of polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs) and biphenyls (PCBs) in most environmental matrices. Risk management of these mixtures utilizes the toxic equivalency factor (TEF) approach, in which the TCDD (dioxin) toxic equivalents of a mixture are a summation of the congener concentrations C_i times their TEF_i (potency relative to TCDD): TEQ_mixture = Σ C_i × TEF_i. TEQs are determined only for those HAHs which are aryl hydrocarbon (Ah) receptor agonists, and this approach assumes that the toxic or biochemical effects of individual compounds in a mixture are additive. Several in vivo and in vitro laboratory and field studies with different HAH mixtures have been utilized to validate the TEF approach. For some responses, the calculated toxicities of PCDD/PCDF and PCB mixtures predict the observed toxic potencies. However, for fetal cleft palate and immunotoxicity in mice, nonadditive (antagonistic) responses are observed using complex PCB mixtures or binary mixtures containing an Ah receptor agonist with 2,2′,4,4′,5,5′-hexachlorobiphenyl (PCB 153). The potential interactive effects of PCBs and other dietary Ah receptor antagonists suggest that the TEF approach to risk management of HAHs requires further refinement and should be used selectively.
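The TEF summation defined in this abstract reduces to a one-line computation. The concentrations and TEF values below are made-up illustrative numbers, not regulatory values.

```python
# TEQ of a mixture per the additivity assumption: TEQ = sum(C_i * TEF_i)
# over the Ah receptor agonist congeners.
def toxic_equivalents(mixture):
    """mixture: iterable of (concentration, TEF) pairs for each congener."""
    return sum(c * tef for c, tef in mixture)

sample = [
    (0.5, 1.0),    # TCDD itself: TEF = 1 by definition
    (2.0, 0.1),    # a PCDF congener (hypothetical concentration and TEF)
    (10.0, 0.001), # a PCB congener (hypothetical concentration and TEF)
]
teq = toxic_equivalents(sample)   # 0.5 + 0.2 + 0.01 = 0.71
```

The criticisms in the abstract concern exactly this additivity assumption: antagonistic congeners such as PCB 153 can make the measured mixture potency deviate from the computed TEQ.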

  12. Analysis and synthesis of bianisotropic metasurfaces by using analytical approach based on equivalent parameters

    Science.gov (United States)

    Danaeifar, Mohammad; Granpayeh, Nosrat

    2018-03-01

    An analytical method is presented to analyze and synthesize bianisotropic metasurfaces. The equivalent parameters of metasurfaces in terms of meta-atom properties and other specifications of metasurfaces are derived. These parameters are related to electric, magnetic, and electromagnetic/magnetoelectric dipole moments of the bianisotropic media, and they can simplify the analysis of complicated and multilayer structures. A metasurface of split ring resonators is studied as an example demonstrating the proposed method. The optical properties of the meta-atom are explored, and the calculated polarizabilities are applied to find the reflection coefficient and the equivalent parameters of the metasurface. Finally, a structure consisting of two metasurfaces of the split ring resonators is provided, and the proposed analytical method is applied to derive the reflection coefficient. The validity of this analytical approach is verified by full-wave simulations which demonstrate good accuracy of the equivalent parameter method. This method can be used in the analysis and synthesis of bianisotropic metasurfaces with different materials and in different frequency ranges by considering electric, magnetic, and electromagnetic/magnetoelectric dipole moments.

  13. The approach of toxic and radiological risk equivalence in UF6 transport

    International Nuclear Information System (INIS)

    Ringot, C.; Hamard, J.

    1989-01-01

    After a brief description of the present situation concerning the safety of UF6 transport and the new regulation project being developed under the auspices of the IAEA, the equivalence of radioactive and chemical risks is considered for UF6 transport regulations. The concept of low specific activity being ill-suited to a toxic gas, a quantity limit of material, T2 (equivalent to A2 for radioactive materials), is proposed for packagings which do not withstand accident conditions (9 m drop; 800 °C, 30-minute fire environment). It is proposed that this limit be chosen as the release rate which is acceptable after the IAEA tests for packages having a capacity higher than T2 kilograms. Fire being considered the most severe situation for the toxic risk, different possible scenarios are described. This approach of risk equivalence leads to the requirement that the packaging resist an 800 °C, 30-minute fire and that under this condition the release be less than T2. The problem of the behaviour of the shell and the openings (in particular the valve) is raised in this context. [fr]

  14. Combinatorial theory of the semiclassical evaluation of transport moments. I. Equivalence with the random matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Berkolaiko, G., E-mail: berko@math.tamu.edu [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J., E-mail: Jack.Kuipers@physik.uni-regensburg.de [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)

    2013-11-15

    To study electronic transport through chaotic quantum dots, there are two main theoretical approaches. One involves substituting the quantum system with a random scattering matrix and performing appropriate ensemble averaging. The other treats the transport in the semiclassical approximation and studies correlations among sets of classical trajectories. There are established evaluation procedures within the semiclassical evaluation that, for several linear and nonlinear transport moments to which they were applied, have always resulted in the agreement with random matrix predictions. We prove that this agreement is universal: any semiclassical evaluation within the accepted procedures is equivalent to the evaluation within random matrix theory. The equivalence is shown by developing a combinatorial interpretation of the trajectory sets as ribbon graphs (maps) with certain properties and exhibiting systematic cancellations among their contributions. Remaining trajectory sets can be identified with primitive (palindromic) factorisations whose number gives the coefficients in the corresponding expansion of the moments of random matrices. The equivalence is proved for systems with and without time reversal symmetry.

  15. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    Science.gov (United States)

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  16. SOLID: A computer model for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil

    International Nuclear Information System (INIS)

    Chen, S.Y.; LePoire, D.; Yu, C.; Schafetz, S.; Mehta, P.

    1991-01-01

    The SOLID computer model was developed for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil. It is designed to assess external doses under various exposure scenarios that may be encountered in environmental restoration programs. The model's four major functional features address (1) dose versus source depth in soil, (2) shielding by clean cover soil, (3) area of contamination, and (4) nonuniform distribution of sources. The model is also capable of adjusting doses when there are variations in soil densities for both source and cover soils. The model is supported by a database of approximately 500 radionuclides. 4 refs

  17. 78 FR 73079 - Dividend Equivalents From Sources Within the United States

    Science.gov (United States)

    2013-12-05

    ... is of a type which does not have the potential for tax avoidance. On January 23, 2012, the Federal... regulations under section 1441 to require a withholding agent to withhold tax owed with respect to a dividend... amount of a dividend equivalent, was unduly harsh because the withholding agent remained liable for tax...

  18. Calculation of dose distribution for 252Cf fission neutron source in tissue equivalent phantoms using Monte Carlo method

    International Nuclear Information System (INIS)

    Ji Gang; Guo Yong; Luo Yisheng; Zhang Wenzhong

    2001-01-01

Objective: To provide useful parameters for neutron radiotherapy, the authors present the results of a Monte Carlo simulation study investigating the dosimetric characteristics of linear 252Cf fission neutron sources. Methods: A 252Cf fission source and a tissue-equivalent phantom were modeled. The neutron and gamma doses were calculated using a Monte Carlo code. Results: The neutron and gamma doses at several positions for 252Cf in phantoms made of materials equivalent to water, blood, muscle, skin, bone and lung were calculated. Conclusion: The Monte Carlo results were compared with measured data and reference values. According to the calculation, using a water phantom to represent local tissues such as muscle, blood and skin is reasonable for calculating and measuring the dose distribution of 252Cf
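The essence of such a Monte Carlo calculation is sampling particle free paths from an exponential distribution. A toy sketch follows (single energy group, uncollided transmission only, invented attenuation coefficient; not the coupled neutron/gamma transport of the paper):

```python
import math
import random

def uncollided_fraction(mu_cm, r_cm, n=100_000, seed=1):
    """Toy Monte Carlo: fraction of source particles that travel a
    distance r in a homogeneous medium without a collision.

    Free paths are sampled from the exponential distribution with total
    macroscopic cross section mu_cm (1/cm); purely didactic.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(rng.random()) / mu_cm > r_cm)
    return hits / n

# agrees with the analytic answer exp(-mu * r)
print(uncollided_fraction(0.1, 10.0), math.exp(-1.0))
```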

  19. Functional equivalency inferred from "authoritative sources" in networks of homologous proteins.

    Science.gov (United States)

    Natarajan, Shreedhar; Jakobsson, Eric

    2009-06-12

A one-on-one mapping of protein functionality across different species is a critical component of comparative analysis. This paper presents a heuristic algorithm for discovering the Most Likely Functional Counterparts (MoLFunCs) of a protein, based on simple concepts from network theory. A key feature of our algorithm is utilization of the user's knowledge to assign high confidence to selected functional identifications. We show use of the algorithm to retrieve functional equivalents for 7 membrane proteins, from an exploration of almost 40 genomes from multiple online resources. We verify the functional equivalency of our dataset through a series of tests that include sequence, structure and function comparisons. Comparison is made to the OMA methodology, which also identifies one-on-one mappings between proteins from different species. Based on that comparison, we believe that incorporation of the user's knowledge as a key aspect of the technique adds value to purely statistical formal methods.

  20. Classic electrocardiogram-based and mobile technology derived approaches to heart rate variability are not equivalent.

    Science.gov (United States)

    Guzik, Przemyslaw; Piekos, Caroline; Pierog, Olivia; Fenech, Naiman; Krauze, Tomasz; Piskorski, Jaroslaw; Wykretowicz, Andrzej

    2018-05-01

We compared classic ECG-derived and mobile approaches to heart rate variability (HRV) measurement. 29 young, healthy adult volunteers underwent simultaneous recording of heart rate with an ECG and a chest heart rate monitor at supine rest, during mental stress, and during active standing. The mean RR interval, the Standard Deviation of Normal-to-Normal (SDNN) RR intervals, and the Root Mean Square of Successive Differences (RMSSD) between RR intervals were computed in 168 pairs of 5-minute epochs by in-house software on a PC (sinus beats only) and by the mobile application "ELITEHRV" on a smartphone (no beat type identification). ECG analysis showed that 33.9% of the recordings contained at least one non-sinus beat or artefact; the mobile app did not report this. The mean RR intervals were significantly longer (p = 0.0378), while SDNN (p = 0.0001) and RMSSD (p = 0.0199) were smaller for the mobile approach. Measures of identical HRV parameters by ECG-based and mobile approaches are not equivalent. Copyright © 2018 Elsevier B.V. All rights reserved.
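The time-domain quantities compared in this study have standard definitions; a minimal sketch (assuming the RR series has already been reduced to sinus-only intervals, the editing step the mobile app skipped):

```python
import math

def hrv_time_domain(rr_ms):
    """Mean RR, SDNN and RMSSD from a list of RR intervals in milliseconds.

    Assumes the series has been screened down to normal (sinus) intervals;
    SDNN uses the sample standard deviation (n - 1 denominator).
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_rr, sdnn, rmssd

print(hrv_time_domain([800, 810, 790, 805]))
```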

  1. Equivalent electrical network model approach applied to a double acting low temperature differential Stirling engine

    International Nuclear Information System (INIS)

    Formosa, Fabien; Badel, Adrien; Lottin, Jacques

    2014-01-01

Highlights: • An equivalent electrical network model of a Stirling engine is proposed. • The model is applied to a membrane low temperature differential double acting Stirling engine. • The operating conditions (self-startup and steady state behavior) are defined. • An experimental engine is presented and tested. • The model is validated against experimental results. - Abstract: This work presents a network model to simulate the periodic behavior of a double acting free piston type Stirling engine. Each component of the engine is considered independently and its equivalent electrical circuit is derived. When assembled into a global electrical network, a global model of the engine is established. Its steady-state behavior can be obtained from the analysis of the transfer function for one phase, from the piston to the expansion chamber. It is then possible to simulate the dynamics (steady state stroke and operating frequency) as well as the thermodynamic performance (output power and efficiency) for given mean pressure, heat source and heat sink temperatures. In particular, the motion amplitude can be determined from the spring-mass properties of the moving parts and the main nonlinear effects, which are taken into account in the model. The thermodynamic features of the model have been validated against the classical isothermal Schmidt analysis for a given stroke. A three-phase low temperature differential double acting free membrane architecture has been built and tested. The experimental results are compared with the model and satisfactory agreement is obtained. The stroke and operating frequency are predicted with less than 2% error, whereas the output power discrepancy is about 30%. Finally, some optimization routes are suggested to improve the design and maximize performance, aiming at waste heat recovery applications
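The mechanical-electrical analogy underlying such network models can be illustrated with a toy calculation. Under one common convention (force ↔ voltage, velocity ↔ current), a moving mass maps to an inductance and a mechanical compliance 1/k to a capacitance, so a piston branch has the familiar L-C resonance. The numbers below are illustrative, not the engine of the paper:

```python
import math

def resonant_frequency_hz(mass_kg, stiffness_N_m):
    """Undamped natural frequency of one spring-mass (L-C) branch:
    f = sqrt(k/m) / (2*pi), identical in the mechanical and the
    equivalent electrical (1/sqrt(L*C)) pictures."""
    return math.sqrt(stiffness_N_m / mass_kg) / (2.0 * math.pi)

# hypothetical 50 g membrane-piston on a 2000 N/m effective spring
print(resonant_frequency_hz(0.05, 2000.0))
```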

  2. A sparse equivalent source method for near-field acoustic holography

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    and experimental results on a classical guitar and on a highly reactive dipolelike source are presented. C-ESM is valid beyond the conventional sampling limits, making wideband reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does...

  3. A New Source Biasing Approach in ADVANTG

    International Nuclear Information System (INIS)

    Bevill, Aaron M.; Mosher, Scott W.

    2012-01-01

The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF-format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF-format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. 
A stratified-random sampling approach in ADVANTG is under
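The fairness issue with cell-rejection sampling comes down to the acceptance fraction of the bounding region, which is what any post-processing correction of the mean history weight must account for. A hedged sketch with a made-up geometry (a disc inside its bounding square; not ADVANTG's actual correction procedure):

```python
import random

def acceptance_fraction(in_cell, sample_bounding, n=100_000, seed=7):
    """Monte Carlo estimate of the fraction of bounding-region samples
    that land inside the source cell.  `in_cell` and `sample_bounding`
    are user-supplied callables for the (toy) geometry."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if in_cell(sample_bounding(rng)))
    return hits / n

# toy geometry: unit disc inside the [-1, 1]^2 bounding square
frac = acceptance_fraction(
    lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0,
    lambda rng: (rng.uniform(-1, 1), rng.uniform(-1, 1)))
print(frac)  # close to pi/4
```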

  4. Energetical and multiscale approaches for the definition of an equivalent stress for magneto-elastic couplings

    International Nuclear Information System (INIS)

    Hubert, Olivier; Daniel, Laurent

    2011-01-01

A main limitation of most models describing the effect of stress on magnetic behavior is that they are restricted to uniaxial - tensile or compressive - stress. Nevertheless, stress is multiaxial in most industrial applications. One idea to overcome this strong limitation is to define a fictitious uniaxial stress, the equivalent stress, that would change the magnetic behavior in the same manner as the actual multiaxial stress. A first definition of equivalent stress, called the deviatoric equivalent stress, is proposed. It is based on an equivalence in magneto-elastic energy. This formulation is first derived for isotropic materials under specific assumptions. An extension to orthotropic media under disoriented magneto-mechanical loading is made. A new equivalent stress expression, called the generalized equivalent stress, is then proposed. It is based on an equivalence in magnetization. Inverse identification of the equivalent stress is made possible by a strong simplification of the description of the material, seen as an assembly of elementary magnetic domains. It is shown that this second proposal is a generalization of the deviatoric expression. The equivalent stress proposals are compared to former proposals and validated using experimental results obtained on an iron-cobalt sheet submitted to biaxial mechanical loading. These results are compared to the predictions of the equivalent stress formulations. The generalized equivalent stress is shown to be a tool able to foresee the magnetic behavior of a large panel of materials submitted to multiaxial stress. - Research highlights: → Classical magneto-elastic models are restricted to uniaxial stress. → Stress is multiaxial in most industrial applications. → Proposals of deviatoric and generalized equivalent stresses - multidomain modeling. → Experimental validation using an iron-cobalt sheet submitted to biaxial loading. → Generalization of former proposals and modeling of

  5. A Neurocomputational Approach to Trained and Transitive Relations in Equivalence Classes

    Directory of Open Access Journals (Sweden)

    Ángel E. Tovar

    2017-10-01

A stimulus class can be composed of perceptually different but functionally equivalent stimuli. The relations between the stimuli that are grouped in a class can be learned or derived from other stimulus relations. If stimulus A is equivalent to B, and B is equivalent to C, then the equivalence between A and C can be derived without explicit training. In this work we propose, with a neurocomputational model, a basic learning mechanism for the formation of equivalence. We also describe how the relatedness between the members of an equivalence class is developed for both trained and derived stimulus relations. Three classic studies on stimulus equivalence are simulated covering typical and atypical populations as well as nodal distance effects. This model shows a mechanism by which certain stimulus associations are selectively strengthened even when they are not co-presented in the environment. This model links the field of equivalence classes to accounts of Hebbian learning and categorization, and points to the pertinence of modeling stimulus equivalence to explore the effect of variations in training protocols.
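A minimal toy of the mechanism described above: A-B and B-C are trained by co-presentation, and an A-C relation emerges through the shared B node without ever being trained. The saturating update rule and all parameters are invented for illustration and are far simpler than the paper's network:

```python
def derived_strength(w_ab, w_bc):
    """Relatedness of the untrained A-C pair, read out through the
    mediating B node (product of the two trained weights)."""
    return w_ab * w_bc

w_ab, w_bc = 0.0, 0.0
for _ in range(20):              # co-presentations strengthen each trained pair
    w_ab += 0.1 * (1.0 - w_ab)   # simple saturating Hebbian-style update
    w_bc += 0.1 * (1.0 - w_bc)

# A and C were never co-presented, yet their relatedness is nonzero,
# and weaker than the trained relations (a nodal-distance-like effect)
print(w_ab, derived_strength(w_ab, w_bc))
```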

  6. Quantifying undesired parallel components in Thévenin-equivalent acoustic source parameters

    DEFF Research Database (Denmark)

    Nørgaard, Kren Rahbek; Neely, Stephen T.; Rasetshwane, Daniel M.

    2018-01-01

    in the source parameters. Such parallel components can result from, e.g., a leak in the ear tip or improperly accounting for evanescent modes, and introduce errors into subsequent measurements of impedance and reflectance. This paper proposes a set of additional error metrics that are capable of detecting...

  7. Near field acoustic holography based on the equivalent source method and pressure-velocity transducers

    DEFF Research Database (Denmark)

    Zhang, Y.-B.; Chen, X.-Z.; Jacobsen, Finn

    2009-01-01

    The advantage of using the normal component of the particle velocity rather than the sound pressure in the hologram plane as the input of conventional spatial Fourier transform based near field acoustic holography (NAH) and also as the input of the statistically optimized variant of NAH has recen...... generated by sources on the two sides of the hologram plane is also examined....

  8. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2015-01-01

    it was at its lowest altitude and solar activity was quiet. All three components of the vector field data are utilized at all available latitudes. Estimates of core and large-scale magnetospheric sources are removed from the measurements using the CHAOS-4 model. Quiet-time and night-side data selection criteria...

  9. 78 FR 73128 - Dividend Equivalents From Sources Within the United States

    Science.gov (United States)

    2013-12-05

    ... a type which does not have the potential for tax avoidance'' and (2) other payments that are... tax avoidance. 2. 2012 Section 871(m) Regulations The 2012 section 871(m) regulations provided... seven-factor approach to defining a specified NPC would not accurately identify tax avoidance...

  10. Thermal neutron equivalent doses assessment around KFUPM neutron source storage area using NTDs

    Energy Technology Data Exchange (ETDEWEB)

    Abu-Jarad, F.; Fazal-ur-Rehman; Al-Haddad, M.N.; Al-Jarrallah, M.I.; Nassar, R

    2002-07-01

Area passive neutron dosemeters based on nuclear track detectors (NTDs) have been used for 13 days to assess accumulated low doses of thermal neutrons around the neutron source storage area of the King Fahd University of Petroleum and Minerals (KFUPM). A further aim of this study was to check the effectiveness of the shielding of the storage area. NTDs were mounted with a boron converter on their surface as one compressed unit. The converter is a lithium tetraborate (Li{sub 2}B{sub 4}O{sub 7}) layer for thermal neutron detection via the {sup 10}B(n,{alpha}){sup 7}Li and {sup 6}Li(n,{alpha}){sup 3}H nuclear reactions. The area passive dosemeters were installed at 26 different locations around the source storage area and adjacent rooms. The calibration factor for the NTD-based area passive neutron dosemeters was found to be 8.3 alpha tracks.cm{sup -2}.{mu}Sv{sup -1} using active Snoopy neutron dosemeters in the KFUPM neutron irradiation facility. The results show the variation of accumulated dose with location around the storage area. The dose rates varied from as low as 40 nSv.h{sup -1} up to 11 {mu}Sv.h{sup -1}. The study indicates that the area passive neutron dosemeter was able to detect accumulated doses as low as 40 nSv.h{sup -1}, which could not be detected with the available active neutron dosemeters. The results also indicate that additional shielding is required to bring the dose rates down to background level. The present investigation suggests extending this study to find the contribution of doses from fast neutrons around the neutron source storage area using NTDs via proton recoil. The significance of this passive technique is that it is highly sensitive and does not require any electronics or power supplies, as is the case with active systems. (author)
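The calibration factor quoted above converts directly between track density and accumulated dose; a small sketch using the paper's value of 8.3 tracks.cm{sup -2}.{mu}Sv{sup -1} and its 13-day exposure (the example track density is invented):

```python
def thermal_dose_uSv(track_density_cm2, cal_tracks_per_cm2_per_uSv=8.3):
    """Accumulated thermal-neutron dose from an NTD alpha-track density,
    using the paper's calibration factor of 8.3 tracks/(cm^2 uSv)."""
    return track_density_cm2 / cal_tracks_per_cm2_per_uSv

def dose_rate_nSv_per_h(dose_uSv, exposure_days):
    """Average dose rate over the exposure period, in nSv/h."""
    return dose_uSv * 1000.0 / (exposure_days * 24.0)

# e.g. a detector that accumulated 1000 tracks/cm^2 over the 13-day exposure
d = thermal_dose_uSv(1000.0)
print(d, dose_rate_nSv_per_h(d, 13.0))
```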

  11. Determination of equivalent breast phantoms for different age groups of Taiwanese women: An experimental approach

    International Nuclear Information System (INIS)

    Dong, Shang-Lung; Chu, Tieh-Chi; Lin, Yung-Chien; Lan, Gong-Yau; Yeh, Yu-Hsiu; Chen, Sharon; Chuang, Keh-Shih

    2011-01-01

Purpose: The polymethylmethacrylate (PMMA) slab is one of the most commonly used phantoms for studying breast dosimetry in mammography. The purpose of this study was to evaluate the equivalence between exposure factors acquired from PMMA slabs and patient cases of different age groups of Taiwanese women in mammography. Methods: This study included 3910 craniocaudal screen/film mammograms on Taiwanese women acquired on one mammographic unit. The tube loading, compressed breast thickness (CBT), compression force, tube voltage, and target/filter combination for each mammogram were collected for all patients. The glandularity and the equivalent thickness of PMMA were determined for each breast using the exposure factors of the breast in combination with experimental measurements from breast-tissue-equivalent attenuation slabs. Equivalent thicknesses of PMMA to the breasts of Taiwanese women were then estimated. Results: The average ± standard deviation CBT and breast glandularity in this study were 4.2 ± 1.0 cm and 54% ± 23%, respectively. The average equivalent PMMA thickness was 4.0 ± 0.7 cm. PMMA slabs producing equivalent exposure factors as in the breasts of Taiwanese women were determined for the age groups 30-49 yr and 50-69 yr. For the 4-cm PMMA slab, the CBT and glandularity values of the equivalent breast were 4.1 cm and 65%, respectively, for the age group 30-49 yr and 4.4 cm and 44%, respectively, for the age group 50-69 yr. Conclusions: The average thickness of PMMA slabs producing the same exposure factors as observed in a large group of Taiwanese women is less than that reported for American women. The results from this study can provide useful information for determining a suitable thickness of PMMA for mammographic dose surveys in Taiwan. The equivalence of PMMA slabs and the breasts of Taiwanese women is provided to allow average glandular dose assessment in clinical practice.

  13. The toxic and radiological risk equivalence approach in UF6 transport

    International Nuclear Information System (INIS)

    Ringot, C.; Hamard, J.

    1988-12-01

After a brief description of the safety in transport of UF6, we discuss the equivalence of the radioactive and chemical risks in UF6 transport regulations. As the concept of low specific activity appears to be ill-suited to a toxic gas, we propose a quantity-of-material limit designated T2 (equivalent to A2 for radioactive substances) for packagings unable to withstand accident conditions (9 m drop, 800 °C fire environment for 30 minutes). It is proposed that this limit be chosen for the amount of release acceptable after the IAEA tests. Different possible scenarios are described, with fire assumed to be the most severe toxic risk situation

  14. Assessment of fast and thermal neutron ambient dose equivalents around the KFUPM neutron source storage area using nuclear track detectors

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-ur-Rehman [Physics Department, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia)]. E-mail: fazalr@kfupm.edu.sa; Al-Jarallah, M.I. [Physics Department, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia); Abu-Jarad, F. [Radiation Protection Unit, Environmental Protection Department, Saudi Aramco, P. O. Box 13027, Dhahran 31311 (Saudi Arabia); Qureshi, M.A. [Center for Applied Physical Sciences, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia)

    2005-11-15

A set of five {sup 241}Am-Be neutron sources is utilized in research and teaching at King Fahd University of Petroleum and Minerals (KFUPM). Three of these sources have an activity of 16 Ci each and the other two of 5 Ci each. A well-shielded storage area was designed for these sources. The aim of the study is to check the effectiveness of the shielding of the KFUPM neutron source storage area. Poly allyl diglycol carbonate (PADC) nuclear track detector (NTD) based fast and thermal neutron area passive dosimeters have been utilized side by side for 33 days to assess accumulated low ambient dose equivalents of fast and thermal neutrons at 30 different locations around the source storage area and adjacent rooms. Fast neutron measurements were carried out using bare NTDs, which register fast neutrons through proton recoils in the detector material. NTDs were mounted with lithium tetraborate (Li{sub 2}B{sub 4}O{sub 7}) converters on their surfaces for thermal neutron detection via the {sup 10}B(n,{alpha}){sup 7}Li and {sup 6}Li(n,{alpha}){sup 3}H nuclear reactions. The calibration factors of the NTD-based fast and thermal neutron area passive dosimeters were determined using thermoluminescent dosimeters (TLDs) with and without a polyethylene moderator. The calibration factors for the fast and thermal neutron area passive dosimeters were found to be 1.33 proton tracks cm{sup -2} {mu}Sv{sup -1} and 31.5 alpha tracks cm{sup -2} {mu}Sv{sup -1}, respectively. The results show variations of accumulated dose with location around the storage area. The fast neutron dose equivalent rates varied from as low as 182 nSv h{sup -1} up to 10.4 {mu}Sv h{sup -1}, whereas those for thermal neutrons ranged from as low as 7 nSv h{sup -1} up to 9.3 {mu}Sv h{sup -1}. 
The study indicates that the area passive neutron dosimeter was able to detect dose rates as low as 7 and 182nSvh{sup -1} from accumulated dose for thermal and fast neutrons, respectively, which were not possible to detect with the available active neutron

  15. Simulation study of a magnetocardiogram based on a virtual heart model: effect of a cardiac equivalent source and a volume conductor

    International Nuclear Information System (INIS)

    Shou Guo-Fa; Xia Ling; Dai Ling; Ma Ping; Tang Fa-Kuan

    2011-01-01

In this paper, we present a magnetocardiogram (MCG) simulation study using the boundary element method (BEM), based on a virtual heart model and a realistic human volume conductor model. The contributions of different cardiac equivalent source models and volume conductor models to the MCG are investigated comprehensively. The single dipole source model, the multiple dipole source model and the equivalent double layer (EDL) source model are analysed and compared as cardiac equivalent source models. Meanwhile, the effect of the volume conductor model on the MCG combined with these cardiac equivalent sources is investigated. The simulation results demonstrate that part of the cardiac electrophysiological information is missed when only the single dipole source is used, while the EDL source is a good option for MCG simulation, and the effect of the volume conductor is smallest for the EDL source. Therefore, the EDL source is suitable for the study of MCG forward and inverse problems, and more attention should be paid to it in future MCG studies. (general)

  16. Gyrokinetic equivalence

    International Nuclear Information System (INIS)

    Parra, Felix I; Catto, Peter J

    2009-01-01

    We compare two different derivations of the gyrokinetic equation: the Hamiltonian approach in Dubin D H E et al (1983 Phys. Fluids 26 3524) and the recursive methodology in Parra F I and Catto P J (2008 Plasma Phys. Control. Fusion 50 065014). We prove that both approaches yield the same result at least to second order in a Larmor radius over macroscopic length expansion. There are subtle differences in the definitions of some of the functions that need to be taken into account to prove the equivalence.

  17. Solar system and equivalence principle constraints on f(R) gravity by the chameleon approach

    International Nuclear Information System (INIS)

    Capozziello, Salvatore; Tsujikawa, Shinji

    2008-01-01

We study constraints on f(R) dark energy models from solar system experiments combined with experiments on the violation of the equivalence principle. When the mass of an equivalent scalar field degree of freedom is heavy in a region with high density, a spherically symmetric body has a thin shell, so that the effective coupling of the fifth force is suppressed through a chameleon mechanism. We place experimental bounds on the cosmologically viable models recently proposed in the literature that have the asymptotic form f(R) = R - λR_c[1 - (R_c/R)^(2n)] in the regime R >> R_c. From the solar system constraints on the post-Newtonian parameter γ, we derive the bound n > 0.5, whereas the constraints from the violations of the weak and strong equivalence principles give the bound n > 0.9. This allows a possibility to find a deviation from the Λ-cold dark matter (ΛCDM) cosmological model. For the model f(R) = R - λR_c(R/R_c)^p with 0 < p < 1, the constraint is p < 10^(-10), which shows that this model is hardly distinguishable from the ΛCDM cosmology

  18. Equivalence between the real-time Feynman histories and the quantum-shutter approaches for the 'passage time' in tunneling

    International Nuclear Information System (INIS)

    Garcia-Calderon, Gaston; Villavicencio, Jorge; Yamada, Norifumi

    2003-01-01

We show the equivalence of the functions G_p(t) and |Ψ(d,t)|^2 for the 'passage time' in tunneling. The former, obtained within the framework of the real-time Feynman histories approach to the tunneling time problem, uses the Gell-Mann and Hartle decoherence functional, and the latter involves an exact analytical solution to the time-dependent Schroedinger equation for cutoff initial waves

  19. Equivalence of the spherical and deformed shell-model approach to intruder states

    International Nuclear Information System (INIS)

    Heyde, K.; Coster, C. de; Ryckebusch, J.; Waroquier, M.

    1989-01-01

We point out that the description of intruder states incorporating particle-hole (p-h) excitations across a closed shell in the spherical shell model and a description starting from the Nilsson model are equivalent. We furthermore indicate that the major part of the nucleon-nucleon interaction responsible for the low excitation energy of intruder states comes from a two-body proton-neutron quadrupole interaction in the spherical shell model. In the deformed shell model, quadrupole binding energy is gained mainly through the one-body part of the potential. (orig.)

  20. Nonintersecting string model and graphical approach: equivalence with a Potts model

    International Nuclear Information System (INIS)

    Perk, J.H.H.; Wu, F.Y.

    1986-01-01

Using a graphical method the authors establish the exact equivalence of the partition function of a q-state nonintersecting string (NIS) model on an arbitrary planar, even-valenced lattice with that of a q^2-state Potts model on a relaxed lattice. The NIS model considered in this paper is one in which the vertex weights are expressible as sums of those of basic vertex types, and the resulting Potts model generally has multispin interactions. For the square and Kagome lattices this leads to the equivalence of a staggered NIS model with Potts models with anisotropic pair interactions, indicating that these NIS models have a first-order transition for q greater than 2. For the triangular lattice the NIS model turns out to be the five-vertex model of Wu and Lin and it relates to a Potts model with two- and three-site interactions. The most general model the authors discuss is an oriented NIS model which contains the six-vertex model and the NIS models of Stroganov and Schultz as special cases

  1. Equivalence among three alternative approaches to estimating live tree carbon stocks in the eastern United States

    Science.gov (United States)

    Coeli M. Hoover; James E. Smith

    2017-01-01

    Assessments of forest carbon are available via multiple alternate tools or applications and are in use to address various regulatory and reporting requirements. The various approaches to making such estimates may or may not be entirely comparable. Knowing how the estimates produced by some commonly used approaches vary across forest types and regions allows users of...

  2. On the equivalence of two approaches in the exciton-polariton theory

    International Nuclear Information System (INIS)

    Ha Vinh Tan; Nguyen Toan Thang

    1983-02-01

The polariton effect in optical processes involving photons with energies near that of an exciton is investigated via the Bogolubov diagonalization and the Green function approaches, in a simple model of a direct-band-gap semiconductor with an electric-dipole-allowed transition. To take into account the non-resonant terms of the interaction Hamiltonian of the photon-exciton system, the Green function approach derived by Nguyen Van Hieu is presented using a Green's function matrix technique analogous to that suggested by Nambu in the theory of superconductivity. It is shown that with a suitable choice of the phase factors the renormalization constants are equal to the diagonalization coefficients. The dispersion of polaritons and the matrix elements of processes involving polaritons are calculated identically by both methods. However, the Green function approach has the advantage of including the damping effect of polaritons. (author)

  3. An equivalent frequency approach for determining non-linear effects on pre-tensioned-cable cross-braced structures

    Science.gov (United States)

    Giaccu, Gian Felice

    2018-05-01

    Pre-tensioned cable braces are widely used as bracing systems in various structural typologies. This technology is fundamentally utilized for stiffening purposes in the case of steel and timber structures. The pre-stressing force imparted to the braces provides the system with a considerable increment in stiffness. On the other hand, the pre-tensioning force in the braces must be properly calibrated in order to satisfactorily meet both serviceability and ultimate limit states. Dynamic properties of these systems are however affected by non-linear behavior due to potential slackening of the pre-tensioned brace. In recent years the author has been working on a similar problem regarding the non-linear response of cables in cable-stayed bridges and braced structures. In the present paper a displacement-based approach is used to examine the non-linear behavior of a building system. The methodology operates through linearization and yields an equivalent linearized frequency that approximately characterizes, mode by mode, the dynamic behavior of the system. The equivalent frequency depends on the mechanical characteristics of the system, the pre-tensioning level assigned to the braces, and a characteristic vibration amplitude. The proposed approach can be used as a simplified technique, capable of linearizing the response of structural systems characterized by non-linearity induced by the slackening of pre-tensioned braces.
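The amplitude-dependent equivalent frequency idea can be illustrated with a first-harmonic (harmonic balance) linearization of a brace that behaves linearly while taut and loses stiffness once slack. This is a generic sketch under assumed stiffness and pretension values, not the author's exact formulation:

```python
import math

def brace_force(u, k=1.0e7, n0=5.0e4):
    """Restoring force (N) of a pre-tensioned brace: linear with stiffness k
    while taut; once the tension would drop below zero (slack), the force
    saturates at -n0 relative to the pre-tensioned state."""
    u_slack = -n0 / k          # displacement at which the tension vanishes
    return k * u if u > u_slack else -n0

def equivalent_stiffness(amp, k=1.0e7, n0=5.0e4, n=2000):
    """First-harmonic equivalent stiffness for u(t) = amp*sin(theta):
    k_eq = (1/(pi*amp)) * integral of f(amp*sin) * sin over one cycle."""
    acc = 0.0
    for i in range(n):
        th = 2.0 * math.pi * (i + 0.5) / n
        acc += brace_force(amp * math.sin(th), k, n0) * math.sin(th)
    return acc * (2.0 * math.pi / n) / (math.pi * amp)

# Small amplitude: the brace never slackens, so k_eq equals the taut stiffness.
k_small = equivalent_stiffness(amp=1e-3)
# Large amplitude: slackening softens the equivalent stiffness.
k_large = equivalent_stiffness(amp=0.05)
```

An equivalent modal frequency then follows as f_eq = sqrt(k_eq/m)/(2π) for an assumed modal mass m, evaluated amplitude by amplitude.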

  4. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult

  5. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.
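The bookkeeping that SFACTOR tabulates can be sketched with a standard MIRD-type relation, S = k · Σᵢ yᵢ Eᵢ φᵢ Qᵢ / m, summing over emission types. The following is a minimal illustration, not SFACTOR itself; the emission data and organ mass are hypothetical:

```python
# Unit conversions for S in rem per microcurie-day of cumulated activity.
DIS_PER_UCI_DAY = 3.7e4 * 86400          # disintegrations per μCi-day
ERG_PER_MEV = 1.602e-6
ERG_PER_G_PER_RAD = 100.0

def s_factor(emissions, target_mass_g):
    """Dose equivalent S (rem per μCi-day) delivered to a target organ.

    emissions: list of tuples (yield per disintegration, energy in MeV,
    absorbed fraction in the target, quality factor)."""
    erg_per_dis = sum(y * e * af * q * ERG_PER_MEV
                      for (y, e, af, q) in emissions)
    return DIS_PER_UCI_DAY * erg_per_dis / (ERG_PER_G_PER_RAD * target_mass_g)

# Hypothetical pure beta emitter fully absorbed in a 310 g organ:
# mean beta energy 0.2 MeV, yield 1 per disintegration, Q = 1.
s = s_factor([(1.0, 0.2, 1.0, 1.0)], 310.0)
```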

  6. Innovative approach toward new generation sources

    International Nuclear Information System (INIS)

    Watanabe, N.

    2001-01-01

    The world neutron community needs more neutrons and more opportunities at much less expense. A worldwide neutron network proposed here would be a future dream of the community. A neutron source able to satisfy such requirements is the innovative neutron source. A new FFAG synchrotron would be the best candidate to realize such a network, consisting of various spallation sources ranging from kW to MW in beam power; this accelerator would offer many advantages. Next are the target issues: how to accept beam powers beyond 5 MW. Some thoughts are discussed here. Various moderators are discussed in connection with the requirements of the instruments proposed for JSNS, mainly focussed on the performance and utilization of a coupled hydrogen moderator with an optimized premoderator, aiming at more efficient use of neutrons. A new idea for pulse shaping, 'mechanical poisoning', is proposed. At an existing spallation source the number of instruments is much smaller than at a reactor. In order to install as many instruments as possible, beam extraction and branching methods become very important. However, even at a reactor, where mainly monochromatic neutrons are used, the neutron-intensity losses due to beam multiplexing are significant. This problem becomes more serious in the case of a pulsed source, where polychromatic beams are often required. This issue is also discussed. (author)

  7. Innovative approach toward new generation sources

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, N. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-03-01

    The world neutron community needs more neutrons and more opportunities at much less expense. A worldwide neutron network proposed here would be a future dream of the community. A neutron source able to satisfy such requirements is the innovative neutron source. A new FFAG synchrotron would be the best candidate to realize such a network, consisting of various spallation sources ranging from kW to MW in beam power; this accelerator would offer many advantages. Next are the target issues: how to accept beam powers beyond 5 MW. Some thoughts are discussed here. Various moderators are discussed in connection with the requirements of the instruments proposed for JSNS, mainly focussed on the performance and utilization of a coupled hydrogen moderator with an optimized premoderator, aiming at more efficient use of neutrons. A new idea for pulse shaping, 'mechanical poisoning', is proposed. At an existing spallation source the number of instruments is much smaller than at a reactor. In order to install as many instruments as possible, beam extraction and branching methods become very important. However, even at a reactor, where mainly monochromatic neutrons are used, the neutron-intensity losses due to beam multiplexing are significant. This problem becomes more serious in the case of a pulsed source, where polychromatic beams are often required. This issue is also discussed. (author)

  8. Open Source Approach to Project Management Tools

    Directory of Open Access Journals (Sweden)

    Romeo MARGEA

    2011-01-01

    Full Text Available Managing large projects involving different groups of people and complex tasks can be challenging. The solution is to use project management software, which allows more efficient management of projects. However, well-known project management systems can be costly and may require expensive custom servers. Even if free software is not as complex as Microsoft Project, it is worth noting that not all projects need all the features, amenities and power of such systems. There are free and open source software alternatives that meet the needs of most projects, and that allow Web access based on different platforms and locations. A starting stage in adopting OSS in-house is finding and identifying existing open source solutions. In this paper we present an overview of Open Source Project Management Software (OSPMS), based on articles, reviews, books and developers' web sites, covering those that seem to be the most popular software in this category.

  9. Super-Positioning of Voltage Sources for Fast Assessment of Wide-Area Thévenin Equivalents

    DEFF Research Database (Denmark)

    Møller, Jakob Glarbo; Jóhannsson, Hjörtur; Østergaard, Jacob

    2017-01-01

    and parallelized for shared memory multiprocessing. The proposed algorithm is tested on a collection of large test systems and performance is found to be significantly better than the reference method. The algorithm will thereby facilitate a speed-up of methods relying on Thévenin equivalent representation...

  10. Evaluation of Lithium-Ion Battery Equivalent Circuit Models for State of Charge Estimation by an Experimental Approach

    Directory of Open Access Journals (Sweden)

    Jinxin Fan

    2011-03-01

    To improve the use of lithium-ion batteries in electric vehicle (EV) applications, evaluations and comparisons of different equivalent circuit models are presented in this paper. Based on an analysis of the traditional lithium-ion battery equivalent circuit models such as the Rint, RC, Thevenin and PNGV models, an improved Thevenin model, named the dual polarization (DP) model, is put forward by adding an extra RC branch to simulate the electrochemical polarization and concentration polarization separately. The model parameters are identified with a genetic algorithm, which is used to find the optimal time constants of the model, and the experimental data from a Hybrid Pulse Power Characterization (HPPC) test on a LiMn2O4 battery module. Evaluations of the five models are carried out from the point of view of dynamic performance and state of charge (SoC) estimation. The dynamic performances of the five models are obtained by conducting the Dynamic Stress Test (DST), and the accuracy of SoC estimation with the Robust Extended Kalman Filter (REKF) approach is determined by performing a Federal Urban Driving Schedules (FUDS) experiment. By comparison, the DP model has the best dynamic performance and provides the most accurate SoC estimation. Finally, sensitivity to different SoC initial values is investigated for REKF-based SoC estimation with the DP model. It is clear that the errors resulting from the SoC initial value are significantly reduced and the estimate converges to the true SoC within an acceptable error.
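A minimal sketch of the dual polarization (DP) model's two-RC dynamics described above. The parameter values are hypothetical placeholders, not the values identified in the paper:

```python
import math

def dp_model_step(u1, u2, i, dt, r0, r1, c1, r2, c2, ocv):
    """One time step of the dual polarization (DP) equivalent circuit:
    two RC branches represent electrochemical and concentration
    polarization. Uses the exact discretization of dU/dt = -U/(RC) + I/C
    for constant current over the step. Returns (u1, u2, v_terminal)."""
    a1 = math.exp(-dt / (r1 * c1))
    a2 = math.exp(-dt / (r2 * c2))
    u1 = a1 * u1 + r1 * (1.0 - a1) * i
    u2 = a2 * u2 + r2 * (1.0 - a2) * i
    v = ocv - i * r0 - u1 - u2          # terminal voltage under discharge
    return u1, u2, v

# Hypothetical parameters (not the paper's identified values):
params = dict(r0=0.01, r1=0.015, c1=2000.0, r2=0.03, c2=60000.0, ocv=3.8)
u1 = u2 = 0.0
for _ in range(100):                    # 100 s of a 10 A discharge pulse
    u1, u2, v = dp_model_step(u1, u2, 10.0, 1.0, **params)
```

The two time constants (here 30 s and 1800 s) are what let the DP model separate the fast electrochemical transient from the slow concentration transient that a single-RC Thevenin model lumps together.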

  11. OPEN SOURCE APPROACH TO URBAN GROWTH SIMULATION

    Directory of Open Access Journals (Sweden)

    A. Petrasova

    2016-06-01

    Spatial patterns of land use change due to urbanization and its impact on the landscape are the subject of ongoing research. Urban growth scenario simulation is a powerful tool for exploring these impacts and empowering planners to make informed decisions. We present FUTURES (FUTure Urban-Regional Environment Simulation), a patch-based, stochastic, multi-level land change modeling framework, as a case showing how what was once a closed and inaccessible model benefited from integration with open source GIS. We will describe our motivation for releasing this project as open source and the advantages of integrating it with GRASS GIS, a free, libre and open source GIS and research platform for the geospatial domain. GRASS GIS provides efficient libraries for FUTURES model development as well as standard GIS tools and a graphical user interface for model users. Releasing FUTURES as a GRASS GIS add-on simplifies the distribution of FUTURES across all main operating systems and ensures the maintainability of our project in the future. We will describe FUTURES integration into GRASS GIS and demonstrate its usage on a case study in Asheville, North Carolina. The developed dataset and tutorial for this case study enable researchers to experiment with the model, explore its potential or even modify the model for their applications.

  12. Direct lateral approach to lumbar fusion is a biomechanically equivalent alternative to the anterior approach: an in vitro study.

    Science.gov (United States)

    Laws, Cory J; Coughlin, Dezba G; Lotz, Jeffrey C; Serhan, Hassan A; Hu, Serena S

    2012-05-01

    A human cadaveric biomechanical study of lumbar mobility before and after fusion and with or without supplemental instrumentation for 5 instrumentation configurations. To determine the biomechanical differences between anterior lumbar interbody fusion (ALIF) and direct lateral interbody fusion (DLIF) with and without supplementary instrumentation. Some prior studies have compared various surgical approaches using the same interbody device whereas others have investigated the stabilizing effect of supplemental instrumentation. No published studies have performed a side-by-side comparison of standard and minimally invasive techniques with and without supplemental instrumentation. Eight human lumbosacral specimens (16 motion segments) were tested in each of the 5 following configurations: (1) intact, (2) with ALIF or DLIF cage, (3) with cage plus stabilizing plate, (4) with cage plus unilateral pedicle screw fixation (PSF), and (5) with cage plus bilateral PSF. Pure moments were applied to induce specimen flexion, extension, lateral bending, and axial rotation. Three-dimensional kinematic responses were measured and used to calculate range of motion, stiffness, and neutral zone. Compared to the intact state, DLIF significantly reduced range of motion in flexion, extension, and lateral bending (P = 0.0117, P = 0.0015, P = 0.0031). Supplemental instrumentation significantly increased fused-specimen stiffness for both DLIF and ALIF groups. For the ALIF group, bilateral PSF increased stiffness relative to stand-alone cage by 455% in flexion and 317% in lateral bending (P = 0.0009 and P < 0.0001). The plate increased ALIF group stiffness by 211% in extension and 256% in axial rotation (P = 0.0467 and P = 0.0303). For the DLIF group, bilateral PSF increased stiffness by 350% in flexion and 222% in extension (P < 0.0001 and P = 0.0008). No differences were observed between ALIF and DLIF groups supplemented with bilateral PSF. 
Our data support that the direct lateral approach

  13. Software development an open source approach

    CERN Document Server

    Tucker, Allen; de Silva, Chamindra

    2011-01-01

    Overview and Motivation; Software; Free and Open Source Software (FOSS); Two Case Studies; Working with a Project Team; Key FOSS Activities; Client-Oriented vs. Community-Oriented Projects; Working on a Client-Oriented Project; Joining a Community-Oriented Project; Using Project Tools; Collaboration Tools; Code Management Tools; Run-Time System Constraints; Software Architecture; Architectural Patterns; Layers, Cohesion, and Coupling; Security; Concurrency, Race Conditions, and Deadlocks; Working with Code; Bad Smells and Metrics; Refactoring; Testing; Debugging; Extending the Software for a New Project; Developing the D

  14. A numerical approach for assessing effects of shear on equivalent permeability and nonlinear flow characteristics of 2-D fracture networks

    Science.gov (United States)

    Liu, Richeng; Li, Bo; Jiang, Yujing; Yu, Liyuan

    2018-01-01

    Hydro-mechanical properties of rock fractures are core issues for many geoscience and geo-engineering practices. Previous experimental and numerical studies have revealed that shear processes can greatly enhance the permeability of single rock fractures, yet the shear effects on the hydraulic properties of fractured rock masses have received little attention. In most previous fracture network models, single fractures are typically presumed to be formed by parallel plates and flow is presumed to obey the cubic law. However, related studies have suggested that the parallel plate model cannot realistically represent the surface characteristics of natural rock fractures, and the relationship between flow rate and pressure drop is no longer linear at sufficiently large Reynolds numbers. In the present study, a numerical approach was established to assess the effects of shear on the hydraulic properties of 2-D discrete fracture networks (DFNs) in both linear and nonlinear regimes. DFNs considering fracture surface roughness and variation of aperture in space were generated using an originally developed code, DFNGEN. Numerical simulations solving the Navier-Stokes equations were performed to simulate fluid flow through these DFNs. A fracture that cuts through each model was sheared, and by varying the shear and normal displacements, the effects of shear on the equivalent permeability and nonlinear flow characteristics of the DFNs were estimated. The results show that the critical condition for quantifying the transition from a linear to a nonlinear flow regime is 10^-4 < J, where J is the hydraulic gradient. When the fluid flow is in a linear regime, shear can reduce the equivalent permeability in the orientation perpendicular to the sheared fracture by as much as 53.86% when J = 1, shear displacement Ds = 7 mm, and normal displacement Dn = 1 mm. By fitting the calculated results, a mathematical expression for δ2 is established to help choose proper governing equations when
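The parallel-plate cubic law that the abstract contrasts with rough-fracture behavior reads Q = (w·e³/12μ)·(ΔP/L). A minimal sketch, assuming water viscosity; the cubic dependence on aperture e is what makes shear-induced aperture changes so influential:

```python
def cubic_law_flow(aperture_m, width_m, dp_pa, length_m, mu=1.0e-3):
    """Volumetric flow rate (m^3/s) between smooth parallel plates:
    Q = (w * e^3 / (12 * mu)) * (dP / L), valid only in the linear
    (low Reynolds number) regime. mu is the dynamic viscosity (Pa*s)."""
    return width_m * aperture_m**3 / (12.0 * mu) * (dp_pa / length_m)

q1 = cubic_law_flow(aperture_m=1e-4, width_m=1.0, dp_pa=100.0, length_m=1.0)
q2 = cubic_law_flow(aperture_m=2e-4, width_m=1.0, dp_pa=100.0, length_m=1.0)
# Doubling the aperture increases the flow rate by a factor of 8.
```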

  15. Fatigue life evaluation of 42CrMo4 nitrided steel by local approach: Equivalent strain-life-time

    International Nuclear Information System (INIS)

    Terres, Mohamed Ali; Sidhom, Habib

    2012-01-01

    Highlights: → Ion nitriding treatment of 42CrMo4 steel improves its fatigue strength by 32% compared with the untreated state. → This improvement is the result of the beneficial effects of the superficial work-hardening and of the stabilized compressive residual stress. → The notch region is found to be the fatigue crack nucleation site resulting from a stress concentration (Kt = 1.6). → The local equivalent strain-fatigue life method was found to be an interesting predictive fatigue life method for nitrided parts. -- Abstract: In this paper, the fatigue resistance of 42CrMo4 steel in its untreated and nitrided states was evaluated, using both experimental and numerical approaches. The experimental assessment was conducted using three-point bending fatigue tests on notched specimens at R = 0.1. Microstructure analysis, micro-Vickers hardness testing, and scanning electron microscope observation were carried out in support of the experimental evaluation. The results showed that the fatigue cracks of nitrided specimens initiated at the surface, and that the fatigue life of nitrided specimens was prolonged compared to that of the untreated ones. The numerical method used in this study to predict the nucleation fatigue life was developed on the basis of a local approach, which took into account the applied stresses, the stabilized residual stresses during cyclic loading, and the low-cycle fatigue characteristics. The propagation fatigue life was calculated using fracture mechanics concepts. It was found that the numerical results were well correlated with the experimental ones.

  16. Equivalent Lagrangians

    International Nuclear Information System (INIS)

    Hojman, S.

    1982-01-01

    We present a review of the inverse problem of the Calculus of Variations, emphasizing the ambiguities which appear due to the existence of equivalent Lagrangians for a given classical system. In particular, we analyze the properties of equivalent Lagrangians in the multidimensional case; we study the conditions for the existence of a variational principle for (second as well as first order) equations of motion and their solutions; we consider the inverse problem of the Calculus of Variations for singular systems; we state the ambiguities which emerge in the relationship between symmetries and conserved quantities in the case of equivalent Lagrangians; we discuss the problems which appear in trying to quantize classical systems which have different equivalent Lagrangians; we describe the situation which arises in the study of equivalent Lagrangians in field theory; and finally, we present some unsolved problems and discussion topics related to the content of this article. (author)
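Two textbook examples of the kind of Lagrangian ambiguity the review addresses (standard results, stated here for illustration, not taken from the abstract):

```latex
% Trivial (gauge) equivalence: for any constant c \neq 0 and any F(q,t),
%   L'(q,\dot{q},t) = c\,L(q,\dot{q},t) + \frac{d}{dt}F(q,t)
% yields the same Euler--Lagrange equations as L.
%
% A non-trivial, s-equivalent pair for the one-dimensional free particle:
%   L_1 = \tfrac{1}{2}\dot{x}^2 , \qquad L_2 = \tfrac{1}{12}\dot{x}^4 .
% L_2 gives \dot{x}^2\ddot{x} = 0, whose trajectories (for \dot{x}\neq 0)
% coincide with those of \ddot{x} = 0, although L_2 \neq c\,L_1 + dF/dt.
```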

  17. Estimation of low-level neutron dose-equivalent rate by using extrapolation method for a curie level Am–Be neutron source

    International Nuclear Information System (INIS)

    Li, Gang; Xu, Jiayun; Zhang, Jie

    2015-01-01

    Neutron radiation protection is an important research area because of the strong radiobiological effect of neutron fields. The neutron dose is closely related to the neutron energy, through a complex function of energy. For a low-level neutron radiation field (e.g. an Am–Be source), commonly used commercial neutron dosimeters cannot always reflect the low-level dose rate, being restricted by their own sensitivity limits and measuring ranges. In this paper, the intensity distribution of the neutron field produced by a curie-level Am–Be neutron source was investigated by measuring the count rates obtained with a 3He proportional counter at different locations around the source. The results indicate that the count rates outside the source room are negligible compared with those measured in the source room. In the source room, a 3He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and the dose rates decrease exponentially with increasing distance, and that the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by Geant4 simulation, within the inherent errors recommended by the ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with the low-energy neutron count rates when the source is lifted from the shield at different radiation intensities. Based on this relationship, as well as the count rates measured at larger distances from the source, the dose rates can be calculated approximately by extrapolation. This principle can be used to estimate low-level neutron dose values in the source room that cannot be measured directly by a commercial dosimeter. - Highlights: • The scope of the affected area for
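The extrapolation principle, fitting an exponential decay to measurements taken at larger distances and evaluating the fit closer to the source, can be sketched as follows. The distances and count rates below are invented for illustration; the paper's actual method maps measured count rates to dose rates:

```python
import math

def fit_exponential(xs, ys):
    """Least-squares fit of y = A * exp(b * x), done as linear
    regression on ln(y). Returns (A, b)."""
    n = len(xs)
    lys = [math.log(y) for y in ys]
    sx, sy = sum(xs), sum(lys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, lys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return math.exp(a), b

# Hypothetical count rates (s^-1) measured 2-5 m from the source:
dist = [2.0, 3.0, 4.0, 5.0]
rate = [120.0, 66.0, 36.0, 20.0]           # roughly proportional to exp(-0.6*d)
amp, slope = fit_exponential(dist, rate)
rate_1m = amp * math.exp(slope * 1.0)      # extrapolated back to 1 m
```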

  18. A physical based equivalent circuit modeling approach for ballasted InP DHBT multi-finger devices at millimeter-wave frequencies

    DEFF Research Database (Denmark)

    Midili, Virginio; Squartecchia, Michele; Johansen, Tom Keinicke

    2016-01-01

    equivalent circuit description. In the first approach, the EM simulations of contact pads and ballasting network are combined with the small-signal model of the intrinsic device. In the second approach, the ballasting network is modeled with lumped components derived from physical analysis of the layout...

  19. Generation of equivalent forms of operational trans-conductance amplifier-RC sinusoidal oscillators: the nullor approach

    Directory of Open Access Journals (Sweden)

    Raj Senani

    2014-06-01

    It has been shown in two earlier papers published from this study that, corresponding to a given single-operational trans-conductance amplifier (single-OTA)-RC or dual-OTA-RC sinusoidal oscillator, there are three other structurally distinct equivalent forms having the same characteristic equation, one of which employs both grounded capacitors (GC). In this study, an earlier nullor-based theory of generating equivalent op-amp oscillator circuits, proposed by the first author, is extended to derive equivalent OTA-RC circuits; this discloses the existence of more equivalent forms for a given OTA-RC oscillator than predicted by the quoted earlier works, thereby considerably enlarging the set of equivalents of a given OTA-RC oscillator. Furthermore, the presented nullor-based theory of generating equivalent OTA-RC oscillators results in three additional interesting outcomes: (i) the revelation that corresponding to any given OTA-RC oscillator there are two 'both-GC' oscillators (and not merely one, as derived in the quoted earlier works); (ii) the availability of explicit current outputs in several of the derived equivalents; and (iii) the realisability of explicit-current-output 'quadrature oscillators' in some of the generated equivalent oscillators. The workability of the generated equivalent OTA-RC oscillators has been verified by SPICE simulations, based on CMOS OTAs using 0.18 µm CMOS technology process parameters, and some sample results are given.

  20. The ambient dose equivalent at flight altitudes: a fit to a large set of data using a Bayesian approach

    International Nuclear Information System (INIS)

    Wissmann, F; Reginatto, M; Moeller, T

    2010-01-01

    The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
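A sketch of the functional form described above: exponential in the vertical cut-off rigidity, with first-order Taylor (linear) terms in barometric altitude and solar activity. The coefficients and the solar-activity scale below are hypothetical placeholders, not the paper's Bayesian fit:

```python
import math

def dose_rate_model(h_km, rc_gv, w_solar, c):
    """Ambient dose equivalent rate H*(10) model of the form discussed in
    the abstract: (linear in altitude and solar activity) * exp(-c3 * rc).
    h_km: barometric altitude (km); rc_gv: vertical cut-off rigidity (GV);
    w_solar: a solar-activity index; c = (c0, c1, c2, c3): fit coefficients."""
    c0, c1, c2, c3 = c
    return (c0 + c1 * (h_km - 10.0) + c2 * w_solar) * math.exp(-c3 * rc_gv)

# Hypothetical coefficients for illustration (μSv/h scale, not the paper's fit):
c = (6.0, 0.8, -0.02, 0.08)
d_equator = dose_rate_model(h_km=11.0, rc_gv=15.0, w_solar=50.0, c=c)
d_poles = dose_rate_model(h_km=11.0, rc_gv=0.5, w_solar=50.0, c=c)
# Dose rates are higher at high latitudes, where the cut-off rigidity is low.
```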

  1. Sources of endocrine-disrupting compounds in North Carolina waterways: a geographic information systems approach

    Science.gov (United States)

    Sackett, Dana K.; Pow, Crystal Lee; Rubino, Matthew J.; Aday, D.D.; Cope, W. Gregory; Kullman, Seth W.; Rice, J.A.; Kwak, Thomas J.; Law, L.M.

    2015-01-01

    The presence of endocrine-disrupting compounds (EDCs), particularly estrogenic compounds, in the environment has drawn public attention across the globe, yet a clear understanding of the extent and distribution of estrogenic EDCs in surface waters and their relationship to potential sources is lacking. The objective of the present study was to identify and examine the potential input of estrogenic EDC sources in North Carolina water bodies using a geographic information system (GIS) mapping and analysis approach. Existing data from state and federal agencies were used to create point and nonpoint source maps depicting the cumulative contribution of potential sources of estrogenic EDCs to North Carolina surface waters. Water was collected from 33 sites (12 associated with potential point sources, 12 associated with potential nonpoint sources, and 9 reference), to validate the predictive results of the GIS analysis. Estrogenicity (measured as 17β-estradiol equivalence) ranged from 0.06 ng/L to 56.9 ng/L. However, the majority of sites (88%) had water 17β-estradiol concentrations below 1 ng/L. Sites associated with point and nonpoint sources had significantly higher 17β-estradiol levels than reference sites. The results suggested that water 17β-estradiol was reflective of GIS predictions, confirming the relevance of landscape-level influences on water quality and validating the GIS approach to characterize such relationships.

  2. Salmonella Source Attribution in Japan by a Microbiological Subtyping Approach

    DEFF Research Database (Denmark)

    Toyofuku, Hajime; Pires, Sara Monteiro; Hald, Tine

    2011-01-01

    In order to estimate the number of human Salmonella infections attributable to each of major animal-food source, and help identifying the best Salmonella intervention strategies, a microbial subtyping approach for source attribution was applied. We adapted a Bayesian model that attributes illnesses......-food sources, subtype-related factors, and source-related factors. National-surveillance serotyping data from 1998 to 2007 were applied to the model. Results suggested that the relative contribution of the sources to salmonellosis varied during the 10 year period, and that eggs are the most important source...... to specific sources and allows for the estimation of the differences in the ability of Salmonella subtypes and food types to result in reported salmonellosis. The number of human cases caused by different Salmonella subtypes is estimated as a function of the prevalence of these subtypes in the animal...

  3. Open source communities: an integrally informed approach to organizational transformation

    NARCIS (Netherlands)

    Millar-Schijf, Carla C.J.M.; Choi, C.J.; Russell, E.T.; Kim, J.-B.

    2005-01-01

    Purpose - To reframe analysis of the open source software (OSS) phenomenon from an AQAL perspective. Design/methodology/approach - The approach is a review of current research thinking and application of the AQAL framework to suggest resolution of polarizations. Findings - The authors find that AQAL is

  4. Improving gridded snow water equivalent products in British Columbia, Canada: multi-source data fusion by neural network models

    Science.gov (United States)

    Snauffer, Andrew M.; Hsieh, William W.; Cannon, Alex J.; Schnorbus, Markus A.

    2018-03-01

    Estimates of surface snow water equivalent (SWE) in mixed alpine environments with seasonal melts are particularly difficult in areas of high vegetation density, topographic relief, and snow accumulations. These three confounding factors dominate much of the province of British Columbia (BC), Canada. An artificial neural network (ANN) was created using as predictors six gridded SWE products previously evaluated for BC. Relevant spatiotemporal covariates were also included as predictors, and observations from manual snow surveys at stations located throughout BC were used as target data. Mean absolute errors (MAEs) and interannual correlations for April surveys were found using cross-validation. The ANN using the three best-performing SWE products (ANN3) had the lowest mean station MAE across the province. ANN3 outperformed each product as well as product means and multiple linear regression (MLR) models in all of BC's five physiographic regions except for the BC Plains. Subsequent comparisons with predictions generated by the Variable Infiltration Capacity (VIC) hydrologic model found ANN3 to better estimate SWE over the VIC domain and within most regions. The superior performance of ANN3 over the individual products, product means, MLR, and VIC was found to be statistically significant across the province.

  5. An alternative subspace approach to EEG dipole source localization

    Science.gov (United States)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  7. Source selection for analogical reasoning an empirical approach

    Energy Technology Data Exchange (ETDEWEB)

    Stubblefield, W.A. [Sandia National Labs., Albuquerque, NM (United States); Luger, G.F. [Univ. of New Mexico, Albuquerque, NM (United States)

    1996-12-31

    The effectiveness of an analogical reasoner depends upon its ability to select a relevant analogical source. In many problem domains, however, too little is known about target problems to support effective source selection. This paper describes the design and evaluation of SCAVENGER, an analogical reasoner that applies two techniques to this problem: (1) An assumption-based approach to matching that allows properties of candidate sources to match unknown target properties in the absence of evidence to the contrary. (2) The use of empirical learning to improve memory organization based on problem solving experience.
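    The assumption-based matching idea, that unknown target properties are allowed to match candidate source properties in the absence of contrary evidence, can be sketched as a scoring function. The property names and scores are hypothetical, not SCAVENGER's own representation:

```python
# A candidate source property matches if the target agrees, is assumed to
# match (defeasibly) if the target value is unknown, and contributes
# nothing only on an explicit conflict.

def match_score(source, target):
    score = 0
    for prop, value in source.items():
        if prop not in target or target[prop] is None:
            score += 1          # unknown in target: assume a match
        elif target[prop] == value:
            score += 2          # confirmed match counts more
        # explicit conflict: no contribution
    return score

def select_source(candidates, target):
    """Pick the candidate analogical source with the best match score."""
    return max(candidates, key=lambda name: match_score(candidates[name], target))

candidates = {
    "pump_case":  {"has_flow": True, "medium": "liquid", "cyclic": True},
    "lever_case": {"has_flow": False, "medium": "solid", "cyclic": False},
}
target = {"has_flow": True, "cyclic": None}   # little is known about the target
best = select_source(candidates, target)
```

    The empirical-learning component of the paper would then adjust how memory is organized based on which selected sources actually led to successful problem solving.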

  8. Effective source approach to self-force calculations

    International Nuclear Information System (INIS)

    Vega, Ian; Wardell, Barry; Diener, Peter

    2011-01-01

    Numerical evaluation of the self-force on a point particle is made difficult by the use of delta functions as sources. Recent methods for self-force calculations avoid delta functions altogether, using instead a finite and extended 'effective source' for a point particle. We provide a review of the general principles underlying this strategy, using the specific example of a scalar point charge moving in a black hole spacetime. We also report on two new developments: (i) the construction and evaluation of an effective source for a scalar charge moving along a generic orbit of an arbitrary spacetime, and (ii) the successful implementation of hyperboloidal slicing that significantly improves on previous treatments of boundary conditions used for effective-source-based self-force calculations. Finally, we identify some of the key issues related to the effective source approach that will need to be addressed by future work.

  9. A Multimode Equivalent Network Approach for the Analysis of a 'Realistic' Finite Array of Open Ended Waveguides

    NARCIS (Netherlands)

    Neto, A.; Bolt, R.; Gerini, G.; Schmitt, D.

    2003-01-01

    In this contribution we present a theoretical model for the analysis of finite arrays of open-ended waveguides mounted on finite mounting platforms or having radome coverages. This model is based on a Multimode Equivalent Network (MEN) [1] representation of the radiating waveguides complete with

  10. [The ethical reflection approach, a source of wellbeing at work].

    Science.gov (United States)

    Bréhaux, Karine; Grésyk, Bénédicte

    2014-01-01

    Clinical nursing practice, beyond its application to care procedures, can be expressed in terms of ethical added value in the support of patients. In Reims university hospital, where a clinical ethics and care think-tank was created in June 2010, the ethical reflection approach is encouraged in order to reemphasise the global meaning of care as a source of wellbeing at work.

  11. An evolution of image source camera attribution approaches.

    Science.gov (United States)

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of a digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches face many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods used to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics

  12. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility: A Model Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

    First, we show that implied normal volatility is intimately linked with the incomplete Gamma function. We then deduce an expansion of implied normal volatility in terms of the time-value of a European call option, and formulate an equivalence between the implied normal volatility and the lognormal implied volatility for any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.
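    The normal/lognormal equivalence can be checked numerically: price a call under the lognormal (Black) model, then bisect the monotone Bachelier price to recover the matching normal volatility. This is a generic illustration of the two conventions, not the paper's expansion; strikes, rates, and tolerances are assumptions:

```python
from math import erf, exp, log, pi, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def black_call(F, K, T, sigma):
    """Lognormal (Black) call on a forward, zero rates."""
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return F * norm_cdf(d1) - K * norm_cdf(d2)

def bachelier_call(F, K, T, sigma_n):
    """Normal (Bachelier) call on a forward, zero rates."""
    d = (F - K) / (sigma_n * sqrt(T))
    return (F - K) * norm_cdf(d) + sigma_n * sqrt(T) * norm_pdf(d)

def implied_normal_vol(price, F, K, T, lo=1e-8, hi=100.0):
    """Bisection on the monotone Bachelier price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bachelier_call(F, K, T, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

F, K, T, sigma_ln = 100.0, 105.0, 1.0, 0.20
price = black_call(F, K, T, sigma_ln)
sigma_n = implied_normal_vol(price, F, K, T)
# For near-the-money strikes, sigma_n is close to sigma_ln * sqrt(F*K),
# a well-known first-order link between the two conventions.
```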

  13. The biologically equivalent dose BED - Is the approach for calculation of this factor really a reliable basis?

    International Nuclear Information System (INIS)

    Jensen, J.M.; Zimmermann, J.

    2000-01-01

    At present the so-called L-Q model is relied on to predict the effect of radiotherapy on tumours, especially with regard to irreversible effects, and also for retrospective assessment. Internal organ-specific parameters, such as α, β, γ, T_p, T_k, and ρ, as well as external parameters such as D, d, n, V, and V_ref, are used to determine the biologically equivalent dose BED. While the external parameters can be determined with small deviations, the internal parameters are subject to biological variability and dispersion: in some cases a deviation of Δ = ±25% must be assumed. This margin of error propagates to the biologically equivalent dose via the principle of superposition of errors. For some selected cases (lung, kidney, skin, rectum) these margins of error were calculated as examples. The input errors, especially those of the internal parameters, cause a mean error Δ in the biologically equivalent dose, and a dispersion of the single-fraction dose d, of approximately 8-30% depending on the organ under consideration. It follows that such L-Q algorithms should be applied only very critically and cautiously in expert proceedings, and that in radiotherapy decisions based on experience are to be preferred over acting on simple two-dimensional mechanistic ideas. (orig.) [de

  14. Multiple approaches to microbial source tracking in tropical northern Australia

    KAUST Repository

    Neave, Matthew

    2014-09-16

    Microbial source tracking is an area of research in which multiple approaches are used to identify the sources of elevated bacterial concentrations in recreational lakes and beaches. At our study location in Darwin, northern Australia, water quality in the harbor is generally good, however dry-season beach closures due to elevated Escherichia coli and enterococci counts are a cause for concern. The sources of these high bacteria counts are currently unknown. To address this, we sampled sewage outfalls, other potential inputs, such as urban rivers and drains, and surrounding beaches, and used genetic fingerprints from E. coli and enterococci communities, fecal markers and 454 pyrosequencing to track contamination sources. A sewage effluent outfall (Larrakeyah discharge) was a source of bacteria, including fecal bacteria that impacted nearby beaches. Two other treated effluent discharges did not appear to influence sites other than those directly adjacent. Several beaches contained fecal indicator bacteria that likely originated from urban rivers and creeks within the catchment. Generally, connectivity between the sites was observed within distinct geographical locations and it appeared that most of the bacterial contamination on Darwin beaches was confined to local sources.

  15. New recommendations for dose equivalent

    International Nuclear Information System (INIS)

    Bengtsson, G.

    1985-01-01

    In its report 39, the International Commission on Radiation Units and Measurements (ICRU), has defined four new quantities for the determination of dose equivalents from external sources: the ambient dose equivalent, the directional dose equivalent, the individual dose equivalent, penetrating and the individual dose equivalent, superficial. The rationale behind these concepts and their practical application are discussed. Reference is made to numerical values of these quantities which will be the subject of a coming publication from the International Commission on Radiological Protection, ICRP. (Author)

  16. A Stigmergy Approach for Open Source Software Developer Community Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Beaver, Justin M [ORNL; Potok, Thomas E [ORNL; Pullum, Laura L [ORNL; Treadwell, Jim N [ORNL

    2009-01-01

    The stigmergy collaboration approach provides a hypothesized explanation of how online groups work together. In this research, we present a stigmergy approach for building an agent-based open source software (OSS) developer community collaboration simulation. We used groups of actors who collaborate on OSS projects as our frame of reference and investigated how the choices actors make in contributing their work to the projects determine the global status of the whole set of OSS projects. In our simulation, forum posts and project code serve as the digital pheromone, and a modified Pierre-Paul Grasse pheromone model is used to compute each developer agent's behavior-selection probability.
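    A minimal sketch of the pheromone mechanism: each project accumulates "digital pheromone" deposits, trails evaporate over time, and an agent selects a project with probability proportional to pheromone raised to a sensitivity exponent. The decay rate, exponent, and trail values are invented, not the paper's calibration:

```python
def evaporate(pheromone, rho=0.1):
    """One evaporation step: every trail loses a fraction rho."""
    return {p: (1.0 - rho) * tau for p, tau in pheromone.items()}

def deposit(pheromone, project, amount=1.0):
    """A contribution (forum post, commit) strengthens the project's trail."""
    pheromone[project] = pheromone.get(project, 0.0) + amount
    return pheromone

def selection_probs(pheromone, alpha=2.0):
    """Behavior-selection probability proportional to pheromone**alpha."""
    weights = {p: tau**alpha for p, tau in pheromone.items()}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

pher = {"project_a": 5.0, "project_b": 1.0, "project_c": 1.0}
pher = deposit(evaporate(pher), "project_b")
probs = selection_probs(pher)
```

    The superlinear exponent makes active projects disproportionately attractive, which is the positive-feedback loop stigmergy models rely on.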

  17. The advanced neutron source safety approach and plans

    International Nuclear Information System (INIS)

    Harrington, R.M.

    1989-01-01

    The Advanced Neutron Source (ANS) is a user facility for all areas of neutron research proposed for construction at the Oak Ridge National Laboratory. The neutron source is planned to be a 350-MW research reactor. The reactor, currently in conceptual design, will belong to the United States Department of Energy (USDOE). The safety approach and planned elements of the safety program for the ANS are described. The safety approach is to incorporate USDOE requirements [which, by reference, include appropriate requirements from the United States Nuclear Regulatory Commission (USNRC) and other national and state regulatory agencies] into the design, and to utilize probabilistic risk assessment (PRA) techniques during design to achieve extremely low probability of severe core damage. The PRA has already begun and will continue throughout the design and construction of the reactor. Computer analyses will be conducted for a complete spectrum of accidental events, from anticipated events to very infrequent occurrences. 8 refs., 2 tabs

  19. A multivariate-utility approach for selection of energy sources

    International Nuclear Information System (INIS)

    Ahmed, S; Husseiny, A.A.

    1978-01-01

    A deterministic approach is devised to compare the safety features of various energy sources. The approach is based on multiattribute utility theory. The method is used in evaluating the safety aspects of alternative energy sources used for the production of electrical energy. Four alternative energy sources are chosen which could be considered for the production of electricity to meet the national energy demand. These are nuclear, coal, solar, and geothermal energy. For simplicity, a total electrical system is considered in each case. A computer code is developed to evaluate the overall utility function for each alternative from the utility patterns corresponding to 23 energy attributes, mostly related to safety. The model can accommodate other attributes assuming that these are independent. The technique is kept flexible so that virtually any decision problem with various attributes can be attacked and optimal decisions can be reached. The selected data resulted in preference of geothermal and nuclear energy over other sources, and the method is found viable in making decisions on energy uses based on quantified and subjective attributes. (author)
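    The aggregation idea can be sketched with an additive multiattribute utility model, assuming attribute independence as the abstract does. The paper's code used 23 mostly safety-related attributes; the three attributes, weights, and single-attribute utilities below are hypothetical:

```python
# Additive multiattribute utility: overall U = sum_a w_a * u_a, with
# weights summing to 1 and each single-attribute utility scaled to [0, 1].

def overall_utility(weights, utilities):
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[a] * utilities[a] for a in weights)

weights = {"public_safety": 0.5, "worker_safety": 0.3, "env_impact": 0.2}
alternatives = {
    "nuclear":    {"public_safety": 0.8, "worker_safety": 0.7, "env_impact": 0.9},
    "coal":       {"public_safety": 0.5, "worker_safety": 0.4, "env_impact": 0.3},
    "geothermal": {"public_safety": 0.9, "worker_safety": 0.8, "env_impact": 0.7},
}
scores = {alt: overall_utility(weights, u) for alt, u in alternatives.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

    With these made-up inputs the ranking happens to echo the abstract's finding (geothermal and nuclear preferred); in practice the weights and utility curves would be elicited from decision makers.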

  20. Stepped-irradiation SAR: A viable approach to circumvent OSL equivalent dose underestimation in last glacial loess of northwestern China

    International Nuclear Information System (INIS)

    Qin, J.T.; Zhou, L.P.

    2009-01-01

    The equivalent dose (D_e) obtained with the continuous irradiation SAR (CI-SAR) protocol for fine-grained quartz from loess of northwestern China is found to be lower than the value expected from the regional stratigraphy for samples older than 70 ka. This is attributed to the difference in the response of the quartz to natural radiation and laboratory beta irradiation, whose dose rates differ by a factor of ∼10⁸. A stepped irradiation SAR protocol was employed to evaluate the influence of such a 'dose rate effect' on the equivalent dose determination. After investigating the effects of thermal treatment and 'unit dose' on the OSL signal and D_e, we refined the stepped irradiation strategy to a 'unit dose' of ∼25 Gy with successive thermal treatments at 250 °C for 10 s, and applied it to the SAR protocol. This stepped irradiation SAR (SI-SAR) protocol led to a 20%-70% increase in D_e values for loess deposited during the early last glacial period.
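    The underlying SAR logic, inverting the natural signal through a laboratory-built growth curve, can be sketched with a saturating-exponential curve. The curve parameters and the 30% suppression factor are hypothetical; they only illustrate how a dose-rate effect biases D_e low:

```python
from math import exp

L_MAX, D0 = 20.0, 120.0   # saturation level and characteristic dose (Gy), invented

def growth(dose):
    """Sensitivity-corrected luminescence vs laboratory dose."""
    return L_MAX * (1.0 - exp(-dose / D0))

def equivalent_dose(natural_signal, lo=0.0, hi=500.0):
    """Bisection inversion of the monotone growth curve."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if growth(mid) < natural_signal:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

de_ideal = equivalent_dose(growth(100.0))     # recovers the input dose
# If the dose-rate effect suppresses the natural signal relative to the
# lab-built curve by, say, 30%, the inverted D_e comes out too low:
de_biased = equivalent_dose(0.7 * growth(100.0))
```

    The SI-SAR refinement aims to build a laboratory curve whose response is closer to the natural one, shrinking exactly this kind of bias.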

  1. Simulation of equivalent dose due to accidental electron beam loss in Indus-1 and Indus-2 synchrotron radiation sources using FLUKA code

    International Nuclear Information System (INIS)

    Sahani, P.K.; Dev, Vipin; Singh, Gurnam; Haridas, G.; Thakkar, K.K.; Sarkar, P.K.; Sharma, D.N.

    2008-01-01

    Indus-1 and Indus-2 are two synchrotron radiation sources at the Raja Ramanna Centre for Advanced Technology (RRCAT), India. The stored electron energies in Indus-1 and Indus-2 are 450 MeV and 2.5 GeV respectively. During operation of the storage ring, accidental electron beam loss may occur in addition to normal beam losses. The bremsstrahlung radiation produced by these beam losses creates a major radiation hazard in such high energy electron accelerators. FLUKA, the Monte Carlo radiation transport code, is used to simulate the accidental beam loss. The simulation was carried out to estimate the equivalent dose likely to be received by a person trapped close to the storage ring. Depth dose profiles in a water phantom were generated for 450 MeV and 2.5 GeV electron beams, from which the percentage of energy absorbed in a 30 cm water phantom (analogous to the human body) was calculated. The simulation showed that the energy deposition in the phantom is about 19% for 450 MeV electrons and 4.3% for 2.5 GeV electrons. The dose build-up factors in the 30 cm water phantom for the 450 MeV and 2.5 GeV electron beams are found to be 1.85 and 2.94 respectively. Based on the depth dose profiles, dose equivalent indices of 0.026 Sv and 1.08 Sv are likely to be received by a person trapped near the storage ring in Indus-1 and Indus-2 respectively. (author)
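    The dose arithmetic behind such estimates is simple: N electrons of energy E depositing a fraction f of their energy in a phantom of mass m give an absorbed dose D = N·E·f/m. Only the deposit fractions (19% and 4.3%) come from the abstract; the electron population and phantom mass below are hypothetical:

```python
MEV_TO_J = 1.602e-13          # conversion of MeV to joules
PHANTOM_MASS_KG = 30.0        # illustrative phantom mass, not from the paper

def absorbed_dose_gray(n_electrons, energy_mev, fraction):
    """Absorbed dose D = N * E * f / m, in Gy (J/kg)."""
    return n_electrons * energy_mev * MEV_TO_J * fraction / PHANTOM_MASS_KG

# e.g. 1e12 lost electrons (a made-up beam population):
d_indus1 = absorbed_dose_gray(1e12, 450.0, 0.19)
d_indus2 = absorbed_dose_gray(1e12, 2500.0, 0.043)
```

    Even though the 2.5 GeV beam deposits a much smaller fraction locally, its higher energy per electron still yields the larger dose, in line with the abstract's Indus-2 figure exceeding the Indus-1 figure.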

  2. Compliance Groundwater Monitoring of Nonpoint Sources - Emerging Approaches

    Science.gov (United States)

    Harter, T.

    2008-12-01

    Groundwater monitoring networks are typically designed for regulatory compliance of discharges from industrial sites. There, the quality of first encountered (shallow-most) groundwater is of key importance. Network design criteria have been developed for purposes of determining whether an actual or potential, permitted or incidental waste discharge has had or will have a degrading effect on groundwater quality. The fundamental underlying paradigm is that such discharge (if it occurs) will form a distinct contamination plume. Networks that guide (post-contamination) mitigation efforts are designed to capture the shape and dynamics of existing, finite-scale plumes. In general, these networks extend over areas less than one to ten hectare. In recent years, regulatory programs such as the EU Nitrate Directive and the U.S. Clean Water Act have forced regulatory agencies to also control groundwater contamination from non-incidental, recharging, non-point sources, particularly agricultural sources (fertilizer, pesticides, animal waste application, biosolids application). Sources and contamination from these sources can stretch over several tens, hundreds, or even thousands of square kilometers with no distinct plumes. A key question in implementing monitoring programs at the local, regional, and national level is, whether groundwater monitoring can be effectively used as a landowner compliance tool, as is currently done at point-source sites. We compare the efficiency of such traditional site-specific compliance networks in nonpoint source regulation with various designs of regional nonpoint source monitoring networks that could be used for compliance monitoring. We discuss advantages and disadvantages of the site vs. regional monitoring approaches with respect to effectively protecting groundwater resources impacted by nonpoint sources: Site-networks provide a tool to enforce compliance by an individual landowner. But the nonpoint source character of the contamination

  3. A STATISTICAL APPROACH TO RECOGNIZING SOURCE CLASSES FOR UNASSOCIATED SOURCES IN THE FIRST FERMI-LAT CATALOG

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M. [Deutsches Elektronen Synchrotron DESY, D-15738 Zeuthen (Germany); Ajello, M.; Allafort, A.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Borgland, A. W.; Buehler, R. [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Antolini, E.; Bonamente, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); Baldini, L.; Bellazzini, R.; Bregeon, J. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Ballet, J. [Laboratoire AIM, CEA-IRFU/CNRS/Universite Paris Diderot, Service d'Astrophysique, CEA Saclay, 91191 Gif sur Yvette (France); Barbiellini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Bastieri, D. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova (Italy); Bouvier, A. [Santa Cruz Institute for Particle Physics, Department of Physics and Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Brandt, T. J. [CNRS, IRAP, F-31028 Toulouse Cedex 4 (France); Brigida, M. [Dipartimento di Fisica 'M. Merlin' dell'Universita e del Politecnico di Bari, I-70126 Bari (Italy); Bruel, P., E-mail: monzani@slac.stanford.edu, E-mail: vilchez@cesr.fr, E-mail: salvetti@lambrate.inaf.it, E-mail: elizabeth.c.ferrara@nasa.gov [Laboratoire Leprince-Ringuet, Ecole polytechnique, CNRS/IN2P3, Palaiseau (France); and others

    2012-07-01

    The Fermi Large Area Telescope (LAT) First Source Catalog (1FGL) provided spatial, spectral, and temporal properties for a large number of γ-ray sources using a uniform analysis method. After correlating with the most-complete catalogs of source types known to emit γ rays, 630 of these sources are 'unassociated' (i.e., have no obvious counterparts at other wavelengths). Here, we employ two statistical analyses of the primary γ-ray characteristics for these unassociated sources in an effort to correlate their γ-ray properties with the active galactic nucleus (AGN) and pulsar populations in 1FGL. Based on the correlation results, we classify 221 AGN-like and 134 pulsar-like sources in the 1FGL unassociated sources. The results of these source 'classifications' appear to match the expected source distributions, especially at high Galactic latitudes. While useful for planning future multiwavelength follow-up observations, these analyses use limited inputs, and their predictions should not be considered equivalent to 'probable source classes' for these sources. We discuss multiwavelength results and catalog cross-correlations to date, and provide new source associations for 229 Fermi-LAT sources that had no association listed in the 1FGL catalog. By validating the source classifications against these new associations, we find that the new association matches the predicted source class in ∼80% of the sources.
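    The classification idea, placing unassociated sources in a feature space defined by the known AGN and pulsar populations, can be sketched as a nearest-centroid toy. The two features (spectral curvature, variability index) and all values are synthetic, standing in for the catalog's γ-ray properties:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training populations: AGNs tend to be variable with little
# spectral curvature; pulsars are curved and steady. Numbers are invented.
agn = rng.normal([0.2, 5.0], [0.1, 1.0], size=(100, 2))
psr = rng.normal([0.8, 1.0], [0.1, 0.5], size=(100, 2))

c_agn = agn.mean(axis=0)
c_psr = psr.mean(axis=0)

def classify(source):
    """Assign an unassociated source to the nearer class centroid."""
    d_agn = np.linalg.norm(source - c_agn)
    d_psr = np.linalg.norm(source - c_psr)
    return "agn-like" if d_agn < d_psr else "pulsar-like"

label_a = classify(np.array([0.25, 4.5]))   # near the AGN cluster
label_p = classify(np.array([0.75, 1.2]))   # near the pulsar cluster
```

    A real analysis would standardize the features (their scales differ) and report a confidence rather than a hard label, which is why the abstract warns against reading the classifications as probable source classes.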

  5. Use of a "Super-child" Approach to Assess the Vitamin A Equivalence of Moringa oleifera Leaves, Develop a Compartmental Model for Vitamin A Kinetics, and Estimate Vitamin A Total Body Stores in Young Mexican Children.

    Science.gov (United States)

    Lopez-Teros, Veronica; Ford, Jennifer Lynn; Green, Michael H; Tang, Guangwen; Grusak, Michael A; Quihui-Cota, Luis; Muzhingi, Tawanda; Paz-Cassini, Mariela; Astiazaran-Garcia, Humberto

    2017-12-01

    Background: Worldwide, an estimated 250 million children are vitamin A (VA) deficient. Methods: β-Carotene was intrinsically labeled by growing MO plants in a ²H₂O nutrient solution. Fifteen well-nourished children (17-35 mo old) consumed puréed MO leaves (1 mg β-carotene) and a reference dose of [¹³C₁₀]retinyl acetate (1 mg) in oil. Blood (2 samples/child) was collected 10 times (2 or 3 children each time) over 35 d. The bioefficacy of MO leaves was calculated from areas under the composite "super-child" plasma isotope response curves, and MO VA equivalence was estimated from these values; a compartmental model was developed to predict VA total body stores (TBS) and retinol kinetics from the composite plasma [¹³C₁₀]retinol data. TBS were also estimated with isotope dilution. Results: The relative bioefficacy of β-carotene retinol activity equivalents from MO was 28%; VA equivalence was 3.3:1 by weight (0.56 μmol retinol:1 μmol β-carotene). The kinetics of plasma retinol indicate more rapid plasma appearance and turnover and more extensive recycling in these children than are observed in adults. The model-predicted mean TBS (823 μmol) was similar to values predicted using a retinol isotope dilution equation applied to data from 3 to 6 d after dosing (mean ± SD: 832 ± 176 μmol; n = 7). Conclusions: The super-child approach can be used to estimate population carotenoid bioefficacy and VA equivalence, VA status, and parameters of retinol metabolism from a composite data set. Our results provide initial estimates of retinol kinetics in well-nourished young children with adequate VA stores and demonstrate that MO leaves may be an important source of VA. © 2017 American Society for Nutrition.
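    The compartmental-modeling ingredient can be illustrated schematically with a two-compartment tracer system (plasma exchanging with stores, irreversible loss from plasma) integrated by Euler steps. This is not the study's model; the structure and all rate constants are invented for illustration only:

```python
# Schematic tracer kinetics: dose enters plasma at t=0, exchanges with a
# storage pool, and is irreversibly lost from plasma. Units: fractions of
# the dose; rate constants in 1/day (hypothetical values).
k_ps, k_sp, k_out = 2.0, 0.05, 0.5
dt, days = 0.001, 35.0
steps = int(days / dt)

plasma, stores = 1.0, 0.0
trace = []
for _ in range(steps):
    to_stores = k_ps * plasma
    to_plasma = k_sp * stores
    loss = k_out * plasma
    plasma += dt * (to_plasma - to_stores - loss)
    stores += dt * (to_stores - to_plasma)
    trace.append(plasma)

# Plasma tracer falls fast at first (uptake into stores), then follows a
# slow terminal decay governed by release from stores.
early = trace[int(1.0 / dt)]     # ~day 1
late = trace[int(30.0 / dt)]     # ~day 30
```

    Fitting such a model to the composite plasma [¹³C₁₀]retinol curve is what lets a sparse two-samples-per-child design yield population kinetic parameters.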

  6. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X←Y) to specified target organs in man due to 1 μCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
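    The sum that such codes evaluate has the MIRD-style form S(X←Y) = Σ_i Δ_i·φ_i(X←Y)/m_X over radiation components i, with Δ_i the mean energy emitted per decay, φ_i the absorbed fraction in target X from source Y, and m_X the target mass. A minimal sketch in SI-ish units (Gy per decay); the nuclide data and absorbed fractions are invented, not from the report's tables:

```python
MEV_TO_J = 1.602e-13

def s_factor(components, target_mass_kg):
    """Gy per decay; each component is (yield/decay, energy in MeV, absorbed fraction)."""
    return sum(y * e_mev * MEV_TO_J * phi
               for y, e_mev, phi in components) / target_mass_kg

# Hypothetical nuclide: one photon line plus a beta continuum (mean energy).
components = [
    (0.85, 0.364, 0.30),   # photon: emitted in 85% of decays, partly escapes
    (1.00, 0.192, 1.00),   # beta: absorbed locally when source = target organ
]
s = s_factor(components, target_mass_kg=0.020)   # a 20 g organ
```

    SFACTOR's extension to alpha and spontaneous-fission components simply adds further (yield, energy, absorbed-fraction) terms to the same sum, with φ ≈ 1 for the short-ranged radiations.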

  7. Is surgeon intuition equivalent to models of operative complexity in determining the surgical approach for nephron sparing surgery?

    Directory of Open Access Journals (Sweden)

    Pranav Sharma

    2016-01-01

    Conclusions: RENAL nephrometry score was associated with surgical approach intuitively chosen by an experienced surgeon, but the presence of adherent perinephric fat did not correlate with decision-making.

  8. The cyclophosphamide equivalent dose as an approach for quantifying alkylating agent exposure: a report from the Childhood Cancer Survivor Study.

    Science.gov (United States)

    Green, Daniel M; Nolan, Vikki G; Goodman, Pamela J; Whitton, John A; Srivastava, DeoKumar; Leisenring, Wendy M; Neglia, Joseph P; Sklar, Charles A; Kaste, Sue C; Hudson, Melissa M; Diller, Lisa R; Stovall, Marilyn; Donaldson, Sarah S; Robison, Leslie L

    2014-01-01

    Estimation of the risk of adverse long-term outcomes such as second malignant neoplasms and infertility often requires reproducible quantification of exposures. The method for quantification should be easily utilized and valid across different study populations. The widely used Alkylating Agent Dose (AAD) score is derived from the drug dose distribution of the study population and thus cannot be used for comparisons across populations as each will have a unique distribution of drug doses. We compared the performance of the Cyclophosphamide Equivalent Dose (CED), a unit for quantifying alkylating agent exposure independent of study population, to the AAD. Comparisons included associations from three Childhood Cancer Survivor Study (CCSS) outcome analyses, receiver operator characteristic (ROC) curves and goodness of fit based on the Akaike's Information Criterion (AIC). The CED and AAD performed essentially identically in analyses of risk for pregnancy among the partners of male CCSS participants, risk for adverse dental outcomes among all CCSS participants and risk for premature menopause among female CCSS participants, based on similar associations, lack of statistically significant differences between the areas under the ROC curves and similar model fit values for the AIC between models including the two measures of exposure. The CED is easily calculated, facilitating its use for patient counseling. It is independent of the drug dose distribution of a particular patient population, a characteristic that will allow direct comparisons of outcomes among epidemiological cohorts. We recommend the use of the CED in future research assessing cumulative alkylating agent exposure. © 2013 Wiley Periodicals, Inc.
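    The CED construction is a drug-by-drug rescaling onto a cyclophosphamide scale followed by a sum, which is what makes it independent of any particular cohort's dose distribution. A sketch of that calculation; the conversion factors below are placeholders for illustration, the actual multipliers being tabulated in the published CCSS report:

```python
# Hypothetical example factors, NOT the published ones.
CONVERSION = {
    "cyclophosphamide": 1.0,
    "ifosfamide": 0.25,
    "procarbazine": 0.85,
}

def ced_mg_m2(doses_mg_m2):
    """Cyclophosphamide equivalent dose: sum of factor-scaled cumulative doses (mg/m2)."""
    unknown = set(doses_mg_m2) - set(CONVERSION)
    if unknown:
        raise ValueError(f"no conversion factor for: {sorted(unknown)}")
    return sum(CONVERSION[agent] * dose for agent, dose in doses_mg_m2.items())

exposure = {"cyclophosphamide": 4000.0, "ifosfamide": 9000.0}
ced = ced_mg_m2(exposure)   # 4000 + 0.25 * 9000 with these example factors
```

    Contrast this with the AAD score, which bins each patient by tertile of the cohort's own dose distribution and therefore cannot be compared across cohorts.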

  9. On the evaluation of the efficacy of a smart damper: a new equivalent energy-based probabilistic approach

    International Nuclear Information System (INIS)

    Aly, A M; Christenson, R E

    2008-01-01

    Smart damping technology has been proposed to protect civil structures from dynamic loads. Each application of smart damping control provides varying levels of performance relative to active and passive control strategies. Currently, researchers compare the relative efficacy of smart damping control to active and passive strategies by running numerous simulations. These simulations can require significant computation time and resources. Because of this, it is desirable to develop an approach to assess the applicability of smart damping technology which requires less computation time. This paper discusses and verifies a probabilistic approach to determine the efficacy of smart damping technology based on clipped optimal state feedback control theory.

  10. Bioelectromagnetic forward problem: isolated source approach revis(it)ed.

    Science.gov (United States)

    Stenroos, M; Sarvas, J

    2012-06-07

    Electro- and magnetoencephalography (EEG and MEG) are non-invasive modalities for studying the electrical activity of the brain by measuring voltages on the scalp and magnetic fields outside the head. In the forward problem of EEG and MEG, the relationship between the neural sources and resulting signals is characterized using electromagnetic field theory. This forward problem is commonly solved with the boundary-element method (BEM). The EEG forward problem is numerically challenging due to the low relative conductivity of the skull. In this work, we revise the isolated source approach (ISA) that enables the accurate, computationally efficient BEM solution of this problem. The ISA is formulated for generic basis and weight functions that enable the use of Galerkin weighting. The implementation of the ISA-formulated linear Galerkin BEM (LGISA) is first verified in spherical geometry. Then, the LGISA is compared with conventional Galerkin and symmetric BEM approaches in a realistic 3-shell EEG/MEG model. The results show that the LGISA is a state-of-the-art method for EEG/MEG forward modeling: the ISA formulation increases the accuracy and decreases the computational load. Contrary to some earlier studies, the results show that the ISA increases the accuracy also in the computation of magnetic fields.

  11. Equivalence of ADM Hamiltonian and Effective Field Theory approaches at next-to-next-to-leading order spin1-spin2 coupling of binary inspirals

    Energy Technology Data Exchange (ETDEWEB)

    Levi, Michele [Institut d' Astrophysique de Paris, Université Pierre et Marie Curie, CNRS-UMR 7095, 98 bis Boulevard Arago, 75014 Paris (France); Steinhoff, Jan, E-mail: michele.levi@upmc.fr, E-mail: jan.steinhoff@ist.utl.pt [Centro Multidisciplinar de Astrofisica, Instituto Superior Tecnico, Universidade de Lisboa, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2014-12-01

    The next-to-next-to-leading order spin1-spin2 potential for an inspiralling binary, which is essential for accuracy to fourth post-Newtonian order if both components in the binary are spinning rapidly, has recently been derived independently via the ADM Hamiltonian and the Effective Field Theory approaches, using different gauges and variables. Here we show the complete physical equivalence of the two results, thereby providing the first proof of the equivalence of the ADM Hamiltonian and the Effective Field Theory approaches at next-to-next-to-leading order with the inclusion of spins. The main difficulty in the spinning sectors, which also prescribes the manner in which the comparison of the two results is tackled here, is the existence of redundant unphysical spin degrees of freedom, associated with the spin gauge choice of a point within the extended spinning object for its representative worldline. After gauge fixing and eliminating the unphysical degrees of freedom of the spin and its conjugate at the level of the action, we arrive at curved spacetime generalizations of the Newton-Wigner variables in closed form, which can also be used to obtain further Hamiltonians, based on an Effective Field Theory formulation and computation. Finally, we make use of our validated result to provide gauge-invariant relations among the binding energy, angular momentum, and orbital frequency of an inspiralling binary with generic compact spinning components to fourth post-Newtonian order, including all sectors known to date.

  12. Equivalent CTOD concept based on the local approach and its application to fracture performance evaluation of welded joints; Local approach ni motozuku toka CTOD gainen no teian to tsugite hakai seino hyoka eno oyo

    Energy Technology Data Exchange (ETDEWEB)

    Ohata, M.; Minami, F.; Toyoda, M. [Osaka University, Osaka (Japan). Faculty of Engineering; Tanaka, T.; Arimochi, K. [Sumitomo Metal Industries, Ltd., Osaka (Japan); Glover, A. [Nova Gas Transmission Ltd., Calgary (Canada); North, T. [University of Toronto (Canada)

    1996-12-31

    A proposal was given on an equivalent crack tip opening displacement (CTOD) concept which quantitatively relates the fracture performance of a structural member to the result of a three-point bending CTOD test via the Weibull stress based on a local approach. The equivalent CTOD is defined as the CTOD at which a three-point bending CTOD test piece and a structural member provide the same Weibull stress. Experimental and analytical discussions were performed on X80 steel welded joints. The effectiveness of the equivalent CTOD concept was verified by the fact that the fracture performance of wide welded joints estimated from the result of the three-point bending CTOD test using the equivalent CTOD concept corresponded well with the fracture performance obtained in the experiments. On the other hand, the result of estimation using the conventional CTOD concept is considerably smaller than the measurements. As an application of the equivalent CTOD concept, a new procedure was introduced for determining the fracture toughness required to ensure the deformation performance required of structural elements. The required CTOD value shows a trend that the smaller the ratio of the yield stress of the weld metal to that of the base material, the greater the required CTOD becomes. 16 refs., 19 figs., 3 tabs.
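
    The Weibull stress at the heart of this local approach has a standard closed form (a Beremin-type definition; the notation here is assumed: V_0 a reference volume, V_f the fracture process zone, sigma_1 the maximum principal stress, m the Weibull shape parameter). The equivalent CTOD is then the specimen CTOD giving the same Weibull stress as the member:

```latex
\sigma_W = \left[\frac{1}{V_0}\int_{V_f}\sigma_1^{\,m}\,\mathrm{d}V\right]^{1/m},
\qquad
\sigma_W^{\mathrm{3PB}}\bigl(\delta_{\mathrm{eq}}\bigr) = \sigma_W^{\mathrm{member}}\bigl(\delta\bigr)
```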

  14. In-orbit calibration approach of the MICROSCOPE experiment for the test of the equivalence principle at 10⁻¹⁵

    International Nuclear Information System (INIS)

    Pradels, Gregory; Touboul, Pierre

    2003-01-01

    The MICROSCOPE mission is a space experiment of fundamental physics which aims to test the equality between the gravitational and inertial mass with an accuracy of 10⁻¹⁵. Considering these scientific objectives, very weak accelerations have to be controlled and measured in orbit. By modelling the expected acceleration signals applied to the MICROSCOPE instrument in orbit, the developed analytic model of the mission measurement shows the requirements for instrument calibration. Because of on-ground perturbations, the instrument cannot be calibrated in the laboratory and an in-orbit procedure has to be defined. The proposed approach exploits the drag-free system of the satellite and is an important element of the future data analysis of the MICROSCOPE space experiment.

  15. Radioactive waste equivalence

    International Nuclear Information System (INIS)

    Orlowski, S.; Schaller, K.H.

    1990-01-01

    The report reviews, for the Member States of the European Community, possible situations in which an equivalence concept for radioactive waste may be used, analyses the various factors involved, and suggests guidelines for the implementation of such a concept. Only safety and technical aspects are covered. Other aspects, such as commercial ones, are excluded. Situations where the need for an equivalence concept has been identified are processes where impurities are added as a consequence of the treatment and conditioning process, the substitution of wastes from similar waste streams due to the treatment process, and the exchange of waste belonging to different waste categories. The analysis of the factors involved and possible ways for equivalence evaluation, taking into account in particular the chemical, physical and radiological characteristics of the waste package and the potential risks of the waste form, shows that no simple all-encompassing equivalence formula may be derived. Consequently, a step-by-step approach is suggested, which avoids complex evaluations in the case of simple exchanges.

  16. A simplified approach to evaluating severe accident source term for PWR

    International Nuclear Information System (INIS)

    Huang, Gaofeng; Tong, Lili; Cao, Xuewu

    2014-01-01

    Highlights: • Traditional source term evaluation approaches have been studied. • A simplified approach to source term evaluation for a 600 MW PWR is studied. • Five release categories are established. Abstract: For the early design of NPPs, no specific severe accident source term evaluation was considered, and some general source terms have been used for a number of NPPs. In order to obtain a best estimate, a dedicated source term evaluation should be implemented for an NPP. Traditional source term evaluation approaches (the mechanism approach and the parametric approach) have some difficulties associated with their implementation and are not consistent with cost-benefit assessment. A simplified approach for evaluating the severe accident source term for a PWR is studied. For the simplified approach, a simplified containment event tree is established. Through the selection of representative cases, evaluation of weighting coefficients, computation of representative source term cases and weighted computation, five containment release categories are established: containment bypass, containment isolation failure, containment early failure, containment late failure and intact containment.
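
    The weighted computation outlined above reduces to a probability-weighted sum over the release categories. A minimal sketch (all category weights and release fractions below are hypothetical placeholders, not values from the study):

```python
# Sketch of a weighted source-term computation over containment release
# categories: overall release fraction for one nuclide group is the sum of
# category weight (conditional probability) times the representative-case
# release fraction. All numbers are illustrative placeholders.
weights = {
    "containment bypass": 0.02,
    "containment isolation failure": 0.03,
    "containment early failure": 0.05,
    "containment late failure": 0.20,
    "intact containment": 0.70,
}
release_fraction = {
    "containment bypass": 0.30,
    "containment isolation failure": 0.10,
    "containment early failure": 0.05,
    "containment late failure": 0.01,
    "intact containment": 1e-5,
}

# Weighted (expected) release fraction across all categories.
weighted = sum(weights[c] * release_fraction[c] for c in weights)
print(f"{weighted:.4f}")
```

    The weights must sum to one, since the five categories partition the severe accident outcomes in the containment event tree.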

  17. Community Response to Multiple Sound Sources: Integrating Acoustic and Contextual Approaches in the Analysis

    Directory of Open Access Journals (Sweden)

    Peter Lercher

    2017-06-01

    Sufficient data refer to the relevant prevalence of sound exposure from mixed traffic sources in many nations. Furthermore, consideration of the potential effects of combined sound exposure is required in legal procedures such as environmental health impact assessments. Nevertheless, current practice still uses single-exposure response functions; it is silently assumed that those standard exposure-response curves also accommodate mixed exposures, although some evidence from experimental and field studies casts doubt on this practice. The ALPNAP study population (N = 1641) shows sufficient subgroups with combinations of rail-highway, highway-main road and rail-highway-main road sound exposure. In this paper we apply several approaches suggested in the literature to investigate exposure-response curves and their major determinants in the case of exposure to multiple traffic sources. High/moderate annoyance and full-scale mean annoyance served as outcomes. The results show several limitations of the current approaches. Even facing the inherent methodological limitations (energy-equivalent summation of sound, rating of overall annoyance), the consideration of main contextual factors jointly occurring with the sources (such as vibration, air pollution or coping activities) and judgments of the wider area soundscape increases the variance explained from up to 8% (bivariate) to up to 15% (base adjustments) and up to 55% (full contextual model). The added predictors vary significantly depending on the source combination (e.g., significant vibration effects with main road/railway, but not highway). Although no significant interactions were found, the observed additive effects are of public health importance. Especially in the case of a three-source exposure situation, the overall annoyance is already high at lower levels and the contribution of the acoustic indicators is small compared with the non-acoustic and contextual predictors. Noise mapping needs to go down to

  18. Politico-economic equivalence

    DEFF Research Database (Denmark)

    Gonzalez Eiras, Martin; Niepelt, Dirk

    2015-01-01

    Traditional "economic equivalence" results, like the Ricardian equivalence proposition, define equivalence classes over exogenous policies. We derive "politico-economic equivalence" conditions that apply in environments where policy is endogenous and chosen sequentially. A policy regime and a st… their use in the context of several applications, relating to social security reform, tax-smoothing policies and measures to correct externalities.

  19. Transfer of analytical procedures: a panel of strategies selected for risk management, with emphasis on an integrated equivalence-based comparative testing approach.

    Science.gov (United States)

    Agut, C; Caron, A; Giordano, C; Hoffman, D; Ségalini, A

    2011-09-10

    In 2001, a multidisciplinary team of analytical scientists and statisticians at Sanofi-aventis published a methodology which has since governed the transfer of release monographs from R&D sites to manufacturing sites. This article provides an overview of the recent adaptations brought to this original methodology, taking advantage of our experience and the new regulatory framework, in particular the risk management perspective introduced by ICH Q9. Although some alternative strategies have been introduced into our practices, the comparative testing strategy, based on equivalence testing as the statistical approach, remains the standard for assays bearing on very critical quality attributes. It is conducted with the aim of controlling the most important consumer's risks involved at two levels of analytical decisions in transfer studies: the risk, for the receiving laboratory, of taking poor release decisions with the analytical method, and the risk, for the sending laboratory, of accrediting such a receiving laboratory despite its insufficient performance with the method. Among the enhancements to the comparative studies, the manuscript presents the process established within our company for better integration of the transfer study into the method life-cycle, as well as proposals of generic acceptance criteria and designs for assay and related-substances methods. While maintaining the rigor and selectivity of the original approach, these improvements tend towards increased efficiency in transfer operations. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Editorial: New operational dose equivalent quantities

    International Nuclear Information System (INIS)

    Harvey, J.R.

    1985-01-01

    The ICRU Report 39 entitled ''Determination of Dose Equivalents Resulting from External Radiation Sources'' is briefly discussed. Four new operational dose equivalent quantities have been recommended in ICRU 39. The 'ambient dose equivalent' and the 'directional dose equivalent' are applicable to environmental monitoring and the 'individual dose equivalent, penetrating' and the 'individual dose equivalent, superficial' are applicable to individual monitoring. The quantities should meet the needs of day-to-day operational practice, while being acceptable to those concerned with metrological precision, and at the same time be used to give effective control consistent with current perceptions of the risks associated with exposure to ionizing radiations. (U.K.)

  1. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1987-11-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or ''cap'' values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab
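
    The ICRP-26 effective dose equivalent referred to here is a weighted sum of organ dose equivalents, each the absorbed dose times the quality factor:

```latex
H_T = \bar{Q}\,D_T, \qquad H_E = \sum_T w_T\,H_T, \qquad \sum_T w_T = 1
```

    Assuming the standard ICRP-26 tissue weighting factors: w_T = 0.25 (gonads), 0.15 (breast), 0.12 (red bone marrow), 0.12 (lung), 0.03 (thyroid), 0.03 (bone surfaces), with the remaining 0.30 shared among the most highly irradiated remainder tissues.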

  2. Asymptotically approaching the past: historiography and critical use of sources in art technological source research

    NARCIS (Netherlands)

    Clarke, M.; Kroustallis, S.; Townsend, J.H.; Cenalmor Bruquetas, E.; Stijnman, A.; San Andres Moya, M.

    2008-01-01

    This paper proposes that historiographic methods should be applied during art technological source research. Sources cannot always be used uncritically as being simply reliable records of contemporary workshop practice. The accuracy, date and origin of the technical information embedded within

  3. A New Spin on Teaching Vocabulary: A Source-Based Approach.

    Science.gov (United States)

    Nilsen, Alleen Pace; Nilsen, Don L. F.

    2003-01-01

    Suggests that teachers should try to use a source-based approach to teaching vocabulary. Explains that a source-based approach starts with basic concepts of human languages and then works with lexical and metaphorical extensions of these basic words. Notes that the purpose of this approach is to find groups of words that can be taught as webs and…

  4. Simulating groundwater flow in karst aquifers with distributed parameter models—Comparison of porous-equivalent media and hybrid flow approaches

    Science.gov (United States)

    Kuniansky, Eve L.

    2016-09-22

    been developed that incorporate the submerged conduits as a one-dimensional pipe network within the aquifer rather than as discrete, extremely transmissive features in a porous-equivalent medium; these submerged conduit models are usually referred to as hybrid models and may include the capability to simulate both laminar and turbulent flow in the one-dimensional pipe network. Comparisons of the application of a porous-equivalent media model with and without turbulence (MODFLOW-Conduit Flow Process mode 2 and basic MODFLOW, respectively) and a hybrid (MODFLOW-Conduit Flow Process mode 1) model to the Woodville Karst Plain near Tallahassee, Florida, indicated that for annual, monthly, or seasonal average hydrologic conditions, all methods met calibration criteria (matched observed groundwater levels and average flows). Thus, the increased effort required to develop a hybrid model, such as the collection of data on conduit locations, and its increased computational burden are not necessary for simulation of average hydrologic conditions (non-laminar flow effects on simulated head and spring discharge were minimal). However, simulation of a large storm event in the Woodville Karst Plain with daily stress periods indicated that turbulence is important for matching daily springflow hydrographs. Thus, if matching streamflow hydrographs over a storm event is required, the simulation of non-laminar flow and the location of conduits are required. The main challenge in application of the methods and approaches for developing hybrid models relates to the difficulty of mapping conduit networks or having high-quality datasets to calibrate these models. Additionally, hybrid models have long simulation times, which can preclude the use of parameter estimation for calibration. Simulation of contaminant transport that does not account for preferential flow through conduits or extremely permeable zones in any approach is ill-advised.
Simulation results in other karst aquifers or other

  5. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications and scenarios where a neutron source may be shielded by high Z material that nearly eliminates the source gamma flux, this work focuses on investigating the feasibility of using thermal neutron sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, through surrounding the boron straw detectors by a HDPE coded moderator, a source-detector orientation-specific response enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm has been developed in order to confirm the viability of this detector system's localization capabilities which resulted in identification of a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.

  6. Multiple approaches to microbial source tracking in tropical northern Australia

    KAUST Repository

    Neave, Matthew; Luter, Heidi; Padovan, Anna; Townsend, Simon; Schobben, Xavier; Gibb, Karen

    2014-01-01

    , other potential inputs, such as urban rivers and drains, and surrounding beaches, and used genetic fingerprints from E. coli and enterococci communities, fecal markers and 454 pyrosequencing to track contamination sources. A sewage effluent outfall

  7. SU-F-T-124: Radiation Biological Equivalent Presentations of LEM-1 and MKM Approaches in the Carbon-Ion Radiotherapy

    International Nuclear Information System (INIS)

    Hsi, W; Jiang, G; Sheng, Y

    2016-01-01

    Purpose: To study the correlations of the radiation biological equivalent doses (BED) along depth and lateral distance between the LEM-1 and MKM approaches. Methods: In the NIRS-MKM (Microdosimetric Kinetic Model) approach, the prescribed BED dose, referred to as C-Eq, aims to represent the relative biological effectiveness (RBE) for different energies of carbon ions at a fixed 10% survival value of the HCG cell with respect to conventional X-rays. Instead of a fixed 10% survival, the BED dose of the LEM-1 (Local Effect Model) approach, referred to as X-Eq, aims to represent the RBE over the whole survival curve of a chordoma-like cell with an alpha/beta ratio of 2.0. The relationship of physical doses as a function of C-Eq and X-Eq doses was investigated along depth and lateral distance for various sizes of cubic targets in water irradiated by carbon ions. Results: At the center of each cubic target, the trends between physical and C-Eq or X-Eq doses can be described by linear and 2nd-order polynomial functions, respectively. The fitted functions can then be used to calculate a scaling factor between C-Eq and X-Eq doses that yields similar physical doses. With equalized C-Eq and X-Eq doses at the depth of the target center, X-Eq is over- and under-estimated relative to C-Eq for depths before and after the target center, respectively. Near the distal edge along depth, a sharp rise of the RBE value is observed for X-Eq, but a sharp drop of the RBE value is observed for C-Eq. For lateral locations near and just outside the 50% dose level, a sharp rise of the RBE value is also seen for X-Eq, while only a minor increase followed by a fast drop is seen for C-Eq. Conclusion: An analytical function to model the differences between the C-Eq and X-Eq doses along depth and lateral distance needs to be further investigated to explain the varied clinical outcomes of specific cancers under the two different approaches to calculating BED doses.

  8. Sediment sources in the Upper Severn catchment: a fingerprinting approach

    Directory of Open Access Journals (Sweden)

    A. L. Collins

    1997-01-01

    Suspended sediment sources in the Upper Severn catchment are quantified using a composite fingerprinting technique combining statistically-verified signatures with a multivariate mixing model. Composite fingerprints are developed from a suite of diagnostic properties comprising trace metal (Fe, Mn, Al), heavy metal (Cu, Zn, Pb, Cr, Co, Ni), base cation (Na, Mg, Ca, K), organic (C, N), radiometric (137Cs, 210Pb) and other (total P) determinands. A numerical mixing model, used to compare the fingerprints of contemporary catchment source materials with those of fluvial suspended sediment in transit and those of recent overbank floodplain deposits, provides a means of quantifying present and past sediment sources respectively. Sources are classified in terms of eroding surface soils under different land uses and channel banks. Eroding surface soils are the most important source of the contemporary suspended sediment loads sampled at the Institute of Hydrology flow gauging stations at Plynlimon and at Abermule. The erosion of forest soils, associated with the autumn and winter commercial activities of the Forestry Commission, is particularly evident. Reconstruction of sediment provenance over the recent past using a sediment core from the active river floodplain at Abermule, in conjunction with a 137Cs chronology, demonstrates the significance of recent phases of afforestation and deforestation for accelerated catchment soil erosion.
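
    The multivariate mixing model used in such fingerprinting studies seeks source proportions (non-negative, summing to one) that best reproduce the measured sediment properties as a linear mixture of source-group means, typically by minimizing a sum of squared relative errors. A minimal two-source sketch in Python (all property values below are hypothetical, not the Upper Severn data):

```python
# Two-source composite-fingerprint mixing model: find the surface-soil
# proportion p (channel-bank proportion = 1 - p) minimizing the sum of
# squared relative errors between measured sediment properties and the
# linear mixture of source-group means. Property values are illustrative.
sources = {
    "surface soil": {"Cs137": 12.0, "C": 4.5, "Pb": 40.0},
    "channel bank": {"Cs137": 1.0, "C": 1.2, "Pb": 18.0},
}
sediment = {"Cs137": 8.0, "C": 3.2, "Pb": 31.0}

def objective(p_soil):
    """Sum of squared relative errors for a given surface-soil proportion."""
    p_bank = 1.0 - p_soil
    err = 0.0
    for prop, measured in sediment.items():
        mix = (p_soil * sources["surface soil"][prop]
               + p_bank * sources["channel bank"][prop])
        err += ((measured - mix) / measured) ** 2
    return err

# One free proportion for two sources, so a 1-D grid search suffices here.
best = min((objective(p / 1000.0), p / 1000.0) for p in range(1001))
print(f"surface-soil proportion ~ {best[1]:.2f}")
```

    With more source groups the same objective is minimized over a constrained simplex, usually with a numerical optimizer rather than a grid.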

  9. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)

    2012-06-15

    Purpose: To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. Methods: We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers, using the third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. Results: The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm² to 40 × 40 cm². The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within
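
    The HVL that anchors this characterization has a direct computational meaning: the absorber thickness at which the transmitted intensity falls to one half. A sketch for a toy polyenergetic beam (the two attenuation coefficients below are illustrative placeholders, not tabulated Al values):

```python
import math

# Illustrative two-component spectrum: (relative weight, mu_Al in 1/mm).
# These attenuation coefficients are placeholders, not NIST data.
spectrum = [(0.6, 0.45), (0.4, 0.15)]

def transmission(t_mm):
    """Fraction of beam intensity transmitted through t_mm of Al,
    assuming simple exponential attenuation per spectral component."""
    return sum(w * math.exp(-mu * t_mm) for w, mu in spectrum)

def hvl(lo=0.0, hi=50.0, tol=1e-6):
    """Bisection for the Al thickness where transmission drops to 0.5;
    transmission is monotonically decreasing, so bisection converges."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmission(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"HVL ~ {hvl():.3f} mm Al")
```

    Note that for a polyenergetic beam the HVL exceeds ln(2)/mu of the softer component alone, since beam hardening shifts the transmitted spectrum toward the more penetrating component.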

  10. The visual in sport history: approaches, methodologies and sources

    OpenAIRE

    Huggins, Mike

    2015-01-01

    Historians of sport now increasingly accept that visual inquiry offers another dimension to social and cultural research into sport and its history. It is complex and its boundaries are rapidly evolving. This overview offers a justification for placing more emphasis on visual approaches and an introduction to the study and interpretation of visual culture in relation to the history of sport. It stresses the importance of adopting a critical approach and the need to be reflective about that cr...

  11. Calculation by the Monte Carlo method of the equivalent dose received by a human fetus from gamma sources localized in the gastrointestinal tract

    International Nuclear Information System (INIS)

    Segreto, V.S.A.

    1979-01-01

    New uterus positions are proposed and worked out in detail to evaluate the exposure of the human fetus to radiation originating in the gastrointestinal tract during pregnancy. In our evaluation, each organ of the gastrointestinal tract, namely the stomach, small intestine, transverse colon, ascending colon, descending colon, sigmoid and rectum, was individually considered. Changes in the position of each of these organs were studied as a function of uterus growth. Cases were evaluated in which the uterus was at three, six and nine months of pregnancy, for photon energies of 0.02, 0.05, 0.10, 0.50 and 4 MeV. The average equivalent doses (H) in the uterus, in the uterine wall and in each of the twelve compartments considered as subdivisions of the uterus were also determined and discussed. (Author) [pt

  12. Establishing Substantial Equivalence: Proteomics

    Science.gov (United States)

    Lovegrove, Alison; Salt, Louise; Shewry, Peter R.

    Wheat is a major crop in world agriculture and is consumed after processing into a range of food products. It is therefore of great importance to determine the consequences (intended and unintended) of transgenesis in wheat and whether genetically modified lines are substantially equivalent to those produced by conventional plant breeding. Proteomic analysis is one of several approaches which can be used to address these questions. Two-dimensional PAGE (2D PAGE) remains the most widely available method for proteomic analysis, but is notoriously difficult to reproduce between laboratories. We therefore describe methods which have been developed as standard operating procedures in our laboratory to ensure the reproducibility of proteomic analyses of wheat using 2D PAGE analysis of grain proteins.

  13. Applying open source innovation approaches in developing business innovation

    DEFF Research Database (Denmark)

    Aagaard, Annabeth; Lindgren, Peter

    2015-01-01

    More and more companies are pursuing continuous innovation through different types of open source innovation and across different partners. The growing interest in open innovation (OI) originates both from the academic community and amongst practitioners, motivating further investigation into how OI can be facilitated and managed effectively in developing business model innovation. The aim of this paper is therefore to close this research gap and to provide new knowledge within the research field of OI and OI applications. Thus, in the present study we explore the facilitation and management of open source innovation in developing business model innovation in the context of an international OI contest across five international case companies. The findings reveal six categories of key antecedents in effective facilitation and management of OI in developing business model innovation.

  14. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  15. Sources of trace elements in total diet. A statistical approach

    International Nuclear Information System (INIS)

    Aras, N.K.; Chatt, A.

    2004-01-01

    Sixteen total diet samples were collected from two socioeconomic groups in Turkey by the duplicate portion technique. Samples were homogenized with a titanium-blade homogenizer, freeze-dried and analyzed for minor and trace elements, mostly by neutron activation analysis. Bread and flour samples were also collected from the same regions and analyzed similarly by instrumental neutron activation analysis. Concentrations of more than 25 elements in the total diets, bread and flour, as well as fiber and phytate in the total diets, were determined. Daily dietary intakes of these population groups were determined, and probable sources of elements were identified through correlation coefficients and enrichment factor calculations. (author)
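    The enrichment factor mentioned above compares an element-to-reference ratio in the sample with the same ratio in crustal material; values near 1 point to a crustal origin, large values to other sources. A minimal sketch with hypothetical concentrations and Sc chosen as the reference element (a common but not unique choice):

```python
def enrichment_factor(c_elem_sample, c_ref_sample, c_elem_crust, c_ref_crust):
    """EF = (C_x / C_ref)_sample / (C_x / C_ref)_crust.
    EF near 1 suggests a crustal (soil/dust) origin; EF >> 1 suggests
    enrichment from some other source."""
    return (c_elem_sample / c_ref_sample) / (c_elem_crust / c_ref_crust)

# Hypothetical concentrations (mg/kg): Zn in a diet sample with Sc as
# the crustal reference element; the crustal abundances are illustrative.
ef_zn = enrichment_factor(50.0, 0.02, 70.0, 22.0)
print(round(ef_zn, 1))
```

    A large EF such as this one would argue against soil or dust as the dominant source of the element in the diet.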

  16. Sediment sources in a small agricultural catchment: A composite fingerprinting approach based on the selection of potential sources

    Science.gov (United States)

    Zhou, Huiping; Chang, Weina; Zhang, Longjiang

    2016-08-01

    Fingerprinting techniques have been widely used as a reasonable and reliable means for investigating sediment sources, especially in relatively large catchments in which there are significant differences in surface materials. However, the discrimination power of fingerprint properties for small catchments, in which the surface materials are relatively homogeneous and human interference is marked, may be affected by fragmentary or confused source information. Using fingerprinting techniques can be difficult, and there is still a need for further studies to verify the effectiveness of such techniques in these small catchments. A composite fingerprinting approach was used in this study to investigate the main sources of sediment output, as well as their relative contributions, from a small catchment (30 km2) with high levels of farming and mining activities. The impact of the selection of different potential sediment sources on the derivation of composite fingerprints and its discrimination power were also investigated by comparing the results from different combinations of potential source types. The initial source types and several samples that could cause confusion were adjusted. These adjustments improved the discrimination power of the composite fingerprints. The results showed that the composite fingerprinting approach used in this study had a discriminatory efficiency of 89.2% for different sediment sources and that the model had a mean goodness of fit of 0.90. Cultivated lands were the main sediment source. The sediment contribution of the studied cultivated lands ranged from 39.9% to 87.8%, with a mean of 76.6%, for multiple deposited sediment samples. The mean contribution of woodlands was 21.7%. Overall, the sediment contribution from mining and road areas was relatively low. The selection of potential sources is an important factor in the application of fingerprinting techniques and warrants more attention in future studies, as is the case with other
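    The relative-contribution step of such a composite fingerprinting approach is typically an un-mixing model: find non-negative source proportions, summing to one, that best reproduce the sediment sample's fingerprint. A rough sketch under assumed data (the fingerprint values, the brute-force grid-search solver, and the goodness-of-fit convention are illustrative, not the authors' model):

```python
import itertools
import numpy as np

def unmix(source_means, sediment, step=0.01):
    """Grid-search the source proportions (non-negative, summing to 1)
    that minimize the mean squared relative error between the mixed
    fingerprint and the sediment sample; returns (proportions, GOF)."""
    n_props, n_src = source_means.shape
    best = (None, np.inf)
    fractions = np.arange(0.0, 1.0 + 1e-9, step)
    for p in itertools.product(fractions, repeat=n_src - 1):
        last = 1.0 - sum(p)          # remaining proportion for last source
        if last < -1e-9:
            continue
        props = np.array(list(p) + [last])
        mixed = source_means @ props
        err = np.mean(((sediment - mixed) / sediment) ** 2)
        if err < best[1]:
            best = (props, err)
    props, err = best
    gof = 1.0 - np.sqrt(err)         # one common goodness-of-fit convention
    return props, gof

# Hypothetical fingerprint means (rows: properties; columns: cultivated
# land, woodland, mining) and a sediment sample mixed as 0.7/0.2/0.1.
A = np.array([[10.0, 4.0, 2.0],
              [1.0, 3.0, 8.0]])
b = A @ np.array([0.7, 0.2, 0.1])
props, gof = unmix(A, b)
print(np.round(props, 2), round(gof, 2))
```

    Real applications use more properties than sources, within-source variability, and numerical optimizers rather than a grid search, but the constrained-mixing idea is the same.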

  17. A Tiered Approach to Evaluating Salinity Sources in Water at Oil and Gas Production Sites.

    Science.gov (United States)

    Paquette, Shawn M; Molofsky, Lisa J; Connor, John A; Walker, Kenneth L; Hopkins, Harley; Chakraborty, Ayan

    2017-09-01

    A suspected increase in the salinity of fresh water resources can trigger a site investigation to identify the source(s) of salinity and the extent of any impacts. These investigations can be complicated by the presence of naturally elevated total dissolved solids or chloride concentrations, multiple potential sources of salinity, and incomplete data and information on both naturally occurring conditions and the characteristics of potential sources. As a result, data evaluation techniques that are effective at one site may not be effective at another. In order to match the complexity of the evaluation effort to the complexity of the specific site, this paper presents a strategic tiered approach that utilizes established techniques for evaluating and identifying the source(s) of salinity in an efficient step-by-step manner. The tiered approach includes: (1) a simple screening process to evaluate whether an impact has occurred and if the source is readily apparent; (2) basic geochemical characterization of the impacted water resource(s) and potential salinity sources, coupled with simple visual and statistical data evaluation methods to determine the source(s); and (3) advanced laboratory analyses (e.g., isotopes) and data evaluation methods to identify the source(s) and the extent of salinity impacts where it was not otherwise conclusive. A case study from the U.S. Gulf Coast is presented to illustrate the application of this tiered approach. © 2017, National Ground Water Association.

  18. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed.
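    The ICRP-26 procedure referred to above weights each organ dose equivalent by a tissue weighting factor and sums the results. A minimal sketch (the organ doses are hypothetical; the weighting factors are the published ICRP-26 values):

```python
# ICRP Publication 26 tissue weighting factors (the "remainder" weight
# is shared among the five other most highly irradiated organs).
W_T_ICRP26 = {
    "gonads": 0.25, "breast": 0.15, "red_bone_marrow": 0.12,
    "lung": 0.12, "thyroid": 0.03, "bone_surface": 0.03,
    "remainder": 0.30,
}

def effective_dose_equivalent(organ_dose_equivalents_msv):
    """H_E = sum over tissues of w_T * H_T (mSv)."""
    return sum(W_T_ICRP26[t] * h
               for t, h in organ_dose_equivalents_msv.items())

# Hypothetical organ dose equivalents (mSv) from an external field.
doses = {"gonads": 1.0, "breast": 0.8, "red_bone_marrow": 1.2,
         "lung": 1.1, "thyroid": 0.9, "bone_surface": 1.0,
         "remainder": 1.0}
print(round(effective_dose_equivalent(doses), 3))
```

    The organ dose equivalents themselves come from the fluence-to-dose calculations and quality factors discussed in the abstract.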

  19. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs can be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing approval of a drug simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many challenges, both operational and scientific, in conducting drug development globally. One of many important questions to answer in the design of a multiregional study is how to partition the sample size into each individual region. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Hunting for treasures among the Fermi unassociated sources: A multiwavelength approach

    International Nuclear Information System (INIS)

    Acero, F.; Ojha, R.; Donato, D.; Ferrara, E.; Stevens, J.; Edwards, P. G.; Blanchard, J.; Lovell, J. E. J.; Thompson, D. J.

    2013-01-01

    The Fermi Gamma-Ray Space Telescope has been detecting a wealth of sources where the multiwavelength counterpart is either inconclusive or missing altogether. We present a combination of factors that can be used to identify multiwavelength counterparts to these Fermi unassociated sources. This approach was used to select and investigate seven bright, high-latitude unassociated sources with radio, UV, X-ray, and γ-ray observations. As a result, four of these sources are candidates to be active galactic nuclei, and one to be a pulsar, while two do not fit easily into these known categories of sources. The latter pair of extraordinary sources might reveal a new category subclass or a new type of γ-ray emitter. These results altogether demonstrate the power of a multiwavelength approach to illuminate the nature of unassociated Fermi sources.

  1. EQUIVALENCE VERSUS NON-EQUIVALENCE IN ECONOMIC TRANSLATION

    Directory of Open Access Journals (Sweden)

    Cristina, Chifane

    2012-01-01

    This paper aims at highlighting the fact that “equivalence” represents a concept worth revisiting and detailing upon when tackling the translation process of economic texts both from English into Romanian and from Romanian into English. Far from being exhaustive, our analysis will focus upon the problems arising from the lack of equivalence at the word level. Consequently, relevant examples from the economic field will be provided to account for the following types of non-equivalence at word level: culture-specific concepts; the source language concept is not lexicalised in the target language; the source language word is semantically complex; differences in physical and interpersonal perspective; differences in expressive meaning; differences in form; differences in frequency and purpose of using specific forms; and the use of loan words in the source text. Likewise, we shall illustrate a number of translation strategies necessary to deal with the aforementioned cases of non-equivalence: translation by a more general word (superordinate); translation by a more neutral/less expressive word; translation by cultural substitution; translation using a loan word or loan word plus explanation; translation by paraphrase using a related word; translation by paraphrase using unrelated words; translation by omission; and translation by illustration.

  2. The equivalence principle

    International Nuclear Information System (INIS)

    Smorodinskij, Ya.A.

    1980-01-01

    The prerelativistic history of the equivalence principle (EP) is presented briefly, and its role in the discovery of the general relativity theory (GRT) is elucidated. The modern result is that the ratio of inertial and gravitational masses does not differ from 1 at least up to the twelfth decimal place. Attention is paid to the difference between the gravitational field and the electromagnetic one. The difference is as follows: the energy of the gravitational field distributed in space is itself a source of the field, so gravitational fields always interact upon superposition, whereas electromagnetic fields from different sources simply add together. On the basis of the EP it is established that the Sun's field interacts with the Earth's gravitational energy in the same way as with any other energy; the latter proves that the gravitational field itself gravitates towards a heavy body. The problem of gyroscope motion in the Earth's gravitational field is presented as a paradox. The calculation shows that a gyroscope on a satellite undergoes a positive precession: its axis turns through an angle α during one revolution of the satellite around the Earth and, because of the curvature of space, through an additional angle twice as large as α, giving a resulting turn of 3α. It is shown on the basis of the EP that the polarization plane of a ray of light passing through a gravitational field does not rotate in any coordinate system. Along with the historical value of the EP, the necessity of taking into account the requirements imposed by the EP when describing the physical world is noted.

  3. Global equivalent magnetization of the oceanic lithosphere

    Science.gov (United States)

    Dyment, J.; Choi, Y.; Hamoudi, M.; Lesur, V.; Thebault, E.

    2015-11-01

    As a by-product of the construction of a new World Digital Magnetic Anomaly Map over oceanic areas, we use an original approach based on global forward modeling of seafloor spreading magnetic anomalies and their comparison to the available marine magnetic data to derive the first map of the equivalent magnetization over the World's oceans. This map reveals consistent patterns related to the age of the oceanic lithosphere, the spreading rate at which it was formed, and the presence of mantle thermal anomalies which affect seafloor spreading and the resulting lithosphere. As for the age, the equivalent magnetization decreases significantly during the first 10-15 Myr after formation, probably due to the alteration of crustal magnetic minerals under pervasive hydrothermal alteration, then increases regularly between 20 and 70 Ma, reflecting variations in the field strength or source effects such as the acquisition of a secondary magnetization. As for the spreading rate, the equivalent magnetization is twice as strong in areas formed at fast rates as in those formed at slow rates, with a threshold at ∼40 km/Myr, in agreement with an independent global analysis of the amplitude of Anomaly 25. This result, combined with those from the study of the anomalous skewness of marine magnetic anomalies, allows us to build a unified model for the magnetic structure of normal oceanic lithosphere as a function of spreading rate. Finally, specific areas affected by thermal mantle anomalies at the time of their formation exhibit peculiar equivalent magnetization signatures, such as the cold Australian-Antarctic Discordance, marked by a lower magnetization, and several hotspots, marked by a high magnetization.

  4. Cryogenic test of the equivalence principle

    International Nuclear Information System (INIS)

    Worden, P.W. Jr.

    1976-01-01

    The weak equivalence principle is the hypothesis that the ratio of inertial and passive gravitational mass is the same for all bodies. A greatly improved test of this principle is possible in an orbiting satellite. The most promising experiments for an orbital test are adaptations of the Galilean free-fall experiment and the Eotvos balance. Sensitivity to gravity gradient noise, both from the earth and from the spacecraft, defines a limit to the sensitivity in each case. This limit is generally much worse for an Eotvos balance than for a properly designed free-fall experiment. The difference is related to the difficulty of making a balance sufficiently isoinertial. Cryogenic technology is desirable to take full advantage of the potential sensitivity, but tides in the liquid helium refrigerant may produce a gravity gradient that seriously degrades the ultimate sensitivity. The Eotvos balance appears to have a limiting sensitivity to relative difference of rate of fall of about 2 × 10⁻¹⁴ in orbit. The free-fall experiment is limited by the helium tide to about 10⁻¹⁵; if the tide can be controlled or eliminated, the limit may approach 10⁻¹⁸. Other limitations to equivalence principle experiments are discussed. An experimental test of some of the concepts involved in the orbital free-fall experiment is continuing. The experiment consists of comparing the motions of test masses levitated in a superconducting magnetic bearing, and is itself a sensitive test of the equivalence principle. At present the levitation magnets, position monitors and control coils have been tested and major noise sources identified. A measurement of the equivalence principle is postponed pending development of a system for digitizing data. The experiment and preliminary results are described.

  5. An Open-Source Approach for Catchment's Physiographic Characterization

    Science.gov (United States)

    Di Leo, M.; Di Stefano, M.

    2013-12-01

    A water catchment's hydrologic response is intimately linked to its morphological shape, which is a signature on the landscape of the particular climate conditions that generated the hydrographic basin over time. Furthermore, geomorphologic structures influence hydrologic regimes and land cover (vegetation). For these reasons, a basin's characterization is a fundamental element in hydrological studies. Physiographic descriptors were long extracted manually, but currently Geographic Information System (GIS) tools ease this task by offering hydrologists a powerful instrument to save time and improve the accuracy of results. Here we present a program that combines the flexibility of the Python programming language with the reliability of GRASS GIS, and automatically performs the catchment's physiographic characterization. GRASS (Geographic Resources Analysis Support System) is a Free and Open Source GIS that today can look back on 30 years of successful development in geospatial data management and analysis, image processing, graphics and map production, spatial modeling and visualization. The recent development of new hydrologic tools, coupled with the tremendous boost in the existing flow routing algorithms, has reduced the computational time and made GRASS a complete toolset for hydrological analysis even for large datasets. The tool presented here is a module called r.basin, named following GRASS' traditional nomenclature, where the "r" stands for "raster", and it is available for GRASS version 6.x and more recently for GRASS 7. As input it uses a Digital Elevation Model and the coordinates of the outlet; powered by the recently developed r.stream.* hydrological tools, it performs the flow calculation, delimits the basin's boundaries and extracts the drainage network, returning the flow direction and accumulation, the distance to outlet and the hill slope length maps. Based on those maps, it calculates hydrologically meaningful shape factors and
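    Typical shape factors of the kind such a tool derives can be written down directly from the basin's area, perimeter and length. A small illustration using standard textbook formulas (the basin dimensions are hypothetical, and these are not necessarily the exact descriptors r.basin outputs):

```python
import math

def compactness_coefficient(perimeter_km, area_km2):
    """Gravelius compactness K_c = P / (2 * sqrt(pi * A)); equals 1 for
    a perfectly circular basin, larger for elongated ones."""
    return perimeter_km / (2.0 * math.sqrt(math.pi * area_km2))

def circularity_ratio(perimeter_km, area_km2):
    """Miller circularity ratio R_c = 4 * pi * A / P^2 (1 for a circle)."""
    return 4.0 * math.pi * area_km2 / perimeter_km ** 2

def form_factor(area_km2, basin_length_km):
    """Horton form factor F = A / L^2."""
    return area_km2 / basin_length_km ** 2

# Hypothetical basin: 30 km^2 area, 26 km perimeter, 9 km axial length.
print(round(compactness_coefficient(26.0, 30.0), 2),
      round(circularity_ratio(26.0, 30.0), 2),
      round(form_factor(30.0, 9.0), 2))
```

    In a GIS workflow the area, perimeter and length come from the delineated basin raster and vector maps rather than being supplied by hand.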

  6. Behavioural equivalence for infinite systems - Partially decidable!

    DEFF Research Database (Denmark)

    Sunesen, Kim; Nielsen, Mogens

    1996-01-01

    languages with two generalizations based on traditional approaches capturing non-interleaving behaviour, pomsets representing global causal dependency, and locality representing spatial distribution of events. We first study equivalences on Basic Parallel Processes, BPP, a process calculus equivalent...... of processes between BPP and TCSP, not only are the two equivalences different, but one (locality) is decidable whereas the other (pomsets) is not. The decidability result for locality is proved by a reduction to the reachability problem for Petri nets....

  7. Effective dose equivalent

    International Nuclear Information System (INIS)

    Huyskens, C.J.; Passchier, W.F.

    1988-01-01

    The effective dose equivalent is a quantity which is used in the daily practice of radiation protection, as well as in radiation hygiene rules, as a measure of health risks. In this contribution it is worked out upon which assumptions this quantity is based and in which cases the effective dose equivalent can be used more or less well. (H.W.)

  8. Characterization of revenue equivalence

    NARCIS (Netherlands)

    Heydenreich, B.; Müller, R.; Uetz, Marc Jochen; Vohra, R.

    2009-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. We give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The characterization holds

  9. Characterization of Revenue Equivalence

    NARCIS (Netherlands)

    Heydenreich, Birgit; Müller, Rudolf; Uetz, Marc Jochen; Vohra, Rakesh

    2008-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. In this paper we give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The

  10. On the operator equivalents

    International Nuclear Information System (INIS)

    Grenet, G.; Kibler, M.

    1978-06-01

    A closed polynomial formula for the qth component of the diagonal operator equivalent of order k is derived in terms of angular momentum operators. The interest in various fields of molecular and solid state physics of using such a formula in connection with symmetry adapted operator equivalents is outlined

  11. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical s...

  12. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  13. Incorporating Measurement Non-Equivalence in a Cross-Study Latent Growth Curve Analysis.

    Science.gov (United States)

    Flora, David B; Curran, Patrick J; Hussong, Andrea M; Edwards, Michael C

    2008-10-01

    A large literature emphasizes the importance of testing for measurement equivalence in scales that may be used as observed variables in structural equation modeling applications. When the same construct is measured across more than one developmental period, as in a longitudinal study, it can be especially critical to establish measurement equivalence, or invariance, across the developmental periods. Similarly, when data from more than one study are combined into a single analysis, it is again important to assess measurement equivalence across the data sources. Yet, how to incorporate non-equivalence when it is discovered is not well described for applied researchers. Here, we present an item response theory approach that can be used to create scale scores from measures while explicitly accounting for non-equivalence. We demonstrate these methods in the context of a latent curve analysis in which data from two separate studies are combined to create a single longitudinal model spanning several developmental periods.

  14. Associations of dioxins, furans and dioxin-like PCBs with diabetes and pre-diabetes: is the toxic equivalency approach useful?

    Science.gov (United States)

    Everett, Charles J; Thompson, Olivia M

    2012-10-01

    Toxic equivalency factors for dioxins and dioxin-like compounds have been established by the World Health Organization. Toxic equivalency (TEQ) was derived using 6 chlorinated dibenzo-p-dioxins, 9 chlorinated dibenzofurans and 8 polychlorinated biphenyls, in blood, from the 1999-2004 National Health and Nutrition Examination Survey. Relationships of 8 individual chemicals, the number of compounds elevated, and TEQ with pre-diabetes and total diabetes (diagnosed and undiagnosed) were investigated using logistic regressions. For the 8 chemicals analyzed separately, values above the 75th percentile were considered elevated, whereas for the other 15 compounds, values above the maximum limit of detection were considered elevated. Pre-diabetes with glycohemoglobin (A1c) 5.9-6.4% was associated with PCB 126, PCB 118 and having one or more compounds elevated (odds ratio 2.47, 95% CI 1.51-4.06). Pre-diabetes with A1c 5.7-5.8% was not associated with any individual chemical or the number of compounds elevated. Total diabetes was associated with 6 of the 8 individual compounds tested, and was associated with having 4 or more compounds elevated. Toxic equivalency ≥81.58 TEQ fg/g was associated with total diabetes (odds ratio 3.08, 95% CI 1.20-7.90), but was not associated with A1c 5.9-6.4%. Having multiple compounds elevated appeared to be important for total diabetes, whereas for pre-diabetes with A1c 5.9-6.4%, having a single compound elevated appeared most important. Diabetes plus A1c ≥5.9% was associated with 34.16-81.57 TEQ fg/g (odds ratio 2.00, 95% CI 1.06-3.77) and with ≥81.58 TEQ fg/g (odds ratio 2.48, 95% CI 1.21-5.11), indicating that half the population has elevated risk for this combination of conditions. Copyright © 2012 Elsevier Inc. All rights reserved.
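    The TEQ used above is simply a TEF-weighted sum of congener concentrations. A minimal sketch with WHO 2005 TEFs for a few of the congeners named in the abstract (the serum concentrations are hypothetical):

```python
# WHO 2005 toxic equivalency factors for a few congeners. TEQ is the
# TEF-weighted sum of congener concentrations, here in fg/g lipid.
TEF = {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 1.0,
       "PCB 126": 0.1, "PCB 118": 0.00003}

def toxic_equivalency(concentrations_fg_g):
    """TEQ = sum over congeners of concentration * TEF."""
    return sum(TEF[c] * v for c, v in concentrations_fg_g.items())

# Hypothetical serum concentrations (fg/g lipid).
sample = {"2,3,7,8-TCDD": 20.0, "1,2,3,7,8-PeCDD": 15.0,
          "PCB 126": 300.0, "PCB 118": 50000.0}
teq = toxic_equivalency(sample)
print(round(teq, 2))  # compare against cut-points such as 81.58 fg/g TEQ
```

    Note how the tiny TEF for PCB 118 means that even a large concentration of that congener contributes little to the total, which is the behavior the abstract's comparison of individual compounds versus TEQ probes.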

  15. Total organic carbon, an important tool in an holistic approach to hydrocarbon source fingerprinting

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, P.D.; Burns, W.A.; Page, D.S.; Bence, A.E.; Mankiewicz, P.J.; Brown, J.S.; Douglas, G.S. [Battelle Member Inst., Waltham, MA (United States)

    2002-07-01

    The identification and allocation of multiple hydrocarbon sources in marine sediments is best achieved using an holistic approach. Total organic carbon (TOC) is one important tool that can constrain the contributions of specific sources and rule out incorrect source allocations in cases where inputs are dominated by fossil organic carbon. In a study of the benthic sediments from Prince William Sound (PWS) and the Gulf of Alaska (GOA), we find excellent agreement between measured TOC and TOC calculated from hydrocarbon fingerprint matches of polycyclic aromatic hydrocarbons (PAH) and chemical biomarkers. Confirmation by two such independent source indicators (TOC and fingerprint matches) provides evidence that source allocations determined by the fingerprint matches are robust and that the major TOC sources have been correctly identified. Fingerprint matches quantify the hydrocarbon contributions of various sources to the benthic sediments and the degree of hydrocarbon winnowing by waves and currents. TOC contents are then calculated using source allocation results from fingerprint matches and the TOCs of contributing sources. Comparisons of the actual sediment TOC values and those calculated from source allocation support our earlier published findings that the natural petrogenic hydrocarbon background in sediments in this area comes from eroding Tertiary shales and associated oil seeps along the northern GOA coast and exclude thermally mature area coals from being important contributors to the PWS background due to their high TOC content.

  16. A New Approach for Modeling Darrieus-Type Vertical Axis Wind Turbine Rotors Using Electrical Equivalent Circuit Analogy: Basis of Theoretical Formulations and Model Development

    Directory of Open Access Journals (Sweden)

    Pierre Tchakoua

    2015-09-01

    Models are crucial in the engineering design process because they can be used for both the optimization of design parameters and the prediction of performance. Thus, models can significantly reduce design, development and optimization costs. This paper proposes a novel equivalent electrical model for Darrieus-type vertical axis wind turbines (DTVAWTs). The proposed model was built from the mechanical description given by the Paraschivoiu double-multiple streamtube model and is based on the analogy between mechanical and electrical circuits. This work addresses the physical concepts and theoretical formulations underpinning the development of the model. After highlighting the working principle of the DTVAWT, the step-by-step development of the model is presented. For assessment purposes, simulations of aerodynamic characteristics and those of corresponding electrical components are performed and compared.

  17. Approaches to assign security levels for radioactive substances and radiation sources

    International Nuclear Information System (INIS)

    Ivanov, M.V.; Petrovskij, N.P.; Pinchuk, G.N.; Telkov, S.N.; Kuzin, V.V.

    2011-01-01

    The article analyzes provisions on the categorization of radioactive substances and radiation sources according to the extent of their potential danger. These provisions are used in IAEA documents and in Russian regulatory documents to differentiate regulatory requirements for physical security. It is demonstrated that, taking into account possible threats from violators, the rules for physical protection of radiation sources and radioactive substances should be amended as regards the approaches used to assign their categories and security levels

  18. Generating carbyne equivalents with photoredox catalysis

    Science.gov (United States)

    Wang, Zhaofeng; Herraiz, Ana G.; Del Hoyo, Ana M.; Suero, Marcos G.

    2018-02-01

    Carbon has the unique ability to bind four atoms and form stable tetravalent structures that are prevalent in nature. The lack of one or two valences leads to a set of species—carbocations, carbanions, radicals and carbenes—that is fundamental to our understanding of chemical reactivity. In contrast, the carbyne—a monovalent carbon with three non-bonded electrons—is a relatively unexplored reactive intermediate; the design of reactions involving a carbyne is limited by challenges associated with controlling its extreme reactivity and the lack of efficient sources. Given the innate ability of carbynes to form three new covalent bonds sequentially, we anticipated that a catalytic method of generating carbynes or related stabilized species would allow what we term an ‘assembly point’ disconnection approach for the construction of chiral centres. Here we describe a catalytic strategy that generates diazomethyl radicals as direct equivalents of carbyne species using visible-light photoredox catalysis. The ability of these carbyne equivalents to induce site-selective carbon-hydrogen bond cleavage in aromatic rings enables a useful diazomethylation reaction, which underpins sequencing control for the late-stage assembly-point functionalization of medically relevant agents. Our strategy provides an efficient route to libraries of potentially bioactive molecules through the installation of tailored chiral centres at carbon-hydrogen bonds, while complementing current translational late-stage functionalization processes. Furthermore, we exploit the dual radical and carbene character of the generated carbyne equivalent in the direct transformation of abundant chemical feedstocks into valuable chiral molecules.

  19. Establishing Substantial Equivalence: Transcriptomics

    Science.gov (United States)

    Baudo, María Marcela; Powers, Stephen J.; Mitchell, Rowan A. C.; Shewry, Peter R.

    Regulatory authorities in Western Europe require transgenic crops to be substantially equivalent to conventionally bred forms if they are to be approved for commercial production. One way to establish substantial equivalence is to compare the transcript profiles of developing grain and other tissues of transgenic and conventionally bred lines, in order to identify any unintended effects of the transformation process. We present detailed protocols for transcriptomic comparisons of developing wheat grain and leaf material, and illustrate their use by reference to our own studies of lines transformed to express additional gluten protein genes controlled by their own endosperm-specific promoters. The results show that the transgenes present in these lines (which included those encoding marker genes) did not have any significant unpredicted effects on the expression of endogenous genes and that the transgenic plants were therefore substantially equivalent to the corresponding parental lines.

  20. Effects of Surface BRDF on the OMI Cloud and NO2 Retrievals: A New Approach Based on Geometry-Dependent Lambertian Equivalent Reflectivity (GLER) Derived from MODIS

    Science.gov (United States)

    Vasilkov, Alexander; Qin, Wenhan; Krotkov, Nickolay; Lamsal, Lok; Spurr, Robert; Haffner, David; Joiner, Joanna; Yang, Eun-Su; Marchenko, Sergey

    2017-01-01

    The Ozone Monitoring Instrument (OMI) cloud and NO2 algorithms use a monthly gridded surface reflectivity climatology that does not depend upon the observation geometry. In reality, reflection of incoming direct and diffuse solar light from land or ocean surfaces is sensitive to the sun-sensor geometry. This dependence is described by the bidirectional reflectance distribution function (BRDF). To account for the BRDF, we propose to use a new concept of geometry-dependent Lambertian equivalent reflectivity (GLER). Implementation within the existing OMI cloud and NO2 retrieval infrastructure requires changes only to the input surface reflectivity database. GLER is calculated using a vector radiative transfer model with high-spatial-resolution BRDF information from MODIS over land and the Cox-Munk slope distribution over ocean, with a contribution from water-leaving radiance. We compare GLER and climatological LER at 466 nm, which is used in the OMI O2-O2 cloud algorithm to derive effective cloud fractions. A detailed comparison of the cloud fractions and pressures derived with climatological LERs and GLERs is carried out. GLER and the corresponding retrieved cloud products are then used as input to the OMI NO2 algorithm. We find that replacing the climatological OMI-based LERs with GLERs can increase NO2 vertical columns by up to 50% in highly polluted areas; the differences include both BRDF effects and biases between the MODIS and OMI-based surface reflectance data sets. Only minor changes to NO2 columns (within 5%) are found over unpolluted and overcast areas.

  1. The principle of equivalence

    International Nuclear Information System (INIS)

    Unnikrishnan, C.S.

    1994-01-01

    Principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs

  2. Equivalent Colorings with "Maple"

    Science.gov (United States)

    Cecil, David R.; Wang, Rongdong

    2005-01-01

    Many counting problems can be modeled as "colorings" and solved by considering symmetries and Polya's cycle index polynomial. This paper presents a "Maple 7" program (available at http://users.tamuk.edu/kfdrc00/) that, given Polya's cycle index polynomial, determines all possible associated colorings and their partitioning into equivalence classes. These…
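    The counting machinery this record describes can be illustrated with a short Burnside's-lemma computation. The square-under-rotations example below is a generic sketch of the technique, not the paper's Maple program:

    ```python
    # Burnside's lemma: the number of colorings up to symmetry is the
    # average, over group elements, of the colorings fixed by each element.
    # Cycle data for the rotation group C4 acting on a square's vertices:
    # identity -> 4 cycles, 90/270 degrees -> 1 cycle, 180 degrees -> 2 cycles.
    CYCLE_COUNTS = [4, 1, 2, 1]

    def count_equivalence_classes(num_colors, cycle_counts=CYCLE_COUNTS):
        # substitute num_colors into each cycle index term k^(number of cycles)
        total = sum(num_colors ** c for c in cycle_counts)
        return total // len(cycle_counts)

    print(count_equivalence_classes(2))  # -> 6 two-colorings of the square
    print(count_equivalence_classes(3))  # -> 24 three-colorings
    ```

    The same substitution into the cycle index polynomial works for any finite symmetry group once its cycle structure is tabulated.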

  3. An integrated approach to assess heavy metal source apportionment in peri-urban agricultural soils

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Ying; Li, Tingqiang; Wu, Chengxian [Ministry of Education Key Laboratory of Environmental Remediation and Ecological Health, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058 (China); He, Zhenli [University of Florida, Institute of Food and Agricultural Sciences, Indian River Research and Education Center, Fort Pierce, FL 34945 (United States); Japenga, Jan; Deng, Meihua [Ministry of Education Key Laboratory of Environmental Remediation and Ecological Health, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058 (China); Yang, Xiaoe, E-mail: xeyang@zju.edu.cn [Ministry of Education Key Laboratory of Environmental Remediation and Ecological Health, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058 (China)

    2015-12-15

    Highlights: • Heavy metal source apportionment was conducted in peri-urban agricultural areas. • Precise and quantified results were obtained by using isotope ratio analysis. • The integration of IRA, GIS, PCA, and CA proved more reliable. • Hg pollution was from the use of organic fertilizers in this area. - Abstract: Three techniques (Isotope Ratio Analysis, GIS mapping, and Multivariate Statistical Analysis) were integrated to assess heavy metal pollution and source apportionment in peri-urban agricultural soils. The soils in the study area were moderately polluted with cadmium (Cd) and mercury (Hg), and lightly polluted with lead (Pb) and chromium (Cr). GIS mapping suggested that Cd pollution originates from point sources, whereas Hg, Pb, and Cr could be traced back to both point and non-point sources. Principal component analysis (PCA) indicated that aluminum (Al), manganese (Mn), and nickel (Ni) were mainly inherited from natural sources, while Hg, Pb, and Cd were associated with two different kinds of anthropogenic sources. Cluster analysis (CA) further identified fertilizers, waste water, industrial solid wastes, road dust, and atmospheric deposition as potential sources. Based on isotope ratio analysis (IRA), organic fertilizers and road dusts accounted for 74–100% and 0–24% of the total Hg input, respectively, while road dusts and solid wastes contributed 0–80% and 19–100% of the Pb input. This study provides a reliable approach for heavy metal source apportionment in this particular peri-urban area, with a clear potential for future application in other regions.
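    The source-contribution percentages reported from the isotope ratio analysis rest on mixing-model arithmetic of the following kind. The ratio values below are hypothetical placeholders, not the study's measurements:

    ```python
    # Two-end-member isotope mixing model, the basic calculation behind
    # IRA source apportionment: a sample's isotope signature is assumed
    # to be a linear mix of two candidate source signatures.

    def mixing_fraction(r_sample, r_source_a, r_source_b):
        """Fraction of the sample attributable to source A (0..1)."""
        return (r_sample - r_source_b) / (r_source_a - r_source_b)

    # hypothetical Pb isotope ratios for road dust (A) and solid waste (B)
    f_a = mixing_fraction(r_sample=1.16, r_source_a=1.20, r_source_b=1.10)
    print(f"road dust: {f_a:.0%}, solid waste: {1 - f_a:.0%}")
    ```

    With more than two candidate sources the same idea generalizes to a constrained least-squares problem over the source fractions.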

  4. An integrated approach to assess heavy metal source apportionment in peri-urban agricultural soils

    International Nuclear Information System (INIS)

    Huang, Ying; Li, Tingqiang; Wu, Chengxian; He, Zhenli; Japenga, Jan; Deng, Meihua; Yang, Xiaoe

    2015-01-01

    Highlights: • Heavy metal source apportionment was conducted in peri-urban agricultural areas. • Precise and quantified results were obtained by using isotope ratio analysis. • The integration of IRA, GIS, PCA, and CA proved more reliable. • Hg pollution was from the use of organic fertilizers in this area. - Abstract: Three techniques (Isotope Ratio Analysis, GIS mapping, and Multivariate Statistical Analysis) were integrated to assess heavy metal pollution and source apportionment in peri-urban agricultural soils. The soils in the study area were moderately polluted with cadmium (Cd) and mercury (Hg), and lightly polluted with lead (Pb) and chromium (Cr). GIS mapping suggested that Cd pollution originates from point sources, whereas Hg, Pb, and Cr could be traced back to both point and non-point sources. Principal component analysis (PCA) indicated that aluminum (Al), manganese (Mn), and nickel (Ni) were mainly inherited from natural sources, while Hg, Pb, and Cd were associated with two different kinds of anthropogenic sources. Cluster analysis (CA) further identified fertilizers, waste water, industrial solid wastes, road dust, and atmospheric deposition as potential sources. Based on isotope ratio analysis (IRA), organic fertilizers and road dusts accounted for 74–100% and 0–24% of the total Hg input, respectively, while road dusts and solid wastes contributed 0–80% and 19–100% of the Pb input. This study provides a reliable approach for heavy metal source apportionment in this particular peri-urban area, with a clear potential for future application in other regions.

  5. A Series Expansion Approach to Risk Analysis of an Inventory System with Sourcing

    NARCIS (Netherlands)

    Berkhout, J.; Heidergott, B.F.

    2014-01-01

    In this paper we extend the series expansion approach for uni-chain Markov processes to a special case of finite multi-chains with possible transient states. We show that multi-chain Markov models arise naturally in simple models such as a single-item inventory system with sourcing, i.e., with

  6. BOOK REVIEW OF "ASSESSMENT AND CONTROL OF NONPOINT SOURCE POLLUTION OF AQUATIC ECOSYSTEMS: A PRACTICAL APPROACH"

    Science.gov (United States)

    This book is geared to environmental specialists and planners, heavy on the technical side. It goes beyond traditional nonpoint source (NPS) approaches, which typically only look at stormwater as the sole NPS pollution driver. There is some overreaching material beyond the conte...

  7. BlueSky ATC Simulator Project : An Open Data and Open Source Approach

    NARCIS (Netherlands)

    Hoekstra, J.M.; Ellerbroek, J.

    2016-01-01

    To advance ATM research as a science, ATM research results should be made more comparable. A possible way to do this is to share tools and data. This paper presents a project that investigates the feasibility of a fully open-source and open-data approach to air traffic simulation. Here, the first of

  8. What Does a Verbal Test Measure? A New Approach to Understanding Sources of Item Difficulty.

    Science.gov (United States)

    Berk, Eric J. Vanden; Lohman, David F.; Cassata, Jennifer Coyne

    Assessing the construct relevance of mental test results continues to present many challenges, and it has proven to be particularly difficult to assess the construct relevance of verbal items. This study was conducted to gain a better understanding of the conceptual sources of verbal item difficulty using a unique approach that integrates…

  9. Correspondences. Equivalence relations

    International Nuclear Information System (INIS)

    Bouligand, G.M.

    1978-03-01

    We comment on paragraph 3 'Correspondences' and paragraph 6 'Equivalence Relations' in chapter II of 'Elements de mathematique' by N. Bourbaki in order to simplify their comprehension. Paragraph 3 sets out the ideas of graph, correspondence, and map (or function), and their composition laws. We draw attention to the following points: 1) Adopting the convention of writing from left to right, the composition law for two correspondences (A,F,B), (U,G,V) of graphs F, G is written in full generality (A,F,B)o(U,G,V) = (A,FoG,V). It is not therefore assumed that the co-domain B of the first correspondence is identical to the domain U of the second (EII.13 D.7) (1970). 2) The axiom of choice consists of creating the Hilbert terms from the only relations admitting a graph. 3) The statement of the existence theorem of a function h such that f = goh, where f and g are two given maps having the same domain (of definition), is completed if h is more precisely an injection. Paragraph 6 considers the generalisation of equality: first, by the equivalence relation associated with a map f of a set E, namely (x is a member of E and y is a member of E and x:f = y:f). Consequently, every relation R(x,y) which is equivalent to this is an equivalence relation in E (symmetric, transitive, reflexive); then R admits a graph included in E x E, etc. Secondly, by means of the Hilbert term of a relation R subject to the equivalence. In this last case, if R(x,y) is separately collectivizing in x and y, theta(x) is not the class of objects equivalent to x for R (EII.47.9) (1970). The interest of bringing together these two subjects, apart from this logical order, resides also in the fact that the theorem mentioned in 3) can be expressed by means of the equivalence relations associated with the functions f and g. The solutions of the proposed examples reveal their simplicity.

  10. Reproduction of nearby sources by imposing true interaural differences on a sound field control approach

    DEFF Research Database (Denmark)

    Badajoz, Javier; Chang, Ji-ho; Agerkvist, Finn T.

    2015-01-01

    In anechoic conditions, the Interaural Level Difference (ILD) is the most significant auditory cue to judge the distance to a sound source located within 1 m of the listener's head. This is due to the unique characteristics of a point source in its near field, which result in exceptionally high, distance-dependent ILDs. When reproducing the sound field of sources located near the head with line or circular arrays of loudspeakers, the reproduced ILDs are generally lower than expected, due to physical limitations. This study presents an approach that combines a sound field reproduction method, known as Pressure Matching (PM), and a binaural control technique. While PM aims at reproducing the incident sound field, the objective of the binaural control technique is to ensure a correct reproduction of interaural differences. The combination of these two approaches gives rise to the following features: (i...

  11. Interpretative approaches to identifying sources of hydrocarbons in complex contaminated environments

    International Nuclear Information System (INIS)

    Sauer, T.C.; Brown, J.S.; Boehm, P.D.

    1993-01-01

    Recent advances in analytical instrumental hardware and software have permitted the use of more sophisticated approaches in identifying or fingerprinting sources of hydrocarbons in complex matrix environments. In natural resource damage assessments and contaminated site investigations of both terrestrial and aquatic environments, chemical fingerprinting has become an important interpretative tool. The alkyl homologues of the major polycyclic and heterocyclic aromatic hydrocarbons (e.g., phenanthrenes/anthracenes, dibenzothiophenes, chrysenes) have been found to be the most valuable hydrocarbons in differentiating hydrocarbon sources, but there are other hydrocarbon analytes, such as the chemical biomarkers steranes and triterpanes and the alkyl homologues of benzene, and chemical methodologies, such as scanning UV fluorescence, that have been found to be useful in certain environments. This presentation will focus on recent data interpretative approaches for hydrocarbon source identification assessments. Selection of appropriate target analytes and data quality requirements will be discussed, and example cases, including results from the Arabian Gulf War oil spill, will be presented

  12. Field Trials of the Multi-Source Approach for Resistivity and Induced Polarization Data Acquisition

    Science.gov (United States)

    LaBrecque, D. J.; Morelli, G.; Fischanger, F.; Lamoureux, P.; Brigham, R.

    2013-12-01

    Implementing systems of distributed receivers and transmitters for resistivity and induced polarization data is an almost inevitable result of the availability of wireless data communication modules and GPS modules offering precise timing and instrument locations. Such systems have a number of advantages; for example, they can be deployed around obstacles such as rivers, canyons, or mountains, which would be difficult with traditional 'hard-wired' systems. However, deploying a system of identical, small, battery-powered transceivers, each capable of injecting a known current and measuring the induced potential, has an additional and less obvious advantage: multiple units can inject current simultaneously. The original purpose for using multiple simultaneous current sources (multi-source) was to increase signal levels. In traditional systems, to double the received signal you inject twice the current, which requires you to apply twice the voltage and thus four times the power. Alternatively, one approach to increasing signal levels for large-scale surveys collected using small, battery-powered transceivers is to allow multiple units to transmit in parallel. In theory, using four 400 watt transmitters on separate, parallel dipoles yields roughly the same signal as a single 6400 watt transmitter. Furthermore, the multi-source approach creates the opportunity to apply more complex current flow patterns than simple, parallel dipoles. For a perfect, noise-free system, multi-source acquisition adds no new information to a data set that contains a comprehensive set of data collected using single sources. However, for realistic, noisy systems, it appears that multi-source data can substantially impact survey results. In preliminary model studies, the multi-source data produced such startling improvements in subsurface images that even the authors questioned their veracity. 
Between December of 2012 and July of 2013, we completed multi-source surveys at five sites
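    The transmitter-power argument in the abstract is simple quadratic scaling (P ∝ I² for a resistive load, so a signal gain g from one unit costs g² in power, while n parallel units add their signals linearly). A sketch restating that arithmetic:

    ```python
    # Power cost of boosting received signal: multiplying the injected
    # current (and hence the signal) by g from a SINGLE transmitter
    # multiplies its power by g^2, since P = I^2 * R. By contrast,
    # n transmitters on separate parallel dipoles add their signals
    # while each supplies only its own rated power.

    def single_transmitter_power(base_power_w, signal_gain):
        return base_power_w * signal_gain ** 2

    def parallel_total_power(base_power_w, n_units):
        return base_power_w * n_units  # signal scales roughly with n

    print(single_transmitter_power(400, 4))  # 6400 W for ~4x signal
    print(parallel_total_power(400, 4))      # 1600 W total, ~same signal
    ```

    This is why four 400 W units can stand in for a single 6400 W transmitter in the abstract's example.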

  13. An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

    KAUST Repository

    Asiri, Sharefa M.

    2013-05-25

    Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate unknowns of systems governed by partial differential equations. Our aim is to design an observer to solve the inverse source problem for a one-dimensional wave equation. First, the problem is discretized in both space and time, and then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We examine the effectiveness of this observer in both noise-free and noisy cases. In each case, numerical simulations are provided to illustrate the effectiveness of this approach. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.
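    The Tikhonov baseline mentioned at the end amounts to a regularized least-squares solve. The toy system below is a generic illustration of that method, unrelated to the paper's wave-equation discretization:

    ```python
    import numpy as np

    # Tikhonov regularization: minimize ||Ax - b||^2 + lam * ||x||^2,
    # solved here via the normal equations (A^T A + lam * I) x = A^T b.
    # The small penalty lam stabilizes ill-conditioned problems.

    def tikhonov_solve(A, b, lam):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # ill-conditioned toy system whose exact solution is x = [1, 1]
    A = np.array([[1.0, 1.0], [1.0, 1.0001]])
    b = np.array([2.0, 2.0001])
    x = tikhonov_solve(A, b, lam=1e-8)
    print(np.round(x, 2))  # close to [1. 1.]
    ```

    In inverse source problems, choosing lam trades data fit against solution smoothness, which is the sensitivity the paper's observer approach is compared against.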

  14. Live tree carbon stock equivalence of fire and fuels extension to the Forest Vegetation Simulator and Forest Inventory and Analysis approaches

    Science.gov (United States)

    James E. Smith; Coeli M. Hoover

    2017-01-01

    The carbon reports in the Fire and Fuels Extension (FFE) to the Forest Vegetation Simulator (FVS) provide two alternate approaches to carbon estimates for live trees (Rebain 2010). These are (1) the FFE biomass algorithms, which are volume-based biomass equations, and (2) the Jenkins allometric equations (Jenkins and others 2003), which are diameter-based. Here, we...

  15. A logistic regression approach to model the willingness of consumers to adopt renewable energy sources

    Science.gov (United States)

    Ulkhaq, M. M.; Widodo, A. K.; Yulianto, M. F. A.; Widhiyaningrum; Mustikasari, A.; Akshinta, P. Y.

    2018-03-01

    The implementation of renewable energy in this era of globalization is inevitable, since non-renewable energy leads to climate change and global warming and thus harms the environment and human life. However, in developing countries such as Indonesia, the implementation of renewable energy sources faces both technical and social problems. For the latter, renewable energy implementation is only effective if the public is aware of its benefits. This research tried to identify the determinants that influence consumers' intention to adopt renewable energy sources. In addition, this research also tried to predict which consumers are willing to apply renewable energy sources in their houses, using a logistic regression approach. A case study was conducted in Semarang, Indonesia. The results showed that only eight of fifteen variables are statistically significant: educational background, employment status, income per month, average electricity cost per month, certainty about the efficiency of the renewable energy project, relatives' influence to adopt renewable energy sources, energy tax deduction, and the price of non-renewable energy sources. The findings of this study could be used as a basis for the government to set up a policy towards implementation of renewable energy sources.
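    The study's model is standard binary logistic regression. A minimal gradient-descent version with made-up survey features (the column names and data below are illustrative only, not the survey's) looks like this:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit_logistic(X, y, lr=0.1, steps=5000):
        # gradient descent on the cross-entropy loss;
        # returns the fitted coefficient vector
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
        return w

    # columns: intercept, standardized income, standardized electricity cost
    X = np.array([[1.0, -1.0, -0.5],
                  [1.0,  0.0,  0.2],
                  [1.0,  1.0,  0.8],
                  [1.0,  1.5,  1.0]])
    y = np.array([0.0, 0.0, 1.0, 1.0])  # 1 = willing to adopt renewables
    w = fit_logistic(X, y)
    print((sigmoid(X @ w) > 0.5).astype(int))  # in-sample predictions
    ```

    Statistical significance of each coefficient, as reported in the study, would then be assessed from the standard errors of w (e.g., via Wald tests in a statistics package).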

  16. An integrated approach to assess heavy metal source apportionment in peri-urban agricultural soils.

    Science.gov (United States)

    Huang, Ying; Li, Tingqiang; Wu, Chengxian; He, Zhenli; Japenga, Jan; Deng, Meihua; Yang, Xiaoe

    2015-12-15

    Three techniques (Isotope Ratio Analysis, GIS mapping, and Multivariate Statistical Analysis) were integrated to assess heavy metal pollution and source apportionment in peri-urban agricultural soils. The soils in the study area were moderately polluted with cadmium (Cd) and mercury (Hg), and lightly polluted with lead (Pb) and chromium (Cr). GIS mapping suggested that Cd pollution originates from point sources, whereas Hg, Pb, and Cr could be traced back to both point and non-point sources. Principal component analysis (PCA) indicated that aluminum (Al), manganese (Mn), and nickel (Ni) were mainly inherited from natural sources, while Hg, Pb, and Cd were associated with two different kinds of anthropogenic sources. Cluster analysis (CA) further identified fertilizers, waste water, industrial solid wastes, road dust, and atmospheric deposition as potential sources. Based on isotope ratio analysis (IRA), organic fertilizers and road dusts accounted for 74-100% and 0-24% of the total Hg input, respectively, while road dusts and solid wastes contributed 0-80% and 19-100% of the Pb input. This study provides a reliable approach for heavy metal source apportionment in this particular peri-urban area, with a clear potential for future application in other regions. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, more regularly InSAR-derived surface displacements and seismological waveforms are combined, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal to improve crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged by simple planar finite

  18. The equivalence theorem

    International Nuclear Information System (INIS)

    Veltman, H.

    1990-01-01

    The equivalence theorem states that, at an energy E much larger than the vector-boson mass M, the leading order of the amplitude with longitudinally polarized vector bosons on mass shell is given by the amplitude in which these vector bosons are replaced by the corresponding Higgs ghosts. We prove the equivalence theorem and show its validity in every order in perturbation theory. We first derive the renormalized Ward identities by using the diagrammatic method. Only the Feynman-'t Hooft gauge is discussed. The last step of the proof includes the power-counting method evaluated in the large-Higgs-boson-mass limit, needed to estimate the leading energy behavior of the amplitudes involved. We derive expressions for the amplitudes involving longitudinally polarized vector bosons for all orders in perturbation theory. The fermion mass has not been neglected and everything is evaluated in the region m_f ∼ M ≪ E ≪ m_Higgs

  19. An experimental MOSFET approach to characterize (192)Ir HDR source anisotropy.

    Science.gov (United States)

    Toye, W C; Das, K R; Todd, S P; Kenny, M B; Franich, R D; Johnston, P N

    2007-09-07

    The dose anisotropy around a (192)Ir HDR source in a water phantom has been measured using MOSFETs as relative dosimeters. In addition, modeling using the EGSnrc code has been performed to provide a complete dose distribution consistent with the MOSFET measurements. Doses around the Nucletron 'classic' (192)Ir HDR source were measured for a range of radial distances from 5 to 30 mm within a 40 × 30 × 30 cm³ water phantom, using a TN-RD-50 MOSFET dosimetry system with an active area of 0.2 mm × 0.2 mm. For each successive measurement, a linear stepper capable of movement in intervals of 0.0125 mm re-positioned the MOSFET at the required radial distance, while a rotational stepper enabled angular displacement of the source at intervals of 0.9°. The source-dosimeter arrangement within the water phantom was modeled using the standardized cylindrical geometry of the DOSRZnrc user code. In general, the measured relative anisotropy at each radial distance from 5 mm to 30 mm is in good agreement with the EGSnrc simulations, benchmark Monte Carlo simulations and TLD measurements where they exist. The experimental approach, employing a MOSFET detection system of small size, high spatial resolution and fast read-out capability, offered a practical means of determining the dose anisotropy around an HDR source.

  20. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    Science.gov (United States)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
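    At its core, the MCMC machinery a library like QUESO provides is Metropolis-Hastings sampling. A single-parameter toy version (a stand-in "rupture parameter" with an assumed Gaussian posterior, not the paper's setup) can be written as:

    ```python
    import math
    import random

    def metropolis(log_post, x0, n_samples, step=0.5, seed=1):
        # random-walk Metropolis: propose a perturbed state, accept it
        # with probability min(1, posterior ratio), and record the chain
        rng = random.Random(seed)
        x = x0
        chain = []
        for _ in range(n_samples):
            proposal = x + rng.gauss(0.0, step)
            if math.log(rng.random()) < log_post(proposal) - log_post(x):
                x = proposal
            chain.append(x)
        return chain

    # toy posterior: "slip amplitude" ~ N(2.0, 0.3^2)
    log_post = lambda x: -0.5 * ((x - 2.0) / 0.3) ** 2
    chain = metropolis(log_post, x0=0.0, n_samples=20000)
    burned = chain[5000:]  # discard burn-in from the poor starting point
    print(round(sum(burned) / len(burned), 1))  # posterior mean near 2.0
    ```

    The full posterior density the paper maps corresponds to the histogram of such a chain, with many parameters sampled jointly rather than one.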

  1. Global sourcing risk management approaches: A study of small clothing and textile retailers in Gauteng

    Directory of Open Access Journals (Sweden)

    Wesley Niemann

    2018-02-01

    Background: Global sourcing has increased as buyers searched for new markets that offered better pricing, quality, variety and delivery lead times than their local markets. However, the increase in global sourcing has also exposed businesses to many supply risks. Purpose: The purpose of this descriptive qualitative study was to explore the global sourcing supply risks encountered by small clothing and textile retailers in Gauteng and to determine what supply risk identification and management approaches they utilise. Method: This study utilised semi-structured interviews conducted with 12 small clothing and textile retail owners. Results: The study found that the three major supply risks encountered by these retailers were fluctuating exchange rates, communication barriers and costly and complicated logistics, which included high customs costs. Furthermore, although aware of the supply risks, none of the small clothing and textile retailers had formal identification and management approaches in place. Instead, risks are dealt with at the sole discretion of the owner as and when they occur. The study also found that informal identification and management approaches were being applied by some of the retailers. These included factoring exchange rate fluctuations into the profit margins and using translators to combat communication barriers. Contribution: The study is one of the first empirical studies conducted on global supply risks and the associated identification and management approaches in the South African small business context, specifically focused on clothing and textile retailers. Conclusion: Small clothing and textile retailers need to proactively identify and manage global sourcing risk using the identified approaches in order to reduce and mitigate potential supply disruptions.

  2. The principle of equivalence reconsidered: assessing the relevance of the principle of equivalence in prison medicine.

    Science.gov (United States)

    Jotterand, Fabrice; Wangmo, Tenzin

    2014-01-01

    In this article we critically examine the principle of equivalence of care in prison medicine. First, we provide an overview of how the principle of equivalence is utilized in various national and international guidelines on health care provision to prisoners. Second, we outline some of the problems associated with its application, and argue that the principle of equivalence should go beyond equivalence of access and include equivalence of outcomes. However, because of the particular context of the prison environment, third, we contend that the concept of "health" in equivalence of health outcomes needs conceptual clarity; otherwise, it fails to provide a threshold for healthy states among inmates. We accomplish this by examining common understandings of the concepts of health and disease. We conclude our article by showing why the conceptualization of diseases as clinical problems provides a helpful approach in the delivery of health care in prison.

  3. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and in the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman speaks without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Finally, there are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the limited resources of the Federal and State health agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community

  4. Foreword: Biomonitoring Equivalents special issue.

    Science.gov (United States)

    Meek, M E; Sonawane, B; Becker, R A

    2008-08-01

    The challenge of interpreting results of biomonitoring for environmental chemicals in humans is highlighted in this Foreword to the Biomonitoring Equivalents (BEs) special issue of Regulatory Toxicology and Pharmacology. There is a pressing need to develop risk-based tools in order to empower scientists and health professionals to interpret and communicate the significance of human biomonitoring data. The BE approach, which integrates dosimetry and risk assessment methods, represents an important advancement on the path toward achieving this objective. The articles in this issue, developed as a result of an expert panel meeting, present guidelines for derivation of BEs, guidelines for communication using BEs and several case studies illustrating application of the BE approach for specific substances.

  5. Equivalence of Electron-Vibration Interaction and Charge-Induced Force Variations: A New O(1) Approach to an Old Problem

    Directory of Open Access Journals (Sweden)

    Tunna Baruah

    2012-04-01

    Full Text Available Calculating electron-vibration (vibronic) interaction constants is computationally expensive. For molecules containing N nuclei it involves solving the Schrödinger equation for O(3N) nuclear configurations in addition to the cost of determining the vibrational modes. We show that quantum vibronic interactions are proportional to the classical atomic forces induced when the total charge of the system is varied. This enables the calculation of vibronic interaction constants from O(1) solutions of the Schrödinger equation. We demonstrate that the O(1) approach produces numerically accurate results by calculating the vibronic interaction constants for several molecules. We investigate the role of molecular vibrations in the Mott transition in κ-(BEDT-TTF)2Cu[N(CN)2]Br.

  6. The equivalent incidence angle for porous absorbers backed by a hard surface

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Brunskog, Jonas

    2013-01-01

    experiment using a free-field absorption measurement technique with a source at the equivalent angle. This study investigates the equivalent angle for locally and extendedly reacting porous media mainly by a numerical approach: Numerical minimizations of a cost function that is the difference between...... coefficients by free-field techniques, a broad incidence angle range can be suggested: 20° ≤ θi ≤ 65° for extended reaction and θi ≤ 65° for locally reacting porous absorbers, if an average difference of 0.05 is allowed.......An equivalent incidence angle is defined as the incidence angle at which the oblique incidence absorption coefficient best approximates the random incidence absorption coefficient. Once the equivalent angle is known, the random incidence absorption coefficient can be estimated by a single...
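
    The search for the equivalent angle can be sketched numerically. The absorption-coefficient curve below is an invented illustrative function, not the locally or extendedly reacting material models of the study; what is illustrated is the structure of the method: a Paris-type random-incidence average followed by minimization of the cost function.

```python
import numpy as np

# Hypothetical oblique-incidence absorption curve (illustrative shape only,
# not the study's porous-absorber model).
def alpha_oblique(theta):
    return 0.8 * np.cos(theta) ** 0.25

# Random-incidence (Paris) average: alpha_rand = ∫ alpha(θ) sin(2θ) dθ over 0..π/2
theta = np.linspace(0.0, np.pi / 2, 2001)
f = alpha_oblique(theta) * np.sin(2 * theta)
alpha_rand = float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(theta)))  # trapezoid rule

# Equivalent angle: the θ whose oblique coefficient best matches alpha_rand
cost = np.abs(alpha_oblique(theta) - alpha_rand)
theta_eq = theta[np.argmin(cost)]
print(f"random-incidence alpha = {alpha_rand:.3f}")
print(f"equivalent angle       = {np.degrees(theta_eq):.1f} deg")
```

For this made-up curve the equivalent angle comes out near 50°, illustrating how a single oblique measurement at the equivalent angle can stand in for the full random-incidence average.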

  7. A Bayesian approach to quantify the contribution of animal-food sources to human salmonellosis

    DEFF Research Database (Denmark)

    Hald, Tine; Vose, D.; Wegener, Henrik Caspar

    2004-01-01

    Based on the data from the integrated Danish Salmonella surveillance in 1999, we developed a mathematical model for quantifying the contribution of each of the major animal-food sources to human salmonellosis. The model was set up to calculate the number of domestic and sporadic cases caused...... salmonellosis was also included. The joint posterior distribution was estimated by fitting the model to the reported number of domestic and sporadic cases per Salmonella type in a Bayesian framework using Markov Chain Monte Carlo simulation. The number of domestic and sporadic cases was obtained by subtracting.......8-10.4%) of the cases, respectively. Taken together, imported foods were estimated to account for 11.8% (95% CI: 5.0-19.0%) of the cases. Other food sources considered had only a minor impact, whereas 25% of the cases could not be associated with any source. This approach of quantifying the contribution of the various...

  8. New approach for location of continuously emitting acoustic emission sources by phase-controlled probe arrays

    International Nuclear Information System (INIS)

    Hoeller, P.; Klein, M.; Waschkies, E.; Deuster, G.

    1991-01-01

    Usually burst-like acoustic emission (AE) is localized by triangulation. For continuous AE, e.g. from leakages, this method is not feasible. Therefore a new method for localization of continuous AE has been developed. It is based on a phase-controlled probe array which consists of many single sensor elements. The AE signals received by the different sensor elements are delayed according to their time-of-flight differences from the source to the single elements of the receiver array. By choosing special combinations of time differences between the array elements the directivity pattern of the sensitivity of the array can be changed, e.g. rotated in the plane of a large plate. Thus, the source direction can be determined by one array. Some preliminary experiments with an artificial noise source, positioned on a large steel plate, have been performed and have demonstrated the feasibility of this approach. (orig.)
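
    The delay-and-sum steering idea behind the phase-controlled array can be sketched as follows. The geometry, wave speed, and sampling rate are assumptions for illustration, and integer-sample shifts stand in for the instrument's delay network; the point is that compensating the per-sensor time-of-flight differences maximizes the summed power at the true source bearing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 8 sensors in a line on a plate, far-field continuous noise source.
c, fs, d, n_sensors = 3000.0, 200_000.0, 0.05, 8   # wave speed, sample rate, spacing
true_angle = np.radians(35.0)

# Synthesize the continuous (noise-like) emission with per-sensor delays
n = 4096
src = rng.standard_normal(n + 200)
true_shifts = np.round(d * np.arange(n_sensors)
                       * np.sin(true_angle) / c * fs).astype(int)
sig = np.stack([src[100 - s: 100 - s + n] for s in true_shifts])

# Steer over candidate bearings: undo the assumed delays for each bearing
# and keep the one that maximizes the summed output power.
angles = np.radians(np.arange(-90, 91, 1.0))
power = []
for a in angles:
    shifts = np.round(d * np.arange(n_sensors) * np.sin(a) / c * fs).astype(int)
    aligned = np.sum([np.roll(sig[k], -shifts[k]) for k in range(n_sensors)], axis=0)
    power.append(np.mean(aligned ** 2))
est = np.degrees(angles[int(np.argmax(power))])
print(f"estimated source bearing: {est:.0f} deg")
```

Because the source is continuous noise, triangulation of discrete bursts is impossible, but the coherent gain of the aligned sum still singles out the source direction.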

  9. Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal

    OpenAIRE

    Wronna, M.; Omira, R.; Baptista, M. A.

    2015-01-01

    In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This...

  10. Scenario based approach for multiple source Tsunami Hazard assessment for Sines, Portugal

    OpenAIRE

    M. Wronna; R. Omira; M. A. Baptista

    2015-01-01

    In this paper, we present a scenario-based approach for tsunami hazard assessment for the city and harbour of Sines – Portugal, one of the test-sites of project ASTARTE. Sines holds one of the most important deep-water ports which contains oil-bearing, petrochemical, liquid bulk, coal and container terminals. The port and its industrial infrastructures are facing the ocean southwest towards the main seismogenic sources. This work considers two different seis...

  11. Equivalence, commensurability, value

    DEFF Research Database (Denmark)

    Albertsen, Niels

    2017-01-01

    Deriving value in Capital Marx uses three commensurability arguments (CA1-3). CA1 establishes equivalence in exchange as exchangeability with the same third commodity. CA2 establishes value as common denominator in commodities: embodied abstract labour. CA3 establishes value substance...... as commonality of labour: physiological labour. Tensions between these logics have permeated Marxist interpretations of value. Some have supported value as embodied labour (CA2, 3), others a monetary theory of value and value as ‘pure’ societal abstraction (ultimately CA1). They all are grounded in Marx....

  12. Hydro-mechanically coupled finite-element analysis of the stability of a fractured-rock slope using the equivalent continuum approach: a case study of planned reservoir banks in Blaubeuren, Germany

    Science.gov (United States)

    Song, Jie; Dong, Mei; Koltuk, Serdar; Hu, Hui; Zhang, Luqing; Azzam, Rafig

    2017-12-01

    Construction works associated with the building of reservoirs in mountain areas can damage the stability of adjacent valley slopes. Seepage processes caused by the filling and drawdown operations of reservoirs also affect the stability of the reservoir banks over time. The presented study investigates the stability of a fractured-rock slope subjected to seepage forces in the lower basin of a planned pumped-storage hydropower (PSH) plant in Blaubeuren, Germany. The investigation uses hydro-mechanically coupled finite-element analyses. For this purpose, an equivalent continuum model is developed by using a representative elementary volume (REV) approach. To determine the minimum required REV size, a large number of discrete fracture networks are generated using Monte Carlo simulations. These analyses give a REV size of 28 × 28 m, which is sufficient to represent the equivalent hydraulic and mechanical properties of the investigated fractured-rock mass. The hydro-mechanically coupled analyses performed using this REV size show that the reservoir operations in the examined PSH plant have negligible effect on the adjacent valley slope.

  14. Dust Storm over the Middle East: Retrieval Approach, Source Identification, and Trend Analysis

    Science.gov (United States)

    Moridnejad, A.; Karimi, N.; Ariya, P. A.

    2014-12-01

    The Middle East region has been considered responsible for approximately 25% of the Earth's global emissions of dust particles. By developing the Middle East Dust Index (MEDI) and applying it to 70 dust storms identified on MODIS images during the period between 2001 and 2012, we herein present a new high-resolution mapping of the major atmospheric dust source points in this region. To assist environmental managers and decision-makers in taking proper and prioritized measures, we then categorize the identified sources in terms of intensity, based on indices extracted for the Deep Blue algorithm, and also utilize a frequency-of-occurrence approach to find the sensitive sources. In the next step, by implementing spectral mixture analysis on Landsat TM images (1984 and 2012), a novel desertification map is presented. The aim is to understand how human perturbations and land-use change have influenced the dust storm points in the region. Preliminary results of this study indicate for the first time that ca. 39% of all detected source points are located in this newly anthropogenically desertified area. A large number of low-frequency sources are located within or close to the newly desertified areas. These severely desertified regions require immediate concern at a global scale. During the next 6 months, further research will be performed to confirm these preliminary results.

  15. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study the simplified models of the ATM (Asynchronous Transfer Mode multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed by the different output service schemes.
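
    The kind of simplified model described can be sketched as a slotted-time simulation: several Bernoulli sources feed a multiplexer that serves one cell per slot into a finite buffer. All parameters below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed slotted-time multiplexer: n_src Bernoulli sources, one cell served
# per slot, buffer of buf cells; arrivals beyond the buffer are lost.
n_src, p, buf, slots = 10, 0.08, 20, 50_000
queue, lost, served = 0, 0, 0
for _ in range(slots):
    arrivals = rng.binomial(n_src, p)   # cells arriving this slot
    queue += arrivals
    if queue > buf:                     # buffer overflow -> cell loss
        lost += queue - buf
        queue = buf
    if queue > 0:                       # serve one cell per slot
        queue -= 1
        served += 1
print(f"utilization ≈ {served / slots:.3f}, "
      f"loss ratio ≈ {lost / (n_src * p * slots):.5f}")
```

With an offered load of n_src·p = 0.8 cells per slot, the utilization settles near 0.8 and losses are rare; performance measures such as loss ratio and mean queue length can be read off the same loop for different output service schemes.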

  16. Extended equivalent dipole model for radiated emissions

    OpenAIRE

    Obiekezie, Chijioke S.

    2016-01-01

    This work is on the characterisation of radiated fields from electronic devices. An equivalent dipole approach is used. Previous work showed that this was an effective approach for single layer printed circuit boards where an infinite ground plane can be assumed. In this work, this approach is extended for the characterisation of more complex circuit boards or electronic systems. For complex electronic radiators with finite ground planes, the main challenge is characterising field diffract...

  17. Waste Determination Equivalency - 12172

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Rebecca D. [Savannah River Remediation (United States)

    2012-07-01

    by the Secretary of Energy in January of 2006 based on proposed processing techniques with the expectation that it could be revised as new processing capabilities became viable. Once signed, however, it became evident that any changes would require lengthy review and another determination signed by the Secretary of Energy. With the maturation of additional salt removal technologies and the extension of the SWPF start-up date, it becomes necessary to define 'equivalency' to the processes laid out in the original determination. For the purposes of SRS, any waste not processed through Interim Salt Processing must be processed through SWPF or an equivalent process, and therefore a clear statement of the requirements for a process to be equivalent to SWPF becomes necessary. (authors)

  18. Self-phase modulation enabled, wavelength-tunable ultrafast fiber laser sources: an energy scalable approach.

    Science.gov (United States)

    Liu, Wei; Li, Chen; Zhang, Zhigang; Kärtner, Franz X; Chang, Guoqing

    2016-07-11

    We propose and demonstrate a new approach to implement a wavelength-tunable ultrafast fiber laser source suitable for multiphoton microscopy. We employ fiber-optic nonlinearities to broaden a narrowband optical spectrum generated by an Yb-fiber laser system and then use optical bandpass filters to select the leftmost or rightmost spectral lobes from the broadened spectrum. Detailed numerical modeling shows that self-phase modulation dominates the spectral broadening, self-steepening tends to blue shift the broadened spectrum, and stimulated Raman scattering is minimal. We also find that optical wave breaking caused by fiber dispersion slows down the shift of the leftmost/rightmost spectral lobes and therefore limits the wavelength tuning range of the filtered spectra. We show both numerically and experimentally that shortening the fiber used for spectral broadening while increasing the input pulse energy can overcome this dispersion-induced limitation; as a result, the filtered spectral lobes have higher power, constituting a powerful and practical approach for energy scaling the resulting femtosecond sources. We use two commercially available photonic crystal fibers to verify the simulation results. More specifically, use of 20-mm fiber NL-1050-ZERO-2 enables us to implement an Yb-fiber laser based ultrafast source, delivering femtosecond (70-120 fs) pulses tunable from 825 nm to 1210 nm with >1 nJ pulse energy.
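
    The role of self-phase modulation in this scheme can be sketched with a minimal FFT model. Dispersion, self-steepening, and Raman scattering are deliberately dropped, and every parameter below is an assumption, so this illustrates only SPM-dominated broadening and the lobe-filtering step, not the paper's full simulation.

```python
import numpy as np

# Assumed Gaussian input pulse and peak nonlinear phase (gamma * P0 * L)
n, T = 4096, 100e-15
t = np.linspace(-2e-12, 2e-12, n)
A0 = np.exp(-t**2 / (2 * T**2))                    # input field envelope
phi_nl = 10.0
A = A0 * np.exp(1j * phi_nl * np.abs(A0) ** 2)     # pure SPM phase shift

def rms_width(x, w):
    """RMS width of a distribution w(x)."""
    w = w / w.sum()
    m = (x * w).sum()
    return np.sqrt(((x - m) ** 2 * w).sum())

f = np.fft.fftshift(np.fft.fftfreq(n, t[1] - t[0]))
S0 = np.abs(np.fft.fftshift(np.fft.fft(A0))) ** 2  # input spectrum
S1 = np.abs(np.fft.fftshift(np.fft.fft(A))) ** 2   # SPM-broadened spectrum
print(f"spectral RMS width grows x{rms_width(f, S1) / rms_width(f, S0):.1f}")

# Bandpass selection of the shifted (here: rightmost) spectral lobe,
# mimicking the filters used to pick the tunable output wavelength.
cut = 2 * rms_width(f, S0)
S_lobe = np.where(f > cut, S1, 0.0)
```

Increasing phi_nl (i.e., more pulse energy or a longer fiber) pushes the outermost lobes further out, which is the knob behind the wavelength tuning described above.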

  19. Radiological equivalent of chemical pollutants

    International Nuclear Information System (INIS)

    Medina, V.O.

    1982-01-01

    The development of the peaceful uses of nuclear energy has prompted continued effort toward public safety through radiation health protection measures and nuclear management practices. However, comparable concern has not been focused on the operation of chemical and petrochemical industries, or on other industrial processes brought about by technological advancement. This article presents a comparison of the risks of radiation and chemicals. The methods used for comparing the risks of late effects of radiation and chemicals are considered at three levels: (a) as a frame of reference to give an impression of the resolving power of biological tests; (b) as methods to quantify risks; (c) as instruments for an epidemiological survey of human populations. There are marked dissimilarities between chemicals and radiation, and efforts to interpret chemical activity in radiological terms may not succeed. Applicability of the concept of rad equivalence has many restrictions and, as pointed out, this approach is not an established one. (RTD)

  20. Assessment of statistical methods used in library-based approaches to microbial source tracking.

    Science.gov (United States)

    Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D

    2003-12-01

    Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
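
    The threshold idea examined in the study, excluding isolates whose best match is too weak rather than force-assigning them, can be sketched with a toy library. The binary "fingerprints," similarity measure, and threshold value below are invented for illustration and are not the study's data or tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy fingerprint library: binary band-presence patterns for isolates
# from two hypothetical host sources (assumed data, not the study's).
def make_isolates(band_probs, n):
    return (rng.random((n, 12)) < band_probs).astype(float)

lib = {"human": make_isolates(np.r_[np.full(6, .9), np.full(6, .1)], 30),
       "cow":   make_isolates(np.r_[np.full(6, .1), np.full(6, .9)], 30)}

def classify(isolate, library, threshold=0.6):
    """Average-similarity matching with an exclusion threshold: isolates
    whose best average similarity falls below the threshold are reported
    as 'unclassified' instead of being force-assigned to a source."""
    scores = {src: np.mean([np.mean(isolate == fp) for fp in fps])
              for src, fps in library.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unclassified"

unknown = make_isolates(np.r_[np.full(6, .9), np.full(6, .1)], 1)[0]
print(classify(unknown, lib))
```

Raising the threshold trades more "unclassified" isolates for fewer false positives, which is exactly the trade-off the study found did not consistently pay off across methods.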

  1. Molecular Approaches to Understanding Transmission and Source Attribution in Nontyphoidal Salmonella and Their Application in Africa.

    Science.gov (United States)

    Mather, Alison E; Vaughan, Timothy G; French, Nigel P

    2015-11-01

    Nontyphoidal Salmonella (NTS) is a frequent cause of diarrhea around the world, yet in many African countries it is more commonly associated with invasive bacterial disease. Various source attribution models have been developed that utilize microbial subtyping data to assign cases of human NTS infection to different animal populations and foods of animal origin. Advances in molecular microbial subtyping approaches, in particular whole-genome sequencing, provide higher resolution data with which to investigate these sources. In this review, we provide updates on the source attribution models developed for Salmonella, and examine the application of whole-genome sequencing data combined with evolutionary modeling to investigate the putative sources and transmission pathways of NTS, with a focus on the epidemiology of NTS in Africa. This is essential information to decide where, what, and how control strategies might be applied most effectively. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.

    Science.gov (United States)

    Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle

    2011-05-01

    We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem, when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers a better robustness with respect to Gaussian noise of unknown spatial coherence and modeling errors. As a result we reduced the penalizing effects of both the background cerebral activity that can be seen as a Gaussian and spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
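
    The subspace principle behind MUSIC-style methods can be sketched in a simplified narrowband setting. A uniform linear array stands in for the EEG/MEG leadfield, and this plain one-source MUSIC scan illustrates only the noise-subspace projection metric, not the higher-order statistics or extended-source parameterization of 2q-ExSo-MUSIC.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed toy setup: 8-element half-wavelength array, one source at 40 deg.
m, snapshots, true_doa = 8, 400, 40.0

def steering(deg):
    """Array response for a plane wave from the given bearing."""
    k = np.pi * np.sin(np.radians(deg))
    return np.exp(1j * k * np.arange(m))

s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(true_doa), s)                      # source contribution
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / snapshots                           # sample covariance
w, V = np.linalg.eigh(R)                                 # ascending eigenvalues
En = V[:, :-1]                                           # noise subspace (1 source)

# MUSIC pseudo-spectrum: peaks where the steering vector is orthogonal
# to the noise subspace.
grid = np.arange(-90.0, 91.0, 1.0)
spec = [1.0 / np.linalg.norm(En.conj().T @ steering(g)) ** 2 for g in grid]
print(f"MUSIC peak at {grid[int(np.argmax(spec))]:.0f} deg")
```

In the EEG/MEG case the steering vector is replaced by the forward-model leadfield of a candidate (possibly extended) source, but the orthogonality test against the noise subspace is the same.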

  3. A Heuristic Approach to Distributed Generation Source Allocation for Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    M. Sharma

    2010-12-01

    Full Text Available The recent trends in electrical power distribution system operation and management are aimed at improving system conditions in order to render good service to the customer. The reforms in the distribution sector have given major scope for the employment of distributed generation (DG) resources, which will boost system performance. This paper proposes a heuristic technique for the allocation of a distributed generation source in a distribution system. The allocation is determined based on the overall improvement in network performance parameters such as reduction in system losses, improvement in voltage stability, and improvement in voltage profile. The proposed Network Performance Enhancement Index (NPEI), along with the heuristic rules, facilitates determination of the feasible location and corresponding capacity of the DG source. The developed approach is tested on different test systems to ascertain its effectiveness.

  4. Alternative sources of power generation, incentives and regulatory mandates: a theoretical approach to the Colombian case

    International Nuclear Information System (INIS)

    Zapata, Carlos M; Zuluaga Monica M; Dyner, Isaac

    2005-01-01

    Alternative energy generation sources are becoming relevant in several countries worldwide because of technological improvement and environmental concerns. In this paper, the most common problems of renewable energy sources are reviewed, different incentives and regulatory mandates from several countries are presented, and a first theoretical approach to a renewable-energy incentive system in Colombia is discussed. The paper is based fundamentally on theoretical aspects and on international experience with renewable-energy incentives intended to accelerate their diffusion; these features are analyzed with a view toward a special incentive system for renewable energies in Colombia. As a conclusion, indirect incentives such as low interest rates and tax exemptions could be applied in Colombia to support electricity production in generating organizations.

  5. Quantification of the equivalence principle

    International Nuclear Information System (INIS)

    Epstein, K.J.

    1978-01-01

    Quantitative relationships illustrate Einstein's equivalence principle, relating it to Newton's ''fictitious'' forces arising from the use of noninertial frames, and to the form of the relativistic time dilatation in local Lorentz frames. The equivalence principle can be interpreted as the equivalence of general covariance to local Lorentz covariance, in a manner which is characteristic of Riemannian and pseudo-Riemannian geometries
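
    The quantitative relationship alluded to can be illustrated by a standard textbook example (not drawn from the paper itself): to first order, a clock at height $h$ in a uniform gravitational field $g$ gains time at the same rate as the leading clock in a frame undergoing uniform acceleration $a = g$,

```latex
\frac{\Delta\tau_h}{\Delta\tau_0} \;\approx\; 1 + \frac{gh}{c^{2}}
\quad\Longleftrightarrow\quad
\frac{\Delta\tau_{\text{front}}}{\Delta\tau_{\text{rear}}} \;\approx\; 1 + \frac{ah}{c^{2}},
\qquad a = g,
```

which expresses the equivalence of the gravitational potential difference $gh$ and the "fictitious" potential of the accelerated (noninertial) frame, in the sense of the local Lorentz frames mentioned in the abstract.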

  6. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...

  7. A transdisciplinary approach to the initial validation of a single cell protein as an alternative protein source for use in aquafeeds

    Directory of Open Access Journals (Sweden)

    Michael Tlusty

    2017-04-01

    Full Text Available The human population is growing and, globally, we must meet the challenge of increased protein needs required to feed this population. Single cell proteins (SCP), when coupled to aquaculture production, offer a means to ensure future protein needs can be met without direct competition with food for people. To demonstrate a given type of SCP has potential as a protein source for use in aquaculture feed, a number of steps need to be validated including demonstrating that the SCP is accepted by the species in question, leads to equivalent survival and growth, does not result in illness or other maladies, is palatable to the consumer, is cost effective to produce and can easily be incorporated into diets using existing technology. Here we examine white shrimp (Litopenaeus vannamei) growth and consumer taste preference, smallmouth grunt (Haemulon chrysargyreum) growth, survival, health and gut microbiota, and Atlantic salmon (Salmo salar) digestibility when fed diets that substitute the bacterium Methylobacterium extorquens at a level of 30% (grunts), 100% (shrimp), or 55% (salmon) of the fishmeal in a compound feed. In each of these tests, animals performed equivalently when fed diets containing M. extorquens as when fed a standard aquaculture diet. This transdisciplinary approach is a first validation of this bacterium as a potential SCP protein substitute in aquafeeds. Given the ease to produce this SCP through an aerobic fermentation process, the broad applicability for use in aquaculture indicates the promise of M. extorquens in leading toward greater food security in the future.

  8. Distributed source term analysis, a new approach to nuclear material inventory verification

    CERN Document Server

    Beddingfield, D H

    2002-01-01

    The Distributed Source-Term Analysis (DSTA) technique is a new approach to measuring in-process material holdup that is a significant departure from traditional hold-up measurement methodology. The DSTA method is a means of determining the mass of nuclear material within a large, diffuse volume using passive neutron counting. The DSTA method is a more efficient approach than traditional methods of holdup measurement and inventory verification. The time spent in performing DSTA measurement and analysis is a fraction of that required by traditional techniques. The error ascribed to a DSTA survey result is generally less than that from traditional methods. Also, the negative bias ascribed to gamma-ray methods is greatly diminished because the DSTA method uses neutrons which are more penetrating than gamma-rays.

  9. Distributed source term analysis, a new approach to nuclear material inventory verification

    International Nuclear Information System (INIS)

    Beddingfield, D.H.; Menlove, H.O.

    2002-01-01

    The Distributed Source-Term Analysis (DSTA) technique is a new approach to measuring in-process material holdup that is a significant departure from traditional hold-up measurement methodology. The DSTA method is a means of determining the mass of nuclear material within a large, diffuse volume using passive neutron counting. The DSTA method is a more efficient approach than traditional methods of holdup measurement and inventory verification. The time spent in performing DSTA measurement and analysis is a fraction of that required by traditional techniques. The error ascribed to a DSTA survey result is generally less than that from traditional methods. Also, the negative bias ascribed to γ-ray methods is greatly diminished because the DSTA method uses neutrons which are more penetrating than γ-rays

  10. Meeting water needs for sustainable development: an overview of approaches, measures and data sources

    Science.gov (United States)

    Lissner, Tabea; Reusser, Dominik E.; Sullivan, Caroline A.; Kropp, Jürgen P.

    2013-04-01

    An essential part of a global transition towards sustainability is the Millennium Development Goals (MDG), providing a blueprint of goals to meet human needs. Water is an essential resource in itself, but also a vital factor of production for food, energy and other industrial products. Access to sufficient water has only recently been recognized as a human right. One central MDG is halving the population without access to safe drinking water and sanitation. To adequately assess the state of development and the potential for a transition towards sustainability, consistent and meaningful measures of water availability and adequate access are thus fundamental. Much work has been done to identify thresholds and definitions to measure water scarcity. This includes some work on defining basic water needs of different sectors. A range of data and approaches has been made available from a variety of sources, but all of these approaches differ in their underlying assumptions, the nature of the data used, and consequently in the final results. We review and compare approaches, methods and data sources on human water use and human water needs. This data review enables identifying levels of consumption in different countries and different sectors. Further comparison is made between actual water needs (based on human and ecological requirements), and recognised levels of water abstraction. The results of our review highlight the differences between different accounts of water use and needs, and reflect the importance of standardised approaches to data definitions and measurements, making studies more comparable across space and time. The comparison of different use and allocation patterns in countries enables levels of water use to be identified which allow for an adequate level of human wellbeing to be maintained within sustainable water abstraction limits. 
Recommendations are provided on how data can be defined more clearly to make comparisons of water use more meaningful and

  11. System equivalent model mixing

    Science.gov (United States)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency-based models, whether of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it emphasizes the practicality of the method.

  12. Identifying diffused nitrate sources in a stream in an agricultural field using a dual isotopic approach

    International Nuclear Information System (INIS)

    Ding, Jingtao; Xi, Beidou; Gao, Rutai; He, Liansheng; Liu, Hongliang; Dai, Xuanli; Yu, Yijun

    2014-01-01

    Nitrate (NO3−) pollution is a severe problem in aquatic systems in the Taihu Lake Basin in China. A dual isotope approach (δ15N-NO3− and δ18O-NO3−) was applied to identify diffused NO3− inputs in a stream in an agricultural field in the basin in 2013. The site-specific isotopic characteristics of five NO3− sources (atmospheric deposition, AD; NO3− derived from soil organic matter nitrification, NS; NO3− derived from chemical fertilizer nitrification, NF; groundwater, GW; and manure and sewage, M and S) were identified. NO3− concentrations in the stream during the rainy season [mean ± standard deviation (SD) = 2.5 ± 0.4 mg/L] were lower than those during the dry season (mean ± SD = 4.0 ± 0.5 mg/L), whereas the δ18O-NO3− values during the rainy season (mean ± SD = +12.3 ± 3.6‰) were higher than those during the dry season (mean ± SD = +0.9 ± 1.9‰). Both chemical and isotopic characteristics indicated that mixing with atmospheric NO3− resulted in the high δ18O values during the rainy season, whereas NS and M and S were the dominant NO3− sources during the dry season. A Bayesian model was used to determine the contribution of each NO3− source to total stream NO3−. Results showed that reduced-N nitrification in soil zones (including soil organic matter and fertilizer) was the main NO3− source throughout the year. M and S contributed more NO3− during the dry season (22.4%) than during the rainy season (17.8%). AD generated substantial amounts of NO3− in May (18.4%), June (29.8%), and July (24.5%). With this assessment of the temporal variation of diffused NO3− sources in an agricultural field, improved agricultural management practices can be implemented to protect the water resource and avoid further water quality deterioration in the Taihu Lake Basin. - Highlights: • The isotopic characteristics of potential NO3− sources were identified. • Mixing with atmospheric NO3− resulted
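
    The apportionment step can be illustrated with a deterministic mass-balance sketch (the study uses a Bayesian mixing model; this linear version shows the core idea). The source signatures and stream values below are illustrative assumptions, not the site-specific data from the paper, and only three of the five sources are kept for a solvable example.

    ```python
    import numpy as np

    # Two tracers (δ15N, δ18O) and three sources: the stream mixture is a
    # fraction-weighted average of the source signatures.
    # Columns: NS (soil nitrification), M and S (manure/sewage), AD (atmospheric).
    S = np.array([[ 4.0, 10.0,  0.0],    # δ15N of each source (‰) -- assumed
                  [-2.0,  5.0, 60.0]])   # δ18O of each source (‰) -- assumed
    mix = np.array([6.0, 8.0])           # observed stream (δ15N, δ18O) -- assumed

    # Solve S f = mix together with the closure constraint sum(f) = 1.
    A = np.vstack([S, np.ones(3)])
    b = np.append(mix, 1.0)
    f = np.linalg.solve(A, b)            # exact here: 3 equations, 3 sources
    print(dict(zip(["NS", "MS", "AD"], np.round(f, 3))))
    ```

    With more sources than tracers the system becomes under-determined, which is precisely why the paper turns to a Bayesian model that returns distributions over the contributions instead of a single solution.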

  13. Source-water susceptibility assessment in Texas—Approach and methodology

    Science.gov (United States)

    Ulery, Randy L.; Meyer, John E.; Andren, Robert W.; Newson, Jeremy K.

    2011-01-01

    Public water systems provide potable water for the public's use. The Safe Drinking Water Act amendments of 1996 required States to prepare a source-water susceptibility assessment (SWSA) for each public water system (PWS). States were required to determine the source of water for each PWS, the origin of any contaminant of concern (COC) monitored or to be monitored, and the susceptibility of the public water system to COC exposure, to protect public water supplies from contamination. In Texas, the Texas Commission on Environmental Quality (TCEQ) was responsible for preparing SWSAs for the more than 6,000 public water systems, representing more than 18,000 surface-water intakes or groundwater wells. The U.S. Geological Survey (USGS) worked in cooperation with TCEQ to develop the Source Water Assessment Program (SWAP) approach and methodology. Texas' SWAP meets all requirements of the Safe Drinking Water Act and ultimately provides the TCEQ with a comprehensive tool for protection of public water systems from contamination by up to 247 individual COCs. TCEQ staff identified both the list of contaminants to be assessed and contaminant threshold values (THR) to be applied. COCs were chosen because they were regulated contaminants, were expected to become regulated contaminants in the near future, or were unregulated but thought to represent long-term health concerns. THRs were based on maximum contaminant levels from U.S. Environmental Protection Agency (EPA)'s National Primary Drinking Water Regulations. For reporting purposes, COCs were grouped into seven contaminant groups: inorganic compounds, volatile organic compounds, synthetic organic compounds, radiochemicals, disinfection byproducts, microbial organisms, and physical properties. 
Expanding on the TCEQ's definition of susceptibility, subject-matter expert working groups formulated the SWSA approach based on assumptions that natural processes and human activities contribute COCs in quantities that vary in space

  14. Improved radiological/nuclear source localization in variable NORM background: An MLEM approach with segmentation data

    Energy Technology Data Exchange (ETDEWEB)

    Penny, Robert D., E-mail: robert.d.penny@leidos.com [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan [Leidos Inc., 10260 Campus Point Road, San Diego, CA (United States); Labov, Simon; Nelson, Karl; Seilhan, Brandon [Lawrence Livermore National Laboratory, Livermore, CA (United States); Valentine, John D. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2015-06-01

    A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed as there are typically fewer measurements than unknowns. In addition the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
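
    The core MLEM iteration the abstract refers to can be sketched on a toy Poisson inverse problem. The forward matrix, dimensions, and source placement below are illustrative assumptions, not the paper's airborne detector model.

    ```python
    import numpy as np

    # Toy MLEM reconstruction: y ~ Poisson(A x) with fewer measurements than
    # unknowns, as in the under-determined problem described in the abstract.
    rng = np.random.default_rng(0)
    n_pix, n_meas = 8, 5
    A = rng.uniform(0.1, 1.0, size=(n_meas, n_pix))   # assumed forward model
    x_true = np.zeros(n_pix)
    x_true[3] = 50.0                                  # one hot source pixel
    y = rng.poisson(A @ x_true)                       # Poisson-noisy counts

    x = np.ones(n_pix)                                # positive initial guess
    sens = A.sum(axis=0)                              # sensitivity A^T 1
    for _ in range(200):
        ratio = y / np.maximum(A @ x, 1e-12)          # y / (A x)
        x *= (A.T @ ratio) / sens                     # multiplicative MLEM update

    print(int(np.argmax(x)))                          # brightest reconstructed pixel
    ```

    The multiplicative update keeps the activity estimate non-negative automatically, which is one reason MLEM suits count data; the paper's segmentation step would additionally tie groups of background pixels to a common value.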

  15. Double radio sources and the new approach to cosmical plasma physics

    International Nuclear Information System (INIS)

    Alfven, H.

    1978-01-01

    The methodology of cosmic plasma physics is discussed. It is hazardous to try to describe plasma phenomena by theories which have not been carefully tested experimentally. One present approach is to rely on laboratory measurements and in situ measurements in the magnetosphere and heliosphere, and to approach galactic phenomena by scaling up the well-known phenomena to galactic dimensions. A summary is given of laboratory investigations of electric double layers, a phenomenon which is known to be very important in laboratory discharges. A summary is also given of the in situ measurements in the magnetosphere by which the importance of electric double layers in the Earth's surroundings is established. The scaling laws between laboratory and magnetospheric double layers are studied. The successful scaling between laboratory and magnetospheric phenomena encourages an extrapolation to heliospheric phenomena. A further extrapolation to galactic phenomena leads to a theory of double radio sources. In analogy with the Sun which, acting as a homopolar inductor, energizes the heliospheric current system, a rotating magnetized galaxy should produce a similar current system. From analogy with laboratory and magnetospheric current systems it is argued that the galactic current might produce double layers where a large energy dissipation takes place. This leads to a theory of the double radio sources which, within the necessary wide limits of uncertainty, is quantitatively reconcilable with observations. (Auth.)

  16. An Open Source-based Approach to the Development of Research Reactor Simulator

    International Nuclear Information System (INIS)

    Joo, Sung Moon; Suh, Yong Suk; Park, Cheol Park

    2016-01-01

    In reactor design, operator training, safety analysis, or research using a reactor, it is essential to simulate time-dependent reactor behaviors such as neutron population, fluid flow, and heat transfer. Furthermore, in order to use the simulator to train and educate operators, a mockup of the reactor user interface is required. There are commercial software tools available for reactor simulator development. However, it is costly to use those commercial software tools. Especially for research reactors, it is difficult to justify the high cost, as regulations on research reactor simulators are not as strict as those for commercial Nuclear Power Plants (NPPs). An open source-based simulator for a research reactor is configured as a distributed control system based on the EPICS framework. To demonstrate the use of the simulation framework proposed in this work, we consider a toy example. This example approximates a 1-second impulse reactivity insertion in a reactor, which represents the instantaneous removal and reinsertion of a control rod. The change in reactivity results in a slightly delayed change in power and corresponding increases in temperatures throughout the system. We proposed an approach for developing a research reactor simulator using open source software tools, and showed preliminary results. The results demonstrate that the approach presented in this work can provide an economical and viable way of developing research reactor simulators.
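
    The toy example in the abstract can be sketched with one-group point kinetics and explicit Euler integration. The kinetics constants below are illustrative assumptions, not those of any specific research reactor.

    ```python
    # One delayed-neutron-group point kinetics with a 1 s reactivity impulse:
    #   dn/dt = ((rho - beta)/Lam) n + lam C
    #   dC/dt = (beta/Lam) n - lam C
    beta, Lam, lam = 0.0065, 1e-4, 0.08   # delayed fraction, generation time (s), decay (1/s)
    rho_0 = 0.001                         # assumed reactivity inserted during the impulse
    dt, t_end = 1e-4, 3.0

    n = 1.0                               # relative power
    C = beta / (Lam * lam)                # precursors at equilibrium (dn/dt = 0)
    history = []
    for k in range(int(t_end / dt)):
        t = k * dt
        rho = rho_0 if 1.0 <= t < 2.0 else 0.0   # rod out at t=1 s, back in at t=2 s
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        n += dn * dt                      # explicit Euler step
        C += dC * dt
        history.append(n)

    print(round(max(history), 2))         # peak relative power during the impulse
    ```

    The power shows the expected prompt jump when the rod is withdrawn, a slow delayed-neutron rise during the impulse, and a prompt drop when the rod is reinserted.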

  17. A methodological approach to a realistic evaluation of skin absorbed doses during manipulation of radioactive sources by means of GAMOS Monte Carlo simulations

    Science.gov (United States)

    Italiano, Antonio; Amato, Ernesto; Auditore, Lucrezia; Baldari, Sergio

    2018-05-01

    The accurate evaluation of the radiation burden associated with radiation absorbed doses to the skin of the extremities during the manipulation of radioactive sources is a critical issue in operational radiological protection, deserving the most accurate calculation approaches available. Monte Carlo simulation of radiation transport and interaction is the gold standard for the calculation of dose distributions in complex geometries and in the presence of extended spectra of multi-radiation sources. We propose the use of Monte Carlo simulations in GAMOS in order to accurately estimate the dose to the extremities during manipulation of radioactive sources. We report the results of these simulations for 90Y, 131I, 18F and 111In nuclides in water solutions enclosed in glass or plastic receptacles, such as vials or syringes. Skin equivalent doses at 70 μm of depth and dose-depth profiles are reported for different configurations, highlighting the importance of adopting a realistic geometrical configuration in order to obtain accurate dosimetric estimates. Owing to the ease of implementation of GAMOS simulations, case-specific geometries and nuclides can be adopted, and results can be obtained in less than about ten minutes of computation time with a common workstation.

  18. Security analysis of an untrusted source for quantum key distribution: passive approach

    International Nuclear Information System (INIS)

    Zhao Yi; Qi Bing; Lo, H-K; Qian Li

    2010-01-01

    We present a passive approach to the security analysis of quantum key distribution (QKD) with an untrusted source. A complete proof of its unconditional security is also presented. This scheme has significant advantages in real-life implementations as it does not require fast optical switching or a quantum random number generator. The essential idea is to use a beam splitter to split each input pulse. We show that we can characterize the source using a cross-estimate technique without active routing of each pulse. We have derived analytical expressions for the passive estimation scheme. Moreover, using simulations, we have considered four real-life imperfections: additional loss introduced by the 'plug and play' structure, inefficiency of the intensity monitor, noise of the intensity monitor, and statistical fluctuation introduced by the finite data size. Our simulation results show that the passive estimate of an untrusted source remains useful in practice, despite these four imperfections. Also, we have performed preliminary experiments, confirming the utility of our proposal in real-life applications. Our proposal makes it possible to implement the 'plug and play' QKD with the security guaranteed, while keeping the implementation practical.

  19. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty of the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serial concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
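
    The SF-ISF binning idea can be sketched with a (7,4) Hamming code as the linear channel code, an illustrative stand-in since the paper targets turbo codes. The encoder transmits only the 3-bit syndrome of the 7-bit source word; the ISF maps it back to some word in the same coset, and the correlated side information fixes the rest.

    ```python
    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])      # parity-check matrix = syndrome former

    x = np.array([1, 0, 1, 1, 0, 0, 1])        # source word at the encoder
    s = H @ x % 2                              # transmitted syndrome: 3 bits, not 7

    x_hat = np.zeros(7, dtype=int)             # ISF: columns 0, 1, 3 of H form I_3,
    x_hat[[0, 1, 3]] = s                       # so this word has syndrome s

    y = x.copy()
    y[5] ^= 1                                  # side information: x with one bit flip
    z = (y + x_hat) % 2                        # one flip away from a codeword
    synd = H @ z % 2                           # standard Hamming decoding of z:
    p = synd[0] + 2 * synd[1] + 4 * synd[2]    # syndrome encodes the flip position
    if p:
        z[p - 1] ^= 1                          # correct the single error
    x_rec = (z + x_hat) % 2                    # shift back into x's coset
    print(np.array_equal(x_rec, x))            # → True: lossless recovery
    ```

    The channel decoder is used unmodified, exactly the property the SF-ISF construction is designed to preserve for more powerful codes.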

  20. Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology

    Science.gov (United States)

    Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob

    2014-01-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614

  1. Soluble salt sources in medieval porous limestone sculptures: A multi-isotope (N, O, S) approach

    Energy Technology Data Exchange (ETDEWEB)

    Kloppmann, W., E-mail: w.kloppmann@brgm.fr [BRGM, Direction des Laboratoires, Unité Isotopes, BP 6009, F-45060 Orléans cedex 2 (France); Rolland, O., E-mail: olivierrolland@wanadoo.fr [Montlouis-sur-Loire (France); Proust, E.; Montech, A.T. [BRGM, Direction des Laboratoires, Unité Isotopes, BP 6009, F-45060 Orléans cedex 2 (France)

    2014-02-01

    The sources and mechanisms of soluble salt uptake by porous limestone and the associated degradation patterns were investigated for the life-sized 15th century “entombment of Christ” sculpture group located in Pont-à-Mousson, France, using a multi-isotope approach on sulphates (δ{sup 34}S and δ{sup 18}O) and nitrates (δ{sup 15}N and δ{sup 18}O). The sculpture group, near the border of the Moselle River, is within the potential reach of capillary rise from the alluvial aquifer. Chemical analyses show a vertical zonation of soluble salts with a predominance of sulphates in the lower parts of the statues where crumbling and blistering prevail, and higher concentrations of nitrates and chloride in the high parts affected by powdering and efflorescence. Isotope fingerprints of sulphates suggest a triple origin: (1) the lower parts are dominated by capillary rise of dissolved sulphate from the Moselle water with characteristic Keuper evaporite signatures that progressively decreases with height; (2) in the higher parts affected by powdering the impact of atmospheric sulphur becomes detectable; and (3) locally, plaster reparations impact the neighbouring limestone through dissolution and re-precipitation of gypsum. Nitrogen and oxygen isotopes suggest an organic origin of nitrates in all samples. N isotope signatures are compatible with those measured in the alluvial aquifer of the Moselle River further downstream. This indicates contamination by sewage or organic fertilisers. Significant isotopic contrasts are observed between the different degradation features depending on the height and suggest historical changes of nitrate sources. - Highlights: • We use S, N and O isotopes to distinguish salt sources in limestone sculptures. • Vertical zonation of degradation is linked to capillary rise and air pollution. • Sulphate salts in lower parts are derived from river/groundwater. • Sulphate salts in higher parts show signature of air pollution. • Nitrates

  2. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    Energy Technology Data Exchange (ETDEWEB)

    AlRashidi, M.R., E-mail: malrash2002@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait); AlHajri, M.F., E-mail: mfalhajri@yahoo.com [Department of Electrical Engineering, College of Technological Studies, Public Authority for Applied Education and Training (PAAET) (Kuwait)

    2011-10-15

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.
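
    The mixed continuous/discrete PSO idea can be sketched on a toy problem. The quadratic loss surface below is a made-up stand-in for the radial power-flow losses (not the 69-bus system), and bound clamping stands in for the paper's constraint handling.

    ```python
    import random

    # One PSO run over (bus location, DG size): the bus is a discrete variable
    # handled by rounding, the size is continuous; bounds are enforced by clamping.
    random.seed(1)

    def losses(bus, size):                     # assumed loss surface: optimum at bus 7, 2.0 MW
        return (bus - 7) ** 2 + 4.0 * (size - 2.0) ** 2 + 10.0

    def cost(pos):
        return losses(round(pos[0]), pos[1])   # round the bus coordinate

    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights
    swarm = [{'pos': [random.uniform(1, 10), random.uniform(0.1, 5.0)],
              'vel': [0.0, 0.0]} for _ in range(20)]
    for p in swarm:
        p['best'] = p['pos'][:]
    gbest = min((p['best'][:] for p in swarm), key=cost)

    for _ in range(100):
        for p in swarm:
            for d in range(2):
                r1, r2 = random.random(), random.random()
                p['vel'][d] = (w * p['vel'][d]
                               + c1 * r1 * (p['best'][d] - p['pos'][d])
                               + c2 * r2 * (gbest[d] - p['pos'][d]))
                p['pos'][d] += p['vel'][d]
            p['pos'][0] = min(max(p['pos'][0], 1.0), 10.0)   # bus bounds
            p['pos'][1] = min(max(p['pos'][1], 0.1), 5.0)    # size bounds
            if cost(p['pos']) < cost(p['best']):
                p['best'] = p['pos'][:]
        gbest = min((p['best'][:] for p in swarm), key=cost)

    print(round(gbest[0]), round(gbest[1], 2))  # best (bus, size) found
    ```

    In the paper the cost evaluation is a radial power-flow solve, which enforces the equality constraints by construction; here it is simply a function call.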

  3. Optimal planning of multiple distributed generation sources in distribution networks: A new approach

    International Nuclear Information System (INIS)

    AlRashidi, M.R.; AlHajri, M.F.

    2011-01-01

    Highlights: → A new hybrid PSO for optimal DGs placement and sizing. → Statistical analysis to fine tune PSO parameters. → Novel constraint handling mechanism to handle different constraints types. - Abstract: An improved particle swarm optimization algorithm (PSO) is presented for optimal planning of multiple distributed generation sources (DG). This problem can be divided into two sub-problems: the DG optimal size (continuous optimization) and location (discrete optimization) to minimize real power losses. The proposed approach addresses the two sub-problems simultaneously using an enhanced PSO algorithm capable of handling multiple DG planning in a single run. A design of experiment is used to fine tune the proposed approach via proper analysis of PSO parameters interaction. The proposed algorithm treats the problem constraints differently by adopting a radial power flow algorithm to satisfy the equality constraints, i.e. power flows in distribution networks, while the inequality constraints are handled by making use of some of the PSO features. The proposed algorithm was tested on the practical 69-bus power distribution system. Different test cases were considered to validate the proposed approach consistency in detecting optimal or near optimal solution. Results are compared with those of Sequential Quadratic Programming.

  4. Logically automorphically equivalent knowledge bases

    OpenAIRE

    Aladova, Elena; Plotkin, Tatjana

    2017-01-01

    Knowledge base theory provides an important example of a field where applications of universal algebra and algebraic logic look very natural, and their interaction with practical problems arising in computer science might be very productive. In this paper we study the equivalence problem for knowledge bases. Our interest is to find out how informational equivalence is related to the logical description of knowledge. Studying various equivalences of knowledge bases allows us to compare d...

  5. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  6. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    Science.gov (United States)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the joint a posteriori probability density function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to
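
    The fitting step can be sketched on synthetic data: a Brune-type log displacement spectrum with attenuation, ln u(f) = ln M0 − ln(1 + (f/fc)^γ) − π f t*, minimized under an L2 misfit with basin-hopping. All parameter values here are illustrative assumptions, and a fixed t* stands in for the full Q treatment.

    ```python
    import numpy as np
    from scipy.optimize import basinhopping

    rng = np.random.default_rng(0)
    f = np.logspace(-1, 1.5, 80)                         # 0.1 to ~31.6 Hz
    t_star = 0.02                                        # assumed fixed attenuation (s)

    def model(theta, f):
        log_M0, log_fc, log_gamma = theta                # log parameters keep fc, gamma > 0
        fc, gamma = np.exp(log_fc), np.exp(log_gamma)
        return log_M0 - np.log1p((f / fc) ** gamma) - np.pi * f * t_star

    true = np.array([10.0, np.log(2.0), np.log(2.0)])    # level 10, fc = 2 Hz, gamma = 2
    data = model(true, f) + rng.normal(0.0, 0.05, f.size)

    cost = lambda th: np.sum((model(th, f) - data) ** 2)  # L2 misfit
    res = basinhopping(cost, x0=[5.0, 0.0, 0.5], niter=50, seed=1)
    fc_hat = np.exp(res.x[1])
    print(round(float(fc_hat), 2))                        # recovered corner frequency (Hz)
    ```

    The paper goes further by sampling the joint a posteriori pdf around this minimum to quantify the strong trade-offs among the three parameters; the sketch only recovers the point estimate.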

  7. Sources of Life Strengths Appraisal Scale: A Multidimensional Approach to Assessing Older Adults’ Perceived Sources of Life Strengths

    Directory of Open Access Journals (Sweden)

    Prem S. Fry

    2014-01-01

    Full Text Available Both cognitive and psychosocial theories of adult development stress the fundamental role of older adults' appraisals of their diverse sources of cognitive and social-emotional strengths. This study reports the development of a new self-appraisal measure that incorporates key theoretical dimensions of internal and external sources of life strengths, as identified in the gerontological literature. Using a pilot study sample and three other independent samples to examine older adults' appraisals of the sources of life strengths that helped them in their daily functioning and in combating life challenges, adversity, and losses, a psychometric instrument with appropriate reliability and validity properties was developed. A 24-month follow-up of a randomly selected sample confirmed that the nine-scale appraisal measure (SLSAS) is a promising instrument for appraising older adults' sources of life strengths in dealing with the stresses of daily functioning, and a robust measure for predicting outcomes of resilience, autonomy, and well-being for this age group. A unique strength of the appraisal instrument is its critically relevant features of brevity, simplicity of language, and ease of administration to frail older adults.

  8. A Multiple Source Approach to Organisational Justice: The Role of the Organisation, Supervisors, Coworkers, and Customers

    Directory of Open Access Journals (Sweden)

    Agustin Molina

    2015-07-01

    Full Text Available The vast research on organisational justice has focused on the organisation and the supervisor. This study aims to further this line of research by integrating two trends within organisational justice research: the overall approach to justice perceptions and the multifoci perspective of justice judgments. Specifically, this study aims to explore the effects of two additional sources of justice, coworker-focused justice and customer-focused justice, on relevant employee outcomes (burnout, turnover intentions, job satisfaction, and workplace deviance) while controlling for the effects of organisation-focused justice and supervisor-focused justice. Given the increased importance attributed to coworkers and customers, we expect coworker-focused justice and customer-focused justice to explain incremental variance in the measured outcomes, above and beyond the effects of organisation-focused justice and supervisor-focused justice. Participants will be university students from Austria and Germany employed by service organisations. Data analysis will be conducted using structural equation modeling.

  9. A Stigmergy Collaboration Approach in the Open Source Software Developer Community

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Pullum, Laura L [ORNL; Treadwell, Jim N [ORNL; Potok, Thomas E [ORNL

    2009-01-01

    The communication model of some self-organized online communities is significantly different from that of traditional social-network-based communities. It is problematic to use social network analysis to analyze the collaboration structure and emergent behaviors of these communities because they lack peer-to-peer connections. Stigmergy theory provides an explanation of the collaboration model of these communities. In this research, we present a stigmergy approach for building an agent-based simulation of the collaboration model in the open source software (OSS) developer community. We used a group of actors who collaborate on OSS projects through forums as our frame of reference and investigated how the choices actors make in contributing their work to the projects determine the global status of the whole OSS project. In our simulation, the forum posts serve as the digital pheromone, and the modified Pierre-Paul Grasse pheromone model is used for computing the developer agents' behavior-selection probability.
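
    A minimal stigmergy loop shows the mechanism: agents pick a project forum with probability proportional to its accumulated "digital pheromone" (prior posts), which also evaporates. The parameter values are illustrative assumptions, not the paper's calibrated model.

    ```python
    import random

    random.seed(42)
    pheromone = [1.0, 1.0, 1.0, 1.0]       # one pheromone trail per project forum
    alpha, rho = 2.0, 0.01                 # sensitivity exponent, evaporation rate

    for _ in range(2000):
        weights = [tau ** alpha for tau in pheromone]       # P(i) ∝ tau_i^alpha
        i = random.choices(range(4), weights=weights)[0]    # an agent picks a forum
        pheromone[i] += 1.0                                 # a new post reinforces it
        pheromone = [tau * (1 - rho) for tau in pheromone]  # evaporation

    share = max(pheromone) / sum(pheromone)
    print(round(share, 2))                 # dominant project's pheromone share
    ```

    The positive feedback concentrates activity on one project even though agents never communicate directly, which is the stigmergic effect the record describes.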

  10. The Sources of Efficiency of the Nigerian Banking Industry: A Two- Stage Approach

    Directory of Open Access Journals (Sweden)

    Frances Obafemi

    2013-11-01

    Full Text Available The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in both the pre- and post-liberalization eras. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.

  11. Time-variant reliability assessment through equivalent stochastic process transformation

    International Nuclear Information System (INIS)

    Wang, Zequn; Chen, Wei

    2016-01-01

    Time-variant reliability measures the probability that an engineering system successfully performs intended functions over a certain period of time under various sources of uncertainty. In practice, it is computationally prohibitive to propagate uncertainty in time-variant reliability assessment based on expensive or complex numerical models. This paper presents an equivalent stochastic process transformation approach for cost-effective prediction of reliability deterioration over the life cycle of an engineering system. To reduce the high dimensionality, a time-independent reliability model is developed by translating random processes and time parameters into random parameters in order to equivalently cover all potential failures that may occur during the time interval of interest. With the time-independent reliability model, an instantaneous failure surface is attained by using a Kriging-based surrogate model to identify all potential failure events. To enhance the efficacy of failure surface identification, a maximum confidence enhancement method is utilized to update the Kriging model sequentially. Then, the time-variant reliability is approximated using Monte Carlo simulations of the Kriging model, where system failures over a time interval are predicted by the instantaneous failure surface. The results of two case studies demonstrate that the proposed approach is able to accurately predict the time evolution of system reliability while requiring much less computational effort than the existing analytical approach. - Highlights: • Developed a new approach for time-variant reliability analysis. • Proposed a novel stochastic process transformation procedure to reduce the dimensionality. • Employed Kriging models with a confidence-based adaptive sampling scheme to enhance computational efficiency. • The approach is effective for handling random processes in time-variant reliability analysis. • Two case studies are used to demonstrate the efficacy
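The final estimation step described above can be sketched with a crude Monte Carlo estimator: a sample fails if the limit state drops below zero at any time on a grid over the interval of interest. For brevity this sketch evaluates the limit-state function `g` directly rather than a trained Kriging surrogate; the names and the example limit state are hypothetical.

```python
import random

def time_variant_failure_prob(g, sample_x, t_grid, n=10_000, seed=0):
    """Crude Monte Carlo estimate of time-variant failure probability:
    a sampled realization x fails if g(x, t) < 0 anywhere on t_grid
    (i.e. the worst instantaneous margin over the interval is negative)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        x = sample_x(rng)
        if min(g(x, t) for t in t_grid) < 0:
            fails += 1
    return fails / n
```

In the paper's approach the expensive `g` would be replaced by the adaptively refined Kriging model, which is what makes the Monte Carlo loop affordable.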

  12. Equivalence of Lagrangian and Hamiltonian BRST quantizations

    International Nuclear Information System (INIS)

    Grigoryan, G.V.; Grigoryan, R.P.; Tyutin, I.V.

    1992-01-01

    Two approaches to the quantization of gauge theories using BRST symmetry are widely used nowadays: Lagrangian quantization (BV quantization) and Hamiltonian quantization (BFV quantization). For all known examples of field theory (Yang-Mills theory, gravitation, etc.) both schemes give equivalent results. However, the equivalence of these approaches has not been proved in general. The main obstacle to comparing the formulations is that in the Hamiltonian approach the number of ghost fields equals the number of all first-class constraints, while in the Lagrangian approach the number of ghosts equals the number of independent gauge symmetries, which is the number of primary first-class constraints only. This paper is devoted to proving the equivalence of the Lagrangian and Hamiltonian quantizations for systems with first-class constraints only. This is achieved by choosing a special gauge in the Hamiltonian approach. It is shown that, after integration over redundant variables in the functional integral, one arrives at an effective action constructed according to the rules of the Lagrangian quantization scheme

  13. Detecting and analyzing soil phosphorus loss associated with critical source areas using a remote sensing approach.

    Science.gov (United States)

    Lou, Hezhen; Yang, Shengtian; Zhao, Changsen; Shi, Liuhua; Wu, Linna; Wang, Yue; Wang, Zhiwei

    2016-12-15

    The detection of critical source areas (CSAs) is a key step in managing soil phosphorus (P) loss and preventing the long-term eutrophication of water bodies at regional scale. Most related studies, however, focus on a local scale, which prevents a clear understanding of the spatial distribution of CSAs for soil P loss at regional scale. Moreover, continual, long-term variation in CSAs has scarcely been reported. It is impossible to identify the factors driving the variation in CSAs, or to collect the land surface information essential for CSA detection, using conventional methodologies alone at regional scale. This study proposes a new regional-scale approach, based on three satellite sensors (ASTER, TM/ETM and MODIS), that was implemented successfully to detect CSAs at regional scale over 15 years (2000-2014). The approach incorporates five factors (precipitation, slope, soil erosion, land use and soil total phosphorus) that drive soil P loss from CSAs. Results show that the average area of critical phosphorus source areas (CPSAs) was 15,056 km² over the 15-year period, occupying 13.8% of the total area, with a range varying from 1.2% to 23.0%, in a representative, intensive agricultural area of China. In contrast to previous studies, we found that the locations of CSAs with P loss are spatially variable, and are more dispersed in their distribution over the long term. We also found that precipitation acts as a key driving factor in the variation of CSAs at regional scale. The regional-scale method can provide scientific guidance for managing soil phosphorus loss and preventing the long-term eutrophication of water bodies, and shows great potential for exploring the factors that drive the variation in CSAs at global scale.

  14. Derivation of Accident-Specific Material-at-Risk Equivalency Factors

    Energy Technology Data Exchange (ETDEWEB)

    Jason P. Andrus; Dr. Chad L. Pope

    2012-05-01

    A novel method for calculating material-at-risk (MAR) dose equivalency developed at the Idaho National Laboratory (INL) now allows for increased use of dose equivalency for facility MAR control. The method involves near-real-time accounting that uses accident- and material-specific release and transport information. It utilizes all information from the committed effective dose equation and the five-factor source term equation to derive dose equivalency factors, which can be used to establish an overall facility or process MAR limit. The equivalency factors allow different nuclide spectrums to be compared for their respective dose consequences by relating them to a specific quantity of an identified reference nuclide. The ability to compare spectrums to a reference limit ensures that MAR limits are in fact bounding, instead of attempting to establish a representative or bounding spectrum, which may lead to unintended or unanalyzed configurations. This methodology is then coupled with a near-real-time material tracking system which allows for accurate and timely material composition information and corresponding MAR equivalency values. The development of this approach was driven by the complex nature of processing operations in some INL facilities. This type of approach is ideally suited for facilities and processes where the composition of the MAR and the possible release mechanisms change frequently but in well-defined ways and in a batch-type fashion.
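The five-factor source term equation referenced above is the standard DOE-HDBK-3010 product form, ST = MAR × DR × ARF × RF × LPF, and an equivalency factor is then just a dose ratio against the reference nuclide. A minimal sketch, with hypothetical numeric values (the paper's actual factors and reference nuclide are not given here):

```python
def source_term(mar, dr, arf, rf, lpf):
    """Five-factor source term (DOE-HDBK-3010 form):
    ST = MAR * DR * ARF * RF * LPF, i.e. material at risk times
    damage ratio, airborne release fraction, respirable fraction,
    and leak path factor."""
    return mar * dr * arf * rf * lpf

def equivalency_factor(dose_per_gram, ref_dose_per_gram):
    """Grams of the reference nuclide that produce the same dose
    consequence as one gram of the nuclide in question."""
    return dose_per_gram / ref_dose_per_gram
```

Summing equivalency-weighted inventories lets a facility compare an arbitrary nuclide mix against a single reference-nuclide MAR limit, which is the bounding property the abstract describes.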

  15. Vocal individuality cues in the African penguin (Spheniscus demersus): a source-filter theory approach.

    Science.gov (United States)

    Favaro, Livio; Gamba, Marco; Alfieri, Chiara; Pessani, Daniela; McElligott, Alan G

    2015-11-25

    The African penguin is a nesting seabird endemic to southern Africa. In penguins of the genus Spheniscus, vocalisations are important for social recognition. However, it is not clear which acoustic features of calls can encode individual identity information. We recorded contact calls and ecstatic display songs of 12 adult birds from a captive colony. For each vocalisation, we measured 31 spectral and temporal acoustic parameters related to both the source and filter components of calls. For each parameter, we calculated the Potential of Individual Coding (PIC). The acoustic parameters showing PIC ≥ 1.1 were used to perform a stepwise cross-validated discriminant function analysis (DFA). The DFA correctly assigned 66.1% of the contact calls and 62.5% of the display songs to the correct individual. The DFA also resulted in the further selection of 10 acoustic features for contact calls and 9 for display songs that were important for vocal individuality. Our results suggest that studying the anatomical constraints that influence nesting penguin vocalisations from a source-filter perspective can lead to a much better understanding of the acoustic cues of individuality contained in their calls. This approach could be further extended to study and understand vocal communication in other bird species.
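One common formulation of the PIC statistic used in such studies is the ratio of the between-individual coefficient of variation to the mean within-individual coefficient of variation, so PIC > 1 flags parameters that vary more across birds than within a bird. A sketch under that assumption (the paper may use a small-sample-corrected CV; function names and data here are hypothetical):

```python
from statistics import mean, stdev

def cv(values):
    """Coefficient of variation: standard deviation over mean."""
    return stdev(values) / mean(values)

def pic(calls_by_individual):
    """Potential of Individual Coding for one acoustic parameter:
    between-individual CV of the per-bird means, divided by the
    mean within-individual CV. Values above 1 suggest the parameter
    can carry individual identity information."""
    individual_means = [mean(v) for v in calls_by_individual]
    cv_between = cv(individual_means)
    cv_within = mean(cv(v) for v in calls_by_individual)
    return cv_between / cv_within
```

Parameters passing a PIC cutoff (≥ 1.1 in the study) would then be fed into the discriminant function analysis.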

  16. A Practice Approach of Multi-source Geospatial Data Integration for Web-based Geoinformation Services

    Science.gov (United States)

    Huang, W.; Jiang, J.; Zha, Z.; Zhang, H.; Wang, C.; Zhang, J.

    2014-04-01

    Geospatial data resources are the foundation of the construction of a geo portal designed to provide online geoinformation services for government, enterprise and the public. It is vital to keep geospatial data fresh, accurate and comprehensive in order to satisfy the requirements of applications such as geographic location, route navigation and geo search. One of the major problems we are facing is data acquisition; for us, integrating multi-source geospatial data is the main means of data acquisition. This paper introduces a practical integration approach for multi-source geospatial data with different data models, structures and formats, which provided effective technical support for the construction of the National Geospatial Information Service Platform of China (NGISP). NGISP is China's official geo portal, providing online geoinformation services over the internet, the e-government network and the classified network. Within the NGISP architecture there are three kinds of nodes: national, provincial and municipal. The geospatial data comes from these nodes, and the different datasets are heterogeneous. Based on the results of analyzing the heterogeneous datasets, the first step is to define the basic principles of data fusion, covering the following aspects: 1. location precision; 2. geometric representation; 3. up-to-date state; 4. attribute values; and 5. spatial relationships. The technical procedure is then developed, and a method for processing different categories of features, such as roads, railways, boundaries, rivers, settlements and buildings, is proposed based on these principles. A case study in Jiangsu province demonstrated the applicability of the principles, procedure and method of multi-source geospatial data integration.

  17. An approach for evaluating the effects of source separation on municipal solid waste management

    Energy Technology Data Exchange (ETDEWEB)

    Tanskanen, J.H. [Finnish Environment Institute, Helsinki (Finland)

    2000-07-01

    An approach was developed for integrated analysis of recovery rates, waste streams, costs and emissions of municipal solid waste management (MSWM). The approach differs from most earlier models used in the strategic planning of MSWM because of a comprehensive analysis of on-site collection systems of waste materials separated at source for recovery. As a result, the recovery rates and sizes of waste streams can be calculated on the basis of the characteristics of separation strategies instead of giving them as input data. The modelling concept developed can also be applied in other regions, municipalities and districts. This thesis consists of four case studies. Three of these were performed to test the approach developed and to evaluate the effects of separation on MSWM in Finland. In these case studies the approach was applied for modelling: (1) Finland's national separation strategy for municipal solid waste, (2) the effects of separation on MSWM systems in the Helsinki region and (3) the efficiency of various waste collection methods in the Helsinki region. The models developed for these three case studies are static and linear simulation models which were constructed in the format of an Excel spreadsheet. In addition, a new version of the original Swedish MIMES/Waste model was constructed and applied in one of the case studies. The case studies proved that the approach is an applicable tool for various research settings and circumstances in the strategic planning of MSWM. The following main results were obtained from the case studies: A high recovery rate level (around 70 %wt) can be achieved in MSWM without incineration; Central sorting of mixed waste must be included in Finland's national separation strategy in order to reach the recovery rate targets of 50 %wt (year 2000) and 70 %wt (year 2005) adopted for municipal solid waste in the National Waste Plan. The feasible source separation strategies result in recovery rates around 35-40 %wt with the

  18. A new approach for calculation of volume confined by ECR surface and its area in ECR ion source

    International Nuclear Information System (INIS)

    Filippov, A.V.

    2007-01-01

    The volume confined by the resonance surface and its area are important parameters of the balance equations model for calculation of ion charge-state distribution (CSD) in the electron-cyclotron resonance (ECR) ion source. A new approach for calculation of these parameters is given. This approach allows one to reduce the number of parameters in the balance equations model

  19. Inverse kinetics method with source term for subcriticality measurements during criticality approach in the IPEN/MB-01 research reactor

    International Nuclear Information System (INIS)

    Loureiro, Cesar Augusto Domingues; Santos, Adimir dos

    2009-01-01

    In the reactor physics tests performed at startup after refueling commercial PWRs, it is important to monitor subcriticality continuously during the approach to criticality. Reactivity measurements by the inverse kinetics method are widely used during the operation of a nuclear reactor, and it is possible to perform an online reactivity measurement based on the point reactor kinetics equations. This technique is successfully applied at sufficiently high power levels, or to a core without an external neutron source, where the neutron source term in the point reactor kinetics equations may be neglected. For operation at low power levels, the contribution of the neutron source must be taken into account, which requires knowledge of a quantity proportional to the source strength; this quantity must therefore be determined. Experiments have been performed in the IPEN/MB-01 Research Reactor for the determination of the source term, using the Least Square Inverse Kinetics Method (LSIKM). A digital reactivity meter which neglects the source term is used to calculate the reactivity, and the source term can then be determined by the LSIKM. Once the source term is known, its value can be added to the algorithm and the reactivity determined again, taking the source term into account. The new digital reactivity meter can now be used to monitor reactivity during the approach to criticality, and the measured reactivity is more precise than that of the meter which neglects the source term. (author)
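A minimal sketch of an inverse kinetics reactivity meter with a source term, reduced to one delayed-neutron group for brevity (real meters use six groups; the parameter values below are hypothetical). From the point kinetics equation dn/dt = ((ρ − β)/Λ)n + λC + S, the reactivity implied by a measured neutron-density history is ρ = β + (Λ/n)(dn/dt − λC − S):

```python
def inverse_kinetics(n_hist, dt, beta, lam, Lambda, S=0.0):
    """One-delayed-group inverse kinetics with an external source S.
    Given a neutron-density history n_hist sampled every dt seconds,
    returns the implied reactivity at each step:
        rho = beta + (Lambda / n) * (dn/dt - lam*C - S)
    with the precursor balance dC/dt = (beta/Lambda)*n - lam*C."""
    C = beta * n_hist[0] / (Lambda * lam)   # equilibrium precursor level
    rho = []
    for k in range(1, len(n_hist)):
        n = n_hist[k]
        dndt = (n_hist[k] - n_hist[k - 1]) / dt
        C += dt * ((beta / Lambda) * n - lam * C)   # explicit Euler update
        rho.append(beta + (Lambda / n) * (dndt - lam * C - S))
    return rho
```

For a critical, source-free steady state the meter correctly reads ρ = 0; neglecting a nonzero S at low power biases the reading, which is exactly why the abstract's LSIKM determination of S matters.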

  20. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification

    Directory of Open Access Journals (Sweden)

    Sourabh De

    2011-01-01

    Full Text Available For years, the vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of the regulations. The regulations, however, do not state any upper or lower limits of SDV. What they expect from researchers and sponsors are methodologies which ensure data quality. How the industry achieves this is open to innovation and the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc. In short, hybrid approaches to monitoring. Coupled with concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required to conduct SDV, leaving more time available for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether it also improves the work-life balance of monitors, who may then travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  1. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification.

    Science.gov (United States)

    De, Sourabh

    2011-07-01

    For years, the vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safe side of the regulations. The regulations, however, do not state any upper or lower limits of SDV. What they expect from researchers and sponsors are methodologies which ensure data quality. How the industry achieves this is open to innovation and the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, etc. In short, hybrid approaches to monitoring. Coupled with concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required to conduct SDV, leaving more time available for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether it also improves the work-life balance of monitors, who may then travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  2. SAPONIFICATION EQUIVALENT OF DASAMULA TAILA

    OpenAIRE

    Saxena, R. B.

    1994-01-01

    Saponification equivalent values of Dasamula taila are very useful for technical and analytical work. They give the mean molecular weight of the glycerides and acids present in Dasamula taila. Saponification equivalent values of Dasamula taila are reported in different packings.

  3. Saponification equivalent of dasamula taila.

    Science.gov (United States)

    Saxena, R B

    1994-07-01

    Saponification equivalent values of Dasamula taila are very useful for technical and analytical work. They give the mean molecular weight of the glycerides and acids present in Dasamula taila. Saponification equivalent values of Dasamula taila are reported in different packings.
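For context, the saponification equivalent is conventionally computed from the measured saponification value (mg of KOH consumed per gram of fat) via SE = 56100 / SV, since 56100 mg/mol is the molar mass of KOH. A small sketch of that standard relation (the numeric example is hypothetical, not a reported value for Dasamula taila):

```python
def saponification_equivalent(sap_value_mg_koh_per_g):
    """Saponification equivalent from the saponification value:
    SE = 56100 / SV, where 56100 is the molar mass of KOH in mg/mol.
    SE approximates the mean molecular weight per acid group of the
    glycerides and free acids in the oil."""
    return 56100 / sap_value_mg_koh_per_g
```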

  4. A study on lead equivalent

    International Nuclear Information System (INIS)

    Lin Guanxin

    1991-01-01

    A study of how the lead equivalent of lead glass changes with the energy of X rays or γ rays is described. The reason for this change is discussed and a new method of testing lead equivalent is suggested.

  5. A Systematic Approach to Identify Sources of Abnormal Interior Noise for a High-Speed Train

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2018-01-01

    Full Text Available A systematic approach to identifying sources of abnormal interior noise occurring in a high-speed train is presented and applied in this paper to resolve a particular noise issue. This approach was developed from a number of previous investigations of similar noise problems. The particular noise issue occurs in a Chinese high-speed train: measurements show a difference of 7 dB(A) in overall Sound Pressure Level (SPL) between two nominally identical VIP cabins at 250 km/h. The systematic approach is applied to identify the root cause of the 7 dB(A) difference. Well-planned measurements are performed in both VIP cabins. Sound pressure contributions, either in terms of frequency band or in terms of facing area, are analyzed. Order analysis is also carried out. Based on these analyses, it is found that the problematic frequency is the sleeper-passing frequency of the train, and that an area on the roof contributes the most. In order to determine what causes that area to be the main contributor without disassembling the structure of the roof, measured noise and vibration data for different train speeds are further analyzed. It is then reasoned that the roof is the main contributor because of the sound pressure behind its panels. At this point, panels of the roof are removed, revealing that a hole of 300 cm² for running cables is present behind the red area without proper sound insulation. This study can provide a basis for abnormal interior noise analysis and control in high-speed trains.

  6. Characteristics of natural background external radiation and effective dose equivalent

    International Nuclear Information System (INIS)

    Fujimoto, Kenzo

    1989-01-01

    The two sources of natural radiation - cosmic rays and primordial radionuclides - are described. The factors affecting radiation doses received from natural radiation and the calculation of effective dose equivalent due to natural radiation are discussed. 10 figs., 3 tabs

  7. Outbreaks source: A new mathematical approach to identify their possible location

    Science.gov (United States)

    Buscema, Massimo; Grossi, Enzo; Breda, Marco; Jefferson, Tom

    2009-11-01

    Classical epidemiology has generally relied on the description and explanation of the occurrence of infectious diseases in relation to the time of occurrence of events rather than the place of occurrence. In recent times, computer-generated dot maps have facilitated the modeling of the spread of infectious epidemic diseases, either with classical statistical approaches or with artificial “intelligent systems”. Few attempts, however, have been made so far to identify the origin of the epidemic spread, rather than its evolution, by mathematical topology methods. We report on the use of a new artificial intelligence method (the H-PST algorithm) and compare this new technique with other well-known algorithms to identify the source of three examples of infectious disease outbreaks derived from the literature. The H-PST algorithm is a new system able to project a distance matrix of points (events) into a bi-dimensional space, with the generation of a new point, named the hidden unit. This new hidden unit deforms the original Euclidean space and transforms it into a new space (cognitive space). The cost function of this transformation is the minimization of the differences between the original distance matrix among the assigned points and the distance matrix of the same points projected into the bi-dimensional map (or any different set of constraints). For many reasons we will discuss, the position of the hidden unit turns out to target the outbreak source in many epidemics much better than the other classic algorithms specifically designed for this task. Compared with the main algorithms known in location theory, the hidden unit was within yards of the outbreak source in the first example (the 2007 epidemic of Chikungunya fever in Italy). The hidden unit was located in the river between the two village epicentres of the spread, exactly where the index case was living. Equally in the second (the 1967 foot and mouth disease epidemic in England), and the third (1854 London Cholera epidemic
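To illustrate the location-theory baselines the hidden unit is compared against (not the H-PST algorithm itself, whose cost function is described above), here is Weiszfeld's algorithm for the geometric median: the point minimizing the sum of Euclidean distances to all case locations, a classic candidate for an outbreak source. Coordinates here are hypothetical.

```python
def geometric_median(points, iters=100):
    """Weiszfeld's algorithm: iteratively re-weighted centroid that
    converges to the point minimizing total Euclidean distance to
    all given case locations (a classic location-theory estimator)."""
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5 or 1e-12
            num_x += px / d
            num_y += py / d
            denom += 1.0 / d
        x, y = num_x / denom, num_y / denom
    return x, y
```

The abstract's claim is that the H-PST hidden unit, which instead deforms the distance matrix itself, localizes the source better than estimators of this kind.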

  8. Determination of dose equivalent with tissue-equivalent proportional counters

    International Nuclear Information System (INIS)

    Dietze, G.; Schuhmacher, H.; Menzel, H.G.

    1989-01-01

    Low-pressure tissue-equivalent proportional counters (TEPC) are instruments based on the cavity chamber principle which provide spectral information on the energy loss of single charged particles crossing the cavity. Hence such detectors measure absorbed dose or kerma and are able to provide estimates of radiation quality. In recent years, TEPC-based instruments have been developed for radiation protection applications in photon and neutron fields, mainly in the expectation that the energy dependence of their dose equivalent response is smaller than that of other instruments in use. Recently, such instruments have been investigated by intercomparison measurements in various neutron and photon fields. Although their principles of measurement are more closely related to the definition of dose equivalent quantities than those of other existing dosemeters, there are distinct differences and limitations with respect to the irradiation geometry and the determination of the quality factor. The application of such instruments for measuring ambient dose equivalent is discussed. (author)
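The quality-factor determination mentioned above ultimately rests on the ICRP 60 relation Q(L), a piecewise function of unrestricted LET that TEPC-based instruments approximate from measured lineal energy; dose equivalent is then H = Q·D. A sketch of that published relation (how a given instrument maps lineal energy to L is a separate, instrument-specific step not shown here):

```python
def icrp60_quality_factor(L):
    """ICRP Publication 60 quality factor Q as a function of
    unrestricted linear energy transfer L in keV/µm:
      Q = 1              for L < 10
      Q = 0.32*L - 2.2   for 10 <= L <= 100
      Q = 300 / sqrt(L)  for L > 100."""
    if L < 10:
        return 1.0
    if L <= 100:
        return 0.32 * L - 2.2
    return 300 / L ** 0.5
```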

  9. Objective approach for analysis of noise source characteristics and acoustic conditions in noisy computerized embroidery workrooms.

    Science.gov (United States)

    Aliabadi, Mohsen; Golmohammadi, Rostam; Mansoorizadeh, Muharram

    2014-03-01

    It is highly important to analyze the acoustic properties of workrooms in order to identify the best noise control measures from the standpoint of noise exposure limits. Because sound pressure depends on the environment, it cannot be a suitable parameter for determining the share of a workroom's acoustic characteristics in producing noise pollution. This paper empirically analyzes the noise source characteristics and acoustic properties of noisy embroidery workrooms based on dedicated parameters. Reverberation time, as the key room acoustic parameter, was measured in 30 workrooms based on ISO 3382-2. The sound power of the embroidery machines was also determined based on ISO 9614-3. Multiple linear regression was employed for predicting reverberation time from the acoustic features of the workrooms using MATLAB software. The results showed that the measured reverberation times in most of the workrooms were approximately within the ranges recommended by ISO 11690-1. Agreement between reverberation time values calculated by the Sabine formula and measured values was relatively poor (R² = 0.39). This can be attributed to inaccurate estimation of the acoustic influence of furniture and to the formula's preconditions, so the calculated value cannot be considered representative of the actual room acoustics. However, the prediction performance of the regression method, with root mean square error (RMSE) = 0.23 s and R² = 0.69, is relatively acceptable. Because the sound power of the embroidery machines was relatively high, these sources get the highest priority when it comes to applying noise controls. Finally, an objective approach to determining the share of workroom acoustic characteristics in producing noise could facilitate the identification of cost-effective noise controls.
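The Sabine formula the measured values were compared against is RT60 = 0.161·V/A, where V is the room volume in m³ and A is the total absorption (the sum of surface area times absorption coefficient). A small sketch with hypothetical room dimensions and coefficients:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time: RT60 = 0.161 * V / A, where
    A = sum of (surface area in m^2 * absorption coefficient)
    over all room surfaces, i.e. total absorption in m^2 sabins."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption
```

The poor fit reported in the abstract (R² = 0.39) reflects how sensitive this estimate is to the assumed absorption coefficients of furniture and machinery, which is why the empirical regression outperformed it.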

  10. A multi-source precipitation approach to fill gaps over a radar precipitation field

    Science.gov (United States)

    Tesfagiorgis, K. B.; Mahani, S. E.; Khanbilvardi, R.

    2012-12-01

    Satellite Precipitation Estimates (SPEs) may be the only available source of information for operational hydrologic and flash flood prediction due to the spatial limitations of radar and gauge products. The present work develops an approach to seamlessly blend satellite, radar, climatological and gauge precipitation products to fill gaps in ground-based radar precipitation fields. To mix different precipitation products, the bias of each product relative to the others must be removed. For bias correction, the study used an ensemble-based method which aims to estimate spatially varying multiplicative biases in SPEs using a radar rainfall product. Bias factors were calculated for a randomly selected sample of rainy pixels in the study area. Spatial fields of estimated bias were generated taking into account spatial variation and random errors in the sampled values. A weighted Successive Correction Method (SCM) is proposed for merging the error-corrected satellite and radar rainfall estimates. In addition to SCM, we use a Bayesian spatial method for merging the gap-free radar field with rain gauges, climatological rainfall sources and SPEs. We demonstrate the method using the SPE Hydro-Estimator (HE), the radar-based Stage-II product, the climatological product PRISM and a rain gauge dataset for several rain events from 2006 to 2008 over three different geographical locations of the United States. Results show that the SCM method in combination with the Bayesian spatial model produced a precipitation product in good agreement with independent measurements. The study implies that, using the available radar pixels surrounding the gap area together with rain gauge, PRISM and satellite products, a radar-like product that benefits the scientific community is achievable over radar gap areas.
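The first two steps of the blending chain can be sketched simply: estimate a multiplicative bias factor for the satellite field from co-located radar pixels, then merge the corrected satellite values with radar, falling back to satellite where radar is missing. This is a bare-bones illustration (the paper's method estimates spatially varying bias fields and uses weighted successive corrections); the uniform bias, the `None`-for-gap convention, and the fixed weight are all simplifying assumptions.

```python
def multiplicative_bias(radar, satellite):
    """Field-mean multiplicative bias factor of a satellite estimate
    relative to radar over co-located rainy pixels."""
    return sum(radar) / sum(satellite)

def merge_fields(radar, satellite_corrected, w_radar=0.8):
    """Weighted merge: blend radar with the bias-corrected satellite
    value where radar is valid, and fall back to the satellite
    estimate where radar is missing (None), i.e. in the gap area."""
    out = []
    for r, s in zip(radar, satellite_corrected):
        if r is None:
            out.append(s)
        else:
            out.append(w_radar * r + (1 - w_radar) * s)
    return out
```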

  11. Solar Power Satellites: Reconsideration as Renewable Energy Source Based on Novel Approaches

    Science.gov (United States)

    Ellery, Alex

    2017-04-01

    Solar power satellites (SPS) are a solar energy generation mechanism that captures solar energy in space and converts it into microwaves for transmission to Earth-based rectenna arrays. They offer a constant, high integrated energy density of 200 W/m² compared with <10 W/m² for other renewable energy sources. Despite this promise as a clean energy source, SPS have been relegated out of consideration because of their enormous cost and technological challenge. It has been suggested that for solar power satellites to become economically feasible, launch costs must decrease from their current $20,000/kg to <$200/kg. Even with the advent of single-stage-to-orbit launchers, which promise launch costs dropping to $2,000/kg, this will not be realized. Yet the advantages of solar power satellites are many, including the provision of stable baseload power. Here, I present a novel approach to reduce the specific cost of solar power satellites to $1/kg by leveraging two enabling technologies - in-situ resource utilization of lunar material and 3D printing of this material. Specifically, we demonstrate that electric motors may be constructed from lunar material through 3D printing, representing a major step towards the development of self-replicating machines. Such machines have the capacity to build solar power satellites on the Moon, thereby bypassing the launch cost problem. The productive capacity of self-replicating machines favours the adoption of large constellations of small solar power satellites. This opens up additional clean energy options for combating climate change by meeting the demands for future global energy.

  12. Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal

    Science.gov (United States)

    Wronna, M.; Omira, R.; Baptista, M. A.

    2015-11-01

    In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m, which reach the coast approximately 22 min after the rupture. It dominates the aggregate scenario by about 60% of the impact area at the test site, considering maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km².

  13. The Treatment Train approach to reducing non-point source pollution from agriculture

    Science.gov (United States)

    Barber, N.; Reaney, S. M.; Barker, P. A.; Benskin, C.; Burke, S.; Cleasby, W.; Haygarth, P.; Jonczyk, J. C.; Owen, G. J.; Snell, M. A.; Surridge, B.; Quinn, P. F.

    2016-12-01

    An experimental approach has been applied to an agricultural catchment in NW England, where non-point pollution adversely affects freshwater ecology. The aim of the work (as part of the River Eden Demonstration Test Catchment project) is to develop techniques to manage agricultural runoff whilst maintaining food production. The approach used is the Treatment Train (TT), which applies multiple connected mitigation options that control nutrient and fine sediment pollution at source, and address polluted runoff pathways at increasing spatial scale. The principal agricultural practices in the study sub-catchment (1.5 km2) are dairy and stock production. Farm yards can act as significant pollution sources by housing large numbers of animals; these areas are addressed initially with infrastructure improvements e.g. clean/dirty water separation and upgraded waste storage. In-stream high resolution monitoring of hydrology and water quality parameters showed high-discharge events to account for the majority of pollutant exports (~80% total phosphorus; ~95% fine sediment), and primary transfer routes to be surface and shallow sub-surface flow pathways, including drains. To manage these pathways and reduce hydrological connectivity, a series of mitigation features were constructed to intercept and temporarily store runoff. Farm tracks, field drains, first order ditches and overland flow pathways were all targeted. The efficacy of the mitigation features has been monitored at event and annual scale, using inflow-outflow sampling and sediment/nutrient accumulation measurements, respectively. Data presented here show varied but positive results in terms of reducing acute and chronic sediment and nutrient losses. An aerial fly-through of the catchment is used to demonstrate how the TT has been applied to a fully-functioning agricultural landscape. The elevated perspective provides a better understanding of the spatial arrangement of mitigation features, and how they can be

  14. Physics-based approach to chemical source localization using mobile robotic swarms

    Science.gov (United States)

    Zarzhitsky, Dimitri

    2008-07-01

    Recently, distributed computation has assumed a dominant role in the fields of artificial intelligence and robotics. To improve system performance, engineers are combining multiple cooperating robots into cohesive collectives called swarms. This thesis illustrates the application of basic principles of physicomimetics, or physics-based design, to swarm robotic systems. Such principles include decentralized control, short-range sensing and low power consumption. We show how the application of these principles to robotic swarms results in highly scalable, robust, and adaptive multi-robot systems. The emergence of these valuable properties can be predicted with the help of well-developed theoretical methods. In this research effort, we have designed and constructed a distributed physicomimetics system for locating sources of airborne chemical plumes. This task, called chemical plume tracing (CPT), is receiving a great deal of attention due to persistent homeland security threats. For this thesis, we have created a novel CPT algorithm called fluxotaxis that is based on theoretical principles of fluid dynamics. Analytically, we show that fluxotaxis combines the essence, as well as the strengths, of the two most popular biologically-inspired CPT methods-- chemotaxis and anemotaxis. The chemotaxis strategy consists of navigating in the direction of the chemical density gradient within the plume, while the anemotaxis approach is based on an upwind traversal of the chemical cloud. Rigorous and extensive experimental evaluations have been performed in simulated chemical plume environments. Using a suite of performance metrics that capture the salient aspects of swarm-specific behavior, we have been able to evaluate and compare the three CPT algorithms. We demonstrate the improved performance of our fluxotaxis approach over both chemotaxis and anemotaxis in these realistic simulation environments, which include obstacles. To test our understanding of CPT on actual hardware
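    Of the two baseline strategies the abstract names, chemotaxis is the simplest to sketch: the robot repeatedly estimates the local chemical density gradient and steps up it. A minimal single-agent illustration on a synthetic Gaussian plume (the plume shape, step size, and start point are our assumptions; this is the chemotaxis baseline, not the thesis's fluxotaxis method):

```python
import math

def concentration(x, y, src=(0.0, 0.0)):
    """Synthetic Gaussian plume centred on the (hidden) source."""
    dx, dy = x - src[0], y - src[1]
    return math.exp(-(dx * dx + dy * dy) / 10.0)

def chemotaxis_step(x, y, step=0.2, eps=1e-3):
    """Move one fixed-length step up the locally sensed gradient."""
    gx = (concentration(x + eps, y) - concentration(x - eps, y)) / (2 * eps)
    gy = (concentration(x, y + eps) - concentration(x, y - eps)) / (2 * eps)
    norm = math.hypot(gx, gy) or 1.0   # avoid division by zero at the peak
    return x + step * gx / norm, y + step * gy / norm

x, y = 4.0, 3.0                        # start 5 units from the source
for _ in range(30):
    x, y = chemotaxis_step(x, y)
print(round(math.hypot(x, y), 2))      # remaining distance to the source
```

    In a real plume the gradient is intermittent and noisy, which is exactly why the thesis combines concentration with wind/flux information rather than relying on this gradient-climbing alone.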

  15. New Statistical Approach to Apportion Dietary Sources of Iodine Intake: Findings from Kenya, Senegal and India

    Directory of Open Access Journals (Sweden)

    Frits van der Haar

    2018-03-01

    Progress of national Universal Salt Iodization (USI) strategies is typically assessed by household coverage of adequately iodized salt and median urinary iodine concentration (UIC) in spot urine collections. However, household coverage does not inform on the iodized salt used in preparation of processed foods outside homes, nor does the total UIC reflect the portion of population iodine intake attributable to the USI strategy. This study used data from three population-representative surveys of women of reproductive age (WRA) in Kenya, Senegal and India to develop and illustrate a new approach to apportion the population UIC levels by the principal dietary sources of iodine intake, namely native iodine, iodine in processed food salt and iodine in household salt. The technique requires measurement of urinary sodium concentrations (UNaC) in the same spot urine samples collected for iodine status assessment. Taking into account the different complex survey designs of each survey, generalized linear regression (GLR) analyses were performed in which the UIC data of WRA was set as the outcome variable that depends on their UNaC and household salt iodine (SI) data as explanatory variables. Estimates of the UIC portions that correspond to iodine intake sources were calculated with use of the intercept and regression coefficients for the UNaC and SI variables in each country’s regression equation. GLR coefficients for UNaC and SI were significant in all country-specific models. Rural location did not show a significant association in any country when controlled for other explanatory variables. The estimated UIC portion from native dietary iodine intake in each country fell below the minimum threshold for iodine sufficiency. The UIC portion arising from processed food salt in Kenya was substantially higher than in Senegal and India, while the UIC portions from household salt use varied in accordance with the mean level of household SI content in the country

  16. New Statistical Approach to Apportion Dietary Sources of Iodine Intake: Findings from Kenya, Senegal and India

    Science.gov (United States)

    Knowles, Jacky; Bukania, Zipporah; Camara, Boubacar; Pandav, Chandrakant S.; Mwai, John Maina; Toure, Ndeye Khady; Yadav, Kapil

    2018-01-01

    Progress of national Universal Salt Iodization (USI) strategies is typically assessed by household coverage of adequately iodized salt and median urinary iodine concentration (UIC) in spot urine collections. However, household coverage does not inform on the iodized salt used in preparation of processed foods outside homes, nor does the total UIC reflect the portion of population iodine intake attributable to the USI strategy. This study used data from three population-representative surveys of women of reproductive age (WRA) in Kenya, Senegal and India to develop and illustrate a new approach to apportion the population UIC levels by the principal dietary sources of iodine intake, namely native iodine, iodine in processed food salt and iodine in household salt. The technique requires measurement of urinary sodium concentrations (UNaC) in the same spot urine samples collected for iodine status assessment. Taking into account the different complex survey designs of each survey, generalized linear regression (GLR) analyses were performed in which the UIC data of WRA was set as the outcome variable that depends on their UNaC and household salt iodine (SI) data as explanatory variables. Estimates of the UIC portions that correspond to iodine intake sources were calculated with use of the intercept and regression coefficients for the UNaC and SI variables in each country’s regression equation. GLR coefficients for UNaC and SI were significant in all country-specific models. Rural location did not show a significant association in any country when controlled for other explanatory variables. The estimated UIC portion from native dietary iodine intake in each country fell below the minimum threshold for iodine sufficiency. The UIC portion arising from processed food salt in Kenya was substantially higher than in Senegal and India, while the UIC portions from household salt use varied in accordance with the mean level of household SI content in the country surveys. The
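    The apportionment logic can be sketched with a plain regression: fit UIC on UNaC and household salt iodine, then read the intercept as the native-iodine portion and each coefficient times its covariate's mean as that source's UIC share. A toy illustration on synthetic data (the surveys used design-weighted GLR, not this simple unweighted OLS, and the source-to-term mapping below is our reading of the abstract):

```python
import random

random.seed(7)

# Synthetic survey: UIC generated from known source contributions.
n = 500
unac = [random.uniform(50.0, 250.0) for _ in range(n)]   # urinary sodium (mmol/L)
si = [random.uniform(10.0, 40.0) for _ in range(n)]      # household salt iodine (mg/kg)
uic = [40.0 + 0.3 * u + 1.5 * s + random.gauss(0.0, 5.0)
       for u, s in zip(unac, si)]

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Ordinary least squares via the normal equations X'X beta = X'y, X = [1, UNaC, SI].
xs = [[1.0, u, s] for u, s in zip(unac, si)]
xtx = [[sum(r[i] * r[j] for r in xs) for j in range(3)] for i in range(3)]
xty = [sum(r[i] * y for r, y in zip(xs, uic)) for i in range(3)]
b0, b1, b2 = solve3(xtx, xty)

mean_unac, mean_si = sum(unac) / n, sum(si) / n
print("native iodine portion:", round(b0, 1))
print("processed-food salt portion:", round(b1 * mean_unac, 1))
print("household salt portion:", round(b2 * mean_si, 1))
```

    With the known generating coefficients (40, 0.3, 1.5) the fitted portions recover each source's share of the mean UIC, which is the decomposition the study performs per country.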

  17. What is correct: equivalent dose or dose equivalent

    International Nuclear Information System (INIS)

    Franic, Z.

    1994-01-01

    In the Croatian language, some physical quantities used in radiation protection dosimetry lack precise names; in practice, either the English terms or mathematical formulas are used. The situation is aggravated by the fact that only a limited number of textbooks, reference books and other papers are available in Croatian. This paper compares the concept of ''dose equivalent'' as outlined in International Commission on Radiological Protection (ICRP) recommendation No. 26 with the newer, conceptually different concept of ''equivalent dose'' introduced in ICRP 60. Croatian terminology was found to be neither uniform nor precise: under the influence of the Russian and Serbian languages, the term ''equivalent dose'' was often used for ''dose equivalent'', which is not justified even from the point of view of the ICRP 26 recommendations. Unfortunately, the legal unit in Croatia is still the ''dose equivalent'' as defined in ICRP 26, yet the term used for it is ''equivalent dose''. A modified set of quantities, as introduced in ICRP 60, should therefore be incorporated into Croatian legislation as soon as possible

  18. A mixing-model approach to quantifying sources of organic matter to salt marsh sediments

    Science.gov (United States)

    Bowles, K. M.; Meile, C. D.

    2010-12-01

    Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production by vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contribution of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantify organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data that is utilized in such mixing models, the uncertainties in endmember compositions and the temporal dynamics of non-conservative entities can have varying effects on the results. Making use of a comprehensive data set that encompasses several endmember characteristics - including a yearlong degradation experiment - we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e. endmember characteristics such as δ13COC, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations and accounting for the uncertainty in endmember characteristics. Finally, as biogeochemical processes
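    The constrained least-squares formulation has a standard shape: find nonnegative endmember fractions summing to one that best reproduce the sediment's measured tracers. A small sketch for three endmembers and two tracers, solved by exhaustive search on the simplex (all endmember and mixture values below are invented for illustration, not the study's data):

```python
# Endmember tracer values (delta13C in per mil, C/N ratio) -- illustrative only.
endmembers = {
    "Spartina (C4 vascular)": (-13.0, 40.0),
    "allochthonous C3 plant": (-28.0, 25.0),
    "phytoplankton":          (-21.0, 7.0),
}
mixture = (-18.5, 18.0)   # measured sediment values (hypothetical)
scales = (10.0, 20.0)     # normalise so the two tracer misfits are comparable

def misfit(fracs):
    """Scaled squared misfit between predicted and measured tracers."""
    pred = [sum(f * em[k] for f, em in zip(fracs, endmembers.values()))
            for k in range(2)]
    return sum(((p - m) / s) ** 2 for p, m, s in zip(pred, mixture, scales))

# Exhaustive search over f1 + f2 + f3 = 1 with fi >= 0 (0.01 resolution).
best, best_f = float("inf"), None
steps = 100
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        f = (i / steps, j / steps, (steps - i - j) / steps)
        m = misfit(f)
        if m < best:
            best, best_f = m, f
print({name: round(f, 2) for name, f in zip(endmembers, best_f)})
```

    The brute-force simplex search stands in for a proper constrained solver; it makes the constraint set explicit and is where Monte Carlo perturbation of the endmember values would be wrapped to propagate uncertainty, as the abstract describes.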

  19. Clinical Application of an Open-Source 3D Volume Rendering Software to Neurosurgical Approaches.

    Science.gov (United States)

    Fernandes de Oliveira Santos, Bruno; Silva da Costa, Marcos Devanir; Centeno, Ricardo Silva; Cavalheiro, Sergio; Antônio de Paiva Neto, Manoel; Lawton, Michael T; Chaddad-Neto, Feres

    2018-02-01

    Preoperative recognition of the anatomic individualities of each patient can help to achieve more precise and less invasive approaches. It also may help to anticipate potential complications and intraoperative difficulties. Here we describe the use, accuracy, and precision of a free tool for planning microsurgical approaches using 3-dimensional (3D) reconstructions from magnetic resonance imaging (MRI). We used the 3D volume rendering tool of a free open-source software program for 3D reconstruction of images of surgical sites obtained by MRI volumetric acquisition. We recorded anatomic reference points, such as the sulcus and gyrus, and vascularization patterns for intraoperative localization of lesions. Lesion locations were confirmed during surgery by intraoperative ultrasound and/or electrocorticography and later by postoperative MRI. Between August 2015 and September 2016, a total of 23 surgeries were performed using this technique for 9 low-grade gliomas, 7 high-grade gliomas, 4 cortical dysplasias, and 3 arteriovenous malformations. The technique helped delineate lesions with an overall accuracy of 2.6 ± 1.0 mm. 3D reconstructions were successfully performed in all patients, and images showed sulcus, gyrus, and venous patterns corresponding to the intraoperative images. All lesion areas were confirmed both intraoperatively and at the postoperative evaluation. With the technique described herein, it was possible to successfully perform 3D reconstruction of the cortical surface. This reconstruction tool may serve as an adjunct to neuronavigation systems or may be used alone when such a system is unavailable. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. NO{sub 2} and CO nonattainment areas: a pragmatic approach to source reduction assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cernuschi, S; Giugliano, M; Marzolo, F [D.I.I.A.R. - Politecnico di Milano, Milan (Italy). Environmental Section

    1996-12-31

    In many large Italian conurbations, CO and NO{sub 2} air quality standards are exceeded and, consequently, control enforcement plans are required to attain and maintain compliance. In planning the interventions to be undertaken, a fundamental issue is the relationship between active emission sources and air quality, commonly described through mathematical models that evaluate the expected effects of the different reduction scenarios. This approach, relatively feasible for slightly reactive or non-reactive pollutants like CO, is not as easy to apply to secondary pollutants like NO{sub 2}, mainly because of the complex photochemical reaction systems involved in their atmospheric presence and the consequent modelling difficulties, related both to the description of the system itself and to the detail required in the input data. A practicable alternative in this framework relies on statistical models which derive, through an empirical description of the relationships between the parameters of interest, the reduction levels required to comply with the standards. Following an approach already applied with useful results to the same area, the present work reports on the development and application, to the Milan urban area, of statistical models describing the relationships between CO and NO{sub x} annual concentration averages and the corresponding air quality standard parameters (number of standard exceedances for CO and the 98th percentile of hourly concentrations for NO{sub 2}). The models are used upstream of simple roll-back models to assess the reduction in emission strength required to attain the air quality standards for the area. (author)
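    The roll-back step at the end of the abstract is essentially proportional: once an empirical model links the annual mean to the standard-related parameter (e.g. the 98th percentile of hourly NO2), the required emission cut scales concentrations above background down to the standard. A sketch of simple linear roll-back (regression coefficients, concentrations, and zero background are invented for illustration):

```python
def rollback_fraction(current, standard, background=0.0):
    """Fractional emission reduction R such that
    background + (1 - R) * (current - background) == standard."""
    if current <= standard:
        return 0.0
    return (current - standard) / (current - background)

# Hypothetical empirical model: NO2 98th percentile of hourly values (ug/m3)
# as a linear function of the NOx annual mean, p98 = a + b * mean.
a, b = 40.0, 2.2
no2_standard_p98 = 200.0
nox_annual_mean = 95.0

p98_now = a + b * nox_annual_mean            # current standard parameter
mean_target = (no2_standard_p98 - a) / b     # annual mean that just complies
print(round(rollback_fraction(nox_annual_mean, mean_target), 2))
```

    The empirical regression thus translates the legal limit (on the percentile) into a target annual mean, and the roll-back converts that into a required emission reduction, which is the chain the abstract describes.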

  2. Mineralogical Approaches to Sourcing Pipes and Figurines from the Eastern Woodlands, U.S.A.

    Science.gov (United States)

    Wisseman, S.U.; Moore, D.M.; Hughes, R.E.; Hynes, M.R.; Emerson, T.E.

    2002-01-01

    Provenance studies of stone artifacts often rely heavily upon chemical techniques such as neutron activation analysis. However, stone specimens with very similar chemical composition can have different mineralogies (distinctive crystalline structures as well as variations within the same mineral) that are not revealed by multielemental techniques. Because mineralogical techniques are often cheap and usually nondestructive, beginning with mineralogy allows the researcher to gain valuable information and then to be selective about how many samples are submitted for expensive and somewhat destructive chemical analysis, thus conserving both valuable samples and funds. Our University of Illinois team of archaeologists and geologists employs Portable Infrared Mineral Analyzer (PIMA) spectroscopy, X-ray diffraction (XRD), and Sequential acid dissolution/XRD/Inductively coupled plasma (SAD-XRD-ICP) analyses. Two case studies of Hopewellian pipes and Mississippian figurines illustrate this mineralogical approach. The results for both studies identify sources relatively close to the sites where the artifacts were recovered: Sterling, Illinois (rather than Ohio) for the (Hopewell) pipes and Missouri (rather than Arkansas or Oklahoma) for the Cahokia figurines. © 2002 Wiley Periodicals, Inc.

  3. Scenario based approach for multiple source Tsunami Hazard assessment for Sines, Portugal

    Science.gov (United States)

    Wronna, M.; Omira, R.; Baptista, M. A.

    2015-08-01

    In this paper, we present a scenario-based approach for tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE. Sines holds one of the most important deep-water ports, which contains oil-bearing, petrochemical, liquid-bulk, coal and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water Model With Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level) and MHHW (mean higher high water). For each scenario, inundation is described by maximum values of wave height, flow depth, drawback, runup and inundation distance. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario and the influence of the tide on the aggregate scenario. The results confirm the composite of the Horseshoe and Marques de Pombal faults as the worst-case scenario. It governs the aggregate scenario with about 60 % and inundates an area of 3.5 km2.

  4. APPROACH TO CONSTRUCTING 3D VIRTUAL SCENE OF IRRIGATION AREA USING MULTI-SOURCE DATA

    Directory of Open Access Journals (Sweden)

    S. Cheng

    2015-10-01

    For an irrigation area that is often complicated by various 3D artificial ground features and natural environment, the disadvantages of traditional 2D GIS in spatial data representation, management, query, analysis and visualization are becoming more and more evident. Building a more realistic 3D virtual scene is thus especially urgent for irrigation area managers and decision makers, so that they can carry out various irrigational operations lively and intuitively. Based on previous researchers' achievements, a simple, practical and cost-effective approach was proposed in this study, adopting 3D geographic information system (3D GIS) and remote sensing (RS) technology. Based on multi-source data such as Google Earth (GE) high-resolution remote sensing imagery, ASTER G-DEM, hydrological facility maps and so on, a 3D terrain model and ground feature models were created interactively. Both models were then rendered with texture data and integrated under the ArcGIS platform. A vivid, realistic 3D virtual scene of the irrigation area, with a good visual effect and primary GIS functions for data query and analysis, was constructed. Yet there is still a long way to go for establishing a true 3D GIS for the irrigation area: issues of this study are deeply discussed and future research directions are pointed out at the end of the paper.

  5. WEB 2.0-SUPPORT FOR CHANGE MANAGEMENT DURING BPMS IMPLEMENTATION USING AN OPEN SOURCE APPROACH

    Directory of Open Access Journals (Sweden)

    P.G. Van Schalkwyk

    2012-01-01

    ENGLISH ABSTRACT: The authors argue that business process management systems (BPMS) are exposed to similar risks of failure as are traditional enterprise resource planning (ERP) systems. Change management is a significant critical success factor, and has to be managed well. Given the socio-technical nature of the implementation environment, communication and collaboration are crucially important to the success of change management. The authors provide an example of how collaboration and communication, as part of change management during BPMS implementation, can be achieved in practice, based on the use of Web 2.0 tools and an open source approach.

    AFRIKAANSE OPSOMMING (translated): The authors argue that business process management systems are exposed to similar failure risks as traditional enterprise resource planning systems. Change management is a significant critical success factor that must be managed well. Owing to the socio-technical nature of the implementation environment, communication and collaboration are of decisive importance as part of the change management process. Using an example, the authors show how collaboration and communication, as part of change management, can be achieved with the aid of Web 2.0 tools and an open-source approach.

  6. Analyzing the Measurement Equivalence of a Translated Test in a Statewide Assessment Program

    Directory of Open Access Journals (Sweden)

    Jorge Carvajal-Espinoza

    2016-09-01

    When tests are translated into one or more languages, the question of the equivalence of items across language forms arises. This equivalence can be assessed at the scale level by means of a multiple group confirmatory factor analysis (CFA) in the context of structural equation modeling. This study examined the measurement equivalence of a Spanish translated version of a statewide Mathematics test originally constructed in English by using a multi-group CFA approach. The study used samples of native speakers of the target language of the translation taking the test in both the source and target language, specifically Hispanics taking the test in English and Spanish. Test items were grouped in twelve facet-representative parcels. The parceling was accomplished by grouping items that corresponded to similar content and computing an average for each parcel. Four models were fitted to examine the equivalence of the test across groups. The multi-group CFA fixed factor loadings across groups and results supported the equivalence of the two language versions (English and Spanish) of the test. The statistical techniques implemented in this study can also be used to address the performance on a test based on dichotomous or dichotomized variables such as gender, socioeconomic status, geographic location and other variables of interest.
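    The parceling step described in the abstract, grouping items of similar content and averaging each group, is simple to sketch. A minimal illustration for one examinee (the item names and the item-to-parcel mapping are invented):

```python
from statistics import mean

# Hypothetical 0/1 item scores and a content-based parcel assignment.
item_scores = {"alg1": 1, "alg2": 0, "alg3": 1,
               "geo1": 1, "geo2": 1,
               "num1": 0, "num2": 1, "num3": 1}
parcel_map = {"algebra":  ["alg1", "alg2", "alg3"],
              "geometry": ["geo1", "geo2"],
              "number":   ["num1", "num2", "num3"]}

def build_parcels(scores, mapping):
    """Average the items assigned to each facet-representative parcel."""
    return {parcel: mean(scores[i] for i in items)
            for parcel, items in mapping.items()}

print(build_parcels(item_scores, parcel_map))
```

    The parcel averages, rather than the raw dichotomous items, then serve as the observed indicators in the multi-group CFA, which is what makes the factor model tractable for binary test data.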

  7. Legislating for occupational exposure to sources of natural radiation- the UK approach

    International Nuclear Information System (INIS)

    Higham, N.; Walker, S.; Thomas, G.

    2004-01-01

    Title VII of EC Directive 96/29/Euratom (the 1996 BSS Directive) for the first time requires Member States to take action in relation to work activities within which the presence of natural radiation sources leads to a significant increase in the exposure of workers or members of the public which cannot be disregarded from the radiation protection point of view. The UK in fact has had legal requirements relating to occupational exposure to natural radiation sources since 1985, in the Ionising Radiations Regulations 1985, made to implement the bulk of the provisions of the previous BSS Directive (80/836/Euratom, as amended by 84/467/Euratom). The Ionising Radiations Regulations 1999, that implement the worker protection requirements of the 1996 Euratom BSS Directive, include similar provisions. The definition of radioactive substance includes any substance which contains one or more radionuclides whose activity cannot be disregarded for the purposes of radiation protection. This means that some low specific activity ores and sands fall within this definition and are therefore subject to relevant requirements of the Regulations. Further advice is given on circumstances in which this may apply. Radon is covered more explicitly by applying the regulations to any work carried out in an atmosphere containing radon 222 gas at a concentration in air, averaged over any 24 hour period, exceeding 400 Bq m-3 except where the concentration of the short-lived daughters of radon 222 in air averaged over any 8 hour working period does not exceed 6.24 x 10-7 J m-3. The Health and Safety Executive pursues a policy of raising awareness of the potential for exposure to radon in the workplace and targeting those employers likely to have a radon problem (based on the use of existing information on homes). The regulatory approach has been to seek remedial building measures so that the workplace is removed from control. HSE is able to offer advice about getting their workplace tested and
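    The two numeric triggers quoted from the Regulations lend themselves to a direct check: the radon provisions bite when the 24-hour average concentration exceeds 400 Bq m-3, unless the 8-hour average of short-lived daughters stays at or below 6.24 x 10-7 J m-3. A sketch of that decision logic (function and variable names are ours, not the Regulations'):

```python
RADON_LIMIT_BQ_M3 = 400.0        # 24-hour average radon-222 concentration
DAUGHTERS_LIMIT_J_M3 = 6.24e-7   # 8-hour average of short-lived daughters

def regulations_apply(radon_24h_avg, daughters_8h_avg):
    """True when the stated radon criterion brings a workplace in scope."""
    if radon_24h_avg <= RADON_LIMIT_BQ_M3:
        return False
    # Above 400 Bq/m3, the exception applies only if daughters stay low.
    return daughters_8h_avg > DAUGHTERS_LIMIT_J_M3

print(regulations_apply(550.0, 8.0e-7))   # elevated on both measures
print(regulations_apply(550.0, 4.0e-7))   # daughters below the exemption level
```

    The interesting design point is the two-tier test: the gas concentration is the screening trigger, while the daughter-product energy concentration (the quantity that actually drives lung dose) provides the exemption route.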

  8. Microwave implementation of two-source energy balance approach for estimating evapotranspiration

    Directory of Open Access Journals (Sweden)

    T. R. H. Holmes

    2018-02-01

    A newly developed microwave (MW) land surface temperature (LST) product is used to substitute thermal infrared (TIR)-based LST in the Atmosphere–Land Exchange Inverse (ALEXI) modeling framework for estimating evapotranspiration (ET) from space. ALEXI implements a two-source energy balance (TSEB) land surface scheme in a time-differential approach, designed to minimize sensitivity to absolute biases in input records of LST through the analysis of the rate of temperature change in the morning. Thermal infrared retrievals of the diurnal LST curve, traditionally from geostationary platforms, are hindered by cloud cover, reducing model coverage on any given day. This study tests the utility of diurnal temperature information retrieved from a constellation of satellites with microwave radiometers that together provide six to eight observations of Ka-band brightness temperature per location per day. This represents the first ever attempt at a global implementation of ALEXI with MW-based LST and is intended as the first step towards providing all-weather capability to the ALEXI framework. The analysis is based on 9-year-long, global records of ALEXI ET generated using both MW- and TIR-based diurnal LST information as input. In this study, the MW-LST (MW-based LST) sampling is restricted to the same clear-sky days as in the IR-based implementation to be able to analyze the impact of changing the LST dataset separately from the impact of sampling all-sky conditions. The results show that long-term bulk ET estimates from both LST sources agree well, with a spatial correlation of 92 % for total ET in the Europe–Africa domain and agreement in seasonal (3-month) totals of 83–97 % depending on the time of year. Most importantly, the ALEXI-MW (MW-based ALEXI) also matches ALEXI-IR (IR-based ALEXI) very closely in terms of 3-month inter-annual anomalies, demonstrating its ability to capture the development and extent of drought conditions. Weekly ET output
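    The core quantity both the TIR and MW implementations must supply to the time-differential scheme is the mid-morning rise in LST. A sketch of estimating that rise rate by least squares from a handful of (time, LST) samples, such as a MW constellation with six to eight daily overpasses might provide (the observation values are invented):

```python
# Sparse morning overpasses: (local solar time in hours, LST in kelvin).
obs = [(7.0, 291.5), (8.5, 294.8), (10.0, 298.0), (11.5, 300.6)]

def morning_rise_rate(samples):
    """Least-squares slope of LST versus time (K per hour)."""
    n = len(samples)
    mt = sum(t for t, _ in samples) / n
    ml = sum(l for _, l in samples) / n
    num = sum((t - mt) * (l - ml) for t, l in samples)
    den = sum((t - mt) ** 2 for t, _ in samples)
    return num / den

rate = morning_rise_rate(obs)
print(round(rate, 2), "K/h")
```

    Because only the slope enters the energy balance, a constant bias in any single sensor's LST cancels out, which is the robustness property the time-differential approach is designed around.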

  9. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    Science.gov (United States)

    Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
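    The quantity being estimated is conceptually simple: mass discharge is the integral of concentration times Darcy flux over the control plane. A toy Monte Carlo sketch that propagates uncertainty in conductivity, gradient, and concentration through a cell-by-cell sum (all distributions and parameter values are invented, and this ignores the spatial correlation and data conditioning that the paper's geostatistical method provides):

```python
import random

random.seed(42)

CELL_AREA_M2 = 1.0     # area of each control-plane cell
N_CELLS = 50
N_REALIZATIONS = 2000

def mass_discharge_realization():
    """One equally likely realization of mass discharge (g/day)."""
    grad = random.gauss(0.005, 0.001)           # hydraulic gradient (-)
    total = 0.0
    for _ in range(N_CELLS):
        k = 10 ** random.gauss(-4.0, 0.5)       # conductivity (m/s), lognormal
        conc = random.lognormvariate(0.0, 1.0)  # concentration (g/m3)
        total += k * grad * conc * CELL_AREA_M2  # Darcy flux x conc, g/s
    return total * 86400.0                      # convert g/s to g/day

samples = sorted(mass_discharge_realization() for _ in range(N_REALIZATIONS))
p5, p50, p95 = (samples[int(f * N_REALIZATIONS)] for f in (0.05, 0.5, 0.95))
print(f"median {p50:.2f} g/day, 90% interval [{p5:.2f}, {p95:.2f}]")
```

    Reporting the full percentile range rather than a single number is the point of the exercise: the decision-relevant output is the uncertainty band on the discharge, not its best estimate.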

  10. Classical-Equivalent Bayesian Portfolio Optimization for Electricity Generation Planning

    Directory of Open Access Journals (Sweden)

    Hellinton H. Takada

    2018-01-01

    Full Text Available There are several electricity generation technologies based on different sources such as wind, biomass, gas, coal, and so on. The consideration of the uncertainties associated with the future costs of such technologies is crucial for planning purposes. In the literature, the allocation of resources in the available technologies has been solved as a mean-variance optimization problem assuming knowledge of the expected values and the covariance matrix of the costs. However, in practice, they are not exactly known parameters. Consequently, the obtained optimal allocations from the mean-variance optimization are not robust to possible estimation errors of such parameters. Additionally, it is usual to have electricity generation technology specialists participating in the planning processes and, obviously, the consideration of useful prior information based on their previous experience is of utmost importance. The Bayesian models consider not only the uncertainty in the parameters, but also the prior information from the specialists. In this paper, we introduce the classical-equivalent Bayesian mean-variance optimization to solve the electricity generation planning problem using both improper and proper prior distributions for the parameters. In order to illustrate our approach, we present an application comparing the classical-equivalent Bayesian with the naive mean-variance optimal portfolios.
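
    The closed-form core of mean-variance allocation can be sketched in a few lines. The sketch below computes only the global minimum-variance portfolio under a budget constraint, with a hypothetical covariance matrix of technology costs; the paper's classical-equivalent Bayesian version would instead plug in posterior (predictive) moments that blend specialist priors with data.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance allocation subject to sum(w) == 1:
    w = Sigma^-1 1 / (1' Sigma^-1 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)
    return x / (ones @ x)

# Hypothetical cost covariance for three generation technologies
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
print(w.round(3), round(w.sum(), 6))  # weights sum to 1
```

    In the Bayesian setting, `cov` would be replaced by the posterior predictive covariance of future technology costs, which shrinks the allocation toward the specialists' prior view.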

  11. Modelling of dynamic equivalents in electric power grids

    International Nuclear Information System (INIS)

    Craciun, Diana Iuliana

    2010-01-01

In a first part, this research thesis describes the context and new constraints of electric grids: architecture, decentralized production with the impact of distributed energy resource systems, dynamic simulation, and the interest of equivalent models. The author then discusses the modelling of the different components of electric grids: synchronous and asynchronous machines, distributed energy resources with power electronic interfaces, and load models. She addresses techniques for the reduction of electric grid models: conventional reduction methods, and dynamic equivalence methods using non-linear approaches or evolutionary algorithms for parameter estimation. This last approach is then developed and implemented, and a new method for computing dynamic equivalents is described

  12. Symmetries of dynamically equivalent theories

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D.M.; Tyutin, I.V. [Sao Paulo Univ., SP (Brazil). Inst. de Fisica; Lebedev Physics Institute, Moscow (Russian Federation)

    2006-03-15

A natural and very important development of constrained system theory is a detailed study of the relation between the constraint structure in the Hamiltonian formulation and specific features of the theory in the Lagrangian formulation, especially the relation between the constraint structure and the symmetries of the Lagrangian action. An important preliminary step in this direction is a strict demonstration, which is the aim of the present article, that the symmetry structures of the Hamiltonian action and of the Lagrangian action are the same. Once this is proved, it is sufficient to consider the symmetry structure of the Hamiltonian action. The latter problem is, in some sense, simpler because the Hamiltonian action is a first-order action. At the same time, the study of the symmetries of the Hamiltonian action naturally involves Hamiltonian constraints as basic objects. One can see that the Lagrangian and Hamiltonian actions are dynamically equivalent. This is why, in the present article, we consider from the very beginning a more general problem: how the symmetry structures of dynamically equivalent actions are related. First, we present some necessary notions and relations concerning infinitesimal symmetries in general, as well as a strict definition of dynamically equivalent actions. Finally, we demonstrate that there exists an isomorphism between classes of equivalent symmetries of dynamically equivalent actions. (author)

  13. On equivalent resistance of electrical circuits

    Science.gov (United States)

    Kagan, Mikhail

    2015-01-01

    While the standard (introductory physics) way of computing the equivalent resistance of nontrivial electrical circuits is based on Kirchhoff's rules, there is a mathematically and conceptually simpler approach, called the method of nodal potentials, whose basic variables are the values of the electric potential at the circuit's nodes. In this paper, we review the method of nodal potentials and illustrate it using the Wheatstone bridge as an example. We then derive a closed-form expression for the equivalent resistance of a generic circuit, which we apply to a few sample circuits. The result unveils a curious interplay between electrical circuits, matrix algebra, and graph theory and its applications to computer science. The paper is written at a level accessible by undergraduate students who are familiar with matrix arithmetic. Additional proofs and technical details are provided in appendices.
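
    The interplay with matrix algebra and graph theory that the abstract mentions can be illustrated with the standard effective-resistance formula: building the weighted graph Laplacian of conductances, the equivalent resistance between nodes a and b is (e_a - e_b)' L+ (e_a - e_b), with L+ the Moore-Penrose pseudoinverse. A minimal sketch (function name and node numbering are illustrative, not from the paper):

```python
import numpy as np

def equivalent_resistance(n, edges, a, b):
    """Equivalent resistance between nodes a and b of a resistor network.

    edges: list of (i, j, R) resistors. Builds the conductance Laplacian L
    and evaluates R_ab = (e_a - e_b)^T L^+ (e_a - e_b)."""
    L = np.zeros((n, n))
    for i, j, r in edges:
        g = 1.0 / r
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    Lp = np.linalg.pinv(L)  # L is singular; use the pseudoinverse
    e = np.zeros(n)
    e[a], e[b] = 1.0, -1.0
    return float(e @ Lp @ e)

# Wheatstone bridge, all resistors 1 ohm: input node 0, output node 1,
# bridge arm between nodes 2 and 3
bridge = [(0, 2, 1.0), (0, 3, 1.0), (2, 1, 1.0), (3, 1, 1.0), (2, 3, 1.0)]
print(round(equivalent_resistance(4, bridge, 0, 1), 6))  # balanced bridge → 1.0
```

    For the balanced bridge, symmetry forces zero current through the bridge arm, so the result reduces to two 2-ohm branches in parallel: 1 ohm.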

  14. [A landscape ecological approach for urban non-point source pollution control].

    Science.gov (United States)

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

Urban non-point source pollution is a new problem that has emerged with the accelerating development of urbanization. The particularities of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution and more difficult to control. Best Management Practices (BMPs) are effective practices commonly applied to control urban non-point source pollution, mainly adopting local remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting inherent landscape structures or introducing new landscape elements to form new landscape patterns, and combining landscape planning and management by applying BMPs within the planning to improve urban landscape heterogeneity and to control urban non-point source pollution.

  15. Nitrate source apportionment using a combined dual isotope, chemical and bacterial property, and Bayesian model approach in river systems

    Science.gov (United States)

    Xia, Yongqiu; Li, Yuefei; Zhang, Xinyu; Yan, Xiaoyuan

    2017-01-01

    Nitrate (NO3-) pollution is a serious problem worldwide, particularly in countries with intensive agricultural and population activities. Previous studies have used δ15N-NO3- and δ18O-NO3- to determine the NO3- sources in rivers. However, this approach is subject to substantial uncertainties and limitations because of the numerous NO3- sources, the wide isotopic ranges, and the existing isotopic fractionations. In this study, we outline a combined procedure for improving the determination of NO3- sources in a paddy agriculture-urban gradient watershed in eastern China. First, the main sources of NO3- in the Qinhuai River were examined by the dual-isotope biplot approach, in which we narrowed the isotope ranges using site-specific isotopic results. Next, the bacterial groups and chemical properties of the river water were analyzed to verify these sources. Finally, we introduced a Bayesian model to apportion the spatiotemporal variations of the NO3- sources. Denitrification was first incorporated into the Bayesian model because denitrification plays an important role in the nitrogen pathway. The results showed that fertilizer contributed large amounts of NO3- to the surface water in traditional agricultural regions, whereas manure effluents were the dominant NO3- source in intensified agricultural regions, especially during the wet seasons. Sewage effluents were important in all three land uses and exhibited great differences between the dry season and the wet season. This combined analysis quantitatively delineates the proportion of NO3- sources from paddy agriculture to urban river water for both dry and wet seasons and incorporates isotopic fractionation and uncertainties in the source compositions.
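
    The mass-balance idea behind such isotope mixing models can be sketched in a few lines. The source signatures and mixture values below are hypothetical, and the simplex grid search stands in for the full Bayesian machinery (priors, denitrification fractionation and uncertainty terms are omitted):

```python
import numpy as np

# Hypothetical (d15N, d18O) signatures in permil for three NO3- sources,
# and one hypothetical river sample to be apportioned
sources = np.array([[9.0, 3.0],    # sewage / manure
                    [1.0, 1.0],    # fertilizer / soil organic N
                    [3.0, 14.0]])  # atmospheric deposition
mixture = np.array([5.0, 4.0])

# Grid search over source fractions on the simplex f1 + f2 + f3 = 1,
# minimising the squared mismatch of the mixing equation f @ sources = mixture
best, best_err = None, np.inf
for f1 in np.linspace(0, 1, 101):
    for f2 in np.linspace(0, 1 - f1, 101):
        f = np.array([f1, f2, 1 - f1 - f2])
        err = np.sum((f @ sources - mixture) ** 2)
        if err < best_err:
            best, best_err = f, err
print(best.round(2), round(best_err, 4))
```

    A Bayesian model such as the one used in the paper replaces this point estimate with a posterior distribution over the fractions, which is how the source-composition uncertainties are propagated.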

  16. Matching of equivalent field regions

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen; Rengarajan, S.B.

    2005-01-01

In aperture problems, integral equations for equivalent currents are often found by enforcing matching of equivalent fields. The enforcement is made in the aperture surface region adjoining the two volumes on each side of the aperture. In the case of an aperture in a planar perfectly conducting...... screen, having the same homogeneous medium on both sides and an impressed current on one side, an alternative procedure is relevant. We make use of the fact that in the aperture the tangential component of the magnetic field due to the induced currents in the screen is zero. The use of such a procedure...... shows that equivalent currents can be found by a consideration of only one of the two volumes into which the aperture plane divides the space. Furthermore, from a consideration of an automatic matching at the aperture, additional information about tangential as well as normal field components......

  17. Teleparallel equivalent of Lovelock gravity

    Science.gov (United States)

    González, P. A.; Vásquez, Yerko

    2015-12-01

There is a growing interest in modified gravity theories based on torsion, as these theories exhibit interesting cosmological implications. In this work, inspired by the teleparallel formulation of general relativity, we present its extension to Lovelock gravity, known as the most natural extension of general relativity in higher-dimensional space-times. First, we review the teleparallel equivalent of general relativity and Gauss-Bonnet gravity, and then we construct the teleparallel equivalent of Lovelock gravity. In order to achieve this goal, we use the vielbein and the connection without imposing the Weitzenböck connection. Then, we extract the teleparallel formulation of the theory by setting the curvature to zero.

  18. Attainment of radiation equivalency principle

    International Nuclear Information System (INIS)

    Shmelev, A.N.; Apseh, V.A.

    2004-01-01

Problems connected with the prospects for long-term development of nuclear energetics are discussed. Basic principles of a future large-scale nuclear energetics are listed; primary attention is paid to the safety of radioactive waste management. The radiation equivalence principle implies closure of the fuel cycle and management of nuclear material transportation with low losses in spent fuel and waste processing. Two aspects are considered: radiation equivalence in the global and in the local sense. The necessity of looking for other fuel cycle management strategies for full-scale nuclear energy with respect to radioactive waste management is supported [ru

  19. SINQ - a continuous spallation neutron source (an approach to 1 MWatt of beam power)

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, W.E. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1995-11-01

In this status report we describe the continuous spallation source at PSI, which will come into operation in fall 1996. We present the current state of the construction work and review the expected performance of the source. (author) 10 figs., 2 tabs., refs.

  20. SINQ - a continuous spallation neutron source (an approach to 1 MWatt of beam power)

    International Nuclear Information System (INIS)

    Fischer, W.E.

    1995-01-01

In this status report we describe the continuous spallation source at PSI, which will come into operation in fall 1996. We present the current state of the construction work and review the expected performance of the source. (author) 10 figs., 2 tabs., refs.

  1. A new optimization approach for source-encoding full-waveform inversion

    NARCIS (Netherlands)

    Moghaddam, P.P.; Keers, H.; Herrmann, F.J.; Mulder, W.A.

    2013-01-01

    Waveform inversion is the method of choice for determining a highly heterogeneous subsurface structure. However, conventional waveform inversion requires that the wavefield for each source is computed separately. This makes it very expensive for realistic 3D seismic surveys. Source-encoding waveform

  2. A wavenumber approach to analysing the active control of plane waves with arrays of secondary sources

    Science.gov (United States)

    Elliott, Stephen J.; Cheer, Jordan; Bhan, Lam; Shi, Chuang; Gan, Woon-Seng

    2018-04-01

The active control of an incident sound field with an array of secondary sources is a fundamental problem in active control. In this paper the optimal performance of an infinite array of secondary sources in controlling a plane incident sound wave is first considered in free space. An analytic solution for normal incidence plane waves is presented, indicating a clear cut-off frequency for good performance, when the separation distance between the uniformly-spaced sources is equal to a wavelength. The extent of the near field pressure close to the source array is also quantified, since this determines the positions of the error microphones in a practical arrangement. The theory is also extended to oblique incident waves. This result is then compared with numerical simulations of controlling the sound power radiated through an open aperture in a rigid wall, subject to an incident plane wave, using an array of secondary sources in the aperture. In this case the diffraction through the aperture becomes important when its size is comparable with the acoustic wavelength, in which case only a few sources are necessary for good control. When the size of the aperture is large compared to the wavelength, diffraction is less important but more secondary sources are needed for good control, and the results then become similar to those for the free-field problem with an infinite source array.
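
    The cut-off condition described above is easy to evaluate numerically. The source spacing below is hypothetical, and the oblique-incidence bound d(1 + sin θ) ≤ λ is the standard grating-lobe condition for a uniformly spaced array:

```python
import numpy as np

c = 343.0  # speed of sound in air, m/s
d = 0.2    # secondary source spacing, m (hypothetical value)

# Normal incidence: good control up to the frequency where the spacing
# equals one wavelength, f = c / d
f_normal = c / d

# Oblique incidence at angle theta tightens the condition to
# d * (1 + sin(theta)) <= lambda
theta = np.radians(30.0)
f_oblique = c / (d * (1 + np.sin(theta)))

print(f"{f_normal:.0f} Hz, {f_oblique:.0f} Hz")  # 1715 Hz, 1143 Hz
```

    As expected, oblique incidence lowers the usable bandwidth of a given array, since the trace wavelength along the array is shortened.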

  3. Modern software approaches applied to a Hydrological model: the GEOtop Open-Source Software Project

    Science.gov (United States)

    Cozzini, Stefano; Endrizzi, Stefano; Cordano, Emanuele; Bertoldi, Giacomo; Dall'Amico, Matteo

    2017-04-01

The GEOtop hydrological scientific package is an integrated hydrological model that simulates the heat and water budgets at and below the soil surface. It describes the three-dimensional water flow in the soil and the energy exchange with the atmosphere, considering the radiative and turbulent fluxes. Furthermore, it reproduces the highly non-linear interactions between the water and energy balance during soil freezing and thawing, and simulates the temporal evolution of snow cover, soil temperature and moisture. The core components of the package were presented in the 2.0 version (Endrizzi et al, 2014), which was released as a Free Software Open-source project. However, despite the high scientific quality of the project, a modern software engineering approach was still missing. This weakness hindered its scientific potential and its use both as a standalone package and, more importantly, in an integrated way with other hydrological software tools. In this contribution we present our recent software re-engineering efforts to create a robust and stable scientific software package open to the hydrological community, easily usable by researchers and experts, and interoperable with other packages. The activity takes as its starting point the 2.0 version, scientifically tested and published. This version, together with several test cases based on recently published or available GEOtop applications (Cordano and Rigon, 2013, WRR; Kollet et al, 2016, WRR), provides the baseline code and a number of referenced results as benchmarks. Comparison and scientific validation can then be performed for each software re-engineering activity performed on the package. To keep track of every single change, the package is published on its own github repository geotopmodel.github.io/geotop/ under the GPL v3.0 license. A Continuous Integration mechanism by means of Travis-CI has been enabled on the github repository on the master and main development branches. The usage of CMake configuration tool

  4. {sup 103}Pd strings: Monte Carlo assessment of a new approach to brachytherapy source design

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, Mark J., E-mail: mark.j.rivard@gmail.com [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Reed, Joshua L.; DeWerd, Larry A. [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)

    2014-01-15

Purpose: A new type of {sup 103}Pd source (CivaString and CivaThin by CivaTech Oncology, Inc.) is examined. The source contains {sup 103}Pd and Au radio-opaque marker(s), all contained within low-Z{sub eff} organic polymers that permit source flexibility. The CivaString source is available in lengths L of 10, 20, 30, 40, 50, and 60 mm, and referred to in the current study as CS10–CS60, respectively. A thinner design, CivaThin, has sources designated as CT10–CT60, respectively. The CivaString and CivaThin sources are 0.85 and 0.60 mm in diameter, respectively. The source design is novel and offers an opportunity to examine its interesting dosimetric properties in comparison to conventional {sup 103}Pd seeds. Methods: The MCNP5 radiation transport code was used to estimate air-kerma rate and dose rate distributions with polar and cylindrical coordinate systems. Doses in water and prostate tissue phantoms were compared to determine differences between the TG-43 formalism and realistic clinical circumstances. The influence of Ti encapsulation and 2.7 keV photons was examined. The accuracy of superposition of dose distributions from shorter sources to create longer source dose distributions was also assessed. Results: The normalized air-kerma rate was not highly dependent on L or the polar angle θ, with results being nearly identical between the CivaString and CivaThin sources for common L. The air-kerma strength was also weakly dependent on L. The uncertainty analysis established a standard uncertainty of 1.3% for the dose-rate constant Λ, where the largest contributors were μ{sub en}/ρ and μ/ρ. The Λ values decreased with increasing L, which was largely explained by differences in solid angle. The radial dose function did not substantially vary among the CivaString and CivaThin sources for r ≥ 1 cm. However, behavior for r < 1 cm indicated that the Au marker(s) shielded radiation for the sources having L = 10, 30, and 50 mm. The 2D anisotropy function
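
    The superposition check mentioned in the Methods section can be illustrated with a geometry-only sketch: the dose of a long line source equals the sum of the doses of shorter, translated segments of the same strength per unit length. The kernel below uses only the inverse-square geometry factor (no attenuation, scatter or anisotropy, unlike the full TG-43 formalism), so it illustrates the additivity argument rather than the published dosimetry:

```python
import numpy as np

def line_dose(y, z, L, n=2000):
    """Relative dose at transverse distance y and axial offset z from the
    centre of a line source of length L, integrating point kernels ~ 1/r^2
    by the midpoint rule (unit strength per unit length)."""
    dz = L / n
    zs = np.linspace(-L / 2 + dz / 2, L / 2 - dz / 2, n)  # segment midpoints
    return float(np.sum(1.0 / (y ** 2 + (z - zs) ** 2)) * dz)

# Dose of a 30 mm source vs. superposition of three 10 mm segments
# centred at -10, 0 and +10 mm (distances in mm, arbitrary dose units)
d30 = line_dose(10.0, 0.0, 30.0)
d_sup = sum(line_dose(10.0, -c, 10.0) for c in (-10.0, 0.0, 10.0))
print(round(d30, 6), round(d_sup, 6))  # the two values agree
```

    In the geometric limit the two results are identical, because the integral over the full length splits exactly into the three segment integrals; in the real sources the Au markers and end effects perturb this additivity, which is why the paper assessed it explicitly.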

  5. Information sources in biomedical science and medical journalism: methodological approaches and assessment.

    Science.gov (United States)

    Miranda, Giovanna F; Vercellesi, Luisa; Bruno, Flavia

    2004-09-01

Throughout the world the public is showing increasing interest in medical and scientific subjects, and journalists largely spread this information, with an important impact on knowledge and health. Clearly, therefore, the relationship between the journalist and his sources is delicate: freedom and independence of information depend on the independence and truthfulness of the sources. The new "precision journalism" holds that scientific methods should be applied to journalism, so authoritative sources are a common need for journalists and scientists. We therefore compared the individual classifications and methods of assessing sources in biomedical science and medical journalism to try to extrapolate scientific methods of evaluation to journalism. In journalism and science, terms used to classify sources of information show some similarities, but their meanings are different. In science, primary and secondary classes of information, for instance, refer to the levels of processing, but in journalism to the official nature of the source itself. Scientists and journalists must both always consult as many sources as possible and check their authoritativeness, reliability, completeness, up-to-dateness and balance. In journalism, however, there are some important differences and limits: too many sources can sometimes diminish the quality of the information. The sources serve as a first filter between the event and the journalist, who is not providing the reader with the fact, but with its projection. Journalists have time constraints and lack the objective criteria for searching, the specific background knowledge, and the expertise to fully assess sources. To assist in understanding the wealth of sources of information in journalism, we have prepared a checklist of items and questions. There are at least four fundamental points that a good journalist, like any scientist, should know: how to find the latest information (the sources), how to assess it (the quality and

  6. Comments on field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1987-01-01

It is pointed out that often-used arguments based on a short-circuit concept in presentations of field equivalence principles are not correct. An alternative presentation based on the uniqueness theorem is given. It does not contradict the results obtained by using the short-circuit concept...

  7. Using the “Footprint” Approach to Examine the Potentials and Impacts of Renewable Energy Sources in the European Alps

    Directory of Open Access Journals (Sweden)

    Richard Hastik

    2016-05-01

    Full Text Available The expansion of renewable energies is regarded as a key way to mitigate global climate change and to ensure the provision of energy in the long term. However, conflicts between these goals and local nature conservation goals are likely to increase because of the additional space required for renewable energies. This is particularly true for mountainous areas with biodiversity-rich ecosystems. Little effort has been undertaken to systematically compare different renewable energy sources and to examine their environmental impacts using an interdisciplinary approach. This study adapted the concept of the “ecological footprint” to examine the impact on ecosystem services of land use changes involved in exploiting renewable energy sources. This innovative approach made it possible to assess and communicate the potentials of those energy sources in light of both space consumption and sustainability. The European Alps are an ideal test area because of their high energy potentials and biodiversity-rich ecosystems and the high demand for multiple ecosystem services. Our results demonstrate that energy consumption in the Alps could not be covered with the available renewable energy potentials, despite the utilization of large parts of the Alpine land area and the majority of larger rivers. Therefore, considerable effort must be invested in resolving conflicting priorities between expanding renewable energies and nature conservation, but also in realizing energy-saving measures. To this end, the approach presented here can support decision-making by revealing the energy potentials, space requirements, and environmental impacts of different renewable energy sources.

  8. Source of uranium in the Elblag formation (upper buntsandstein): sedimentological approach

    International Nuclear Information System (INIS)

    Jaworowski, K.

    1986-01-01

On the basis of the results of a sedimentological survey of the area of typical development of the Elblag formation, evidence supporting a previously published hypothesis on the source of uranium in this formation is presented. 10 refs., 9 figs. (M.F.W.)

  9. Non-point source pollution modelling in the watershed managed by Integrated Constructed Wetlands: A GIS approach.

    OpenAIRE

    Vyavahare, Nilesh

    2008-01-01

Non-point source pollution has been recognised as a main cause of eutrophication in Ireland (EPA Ireland, 2001). The Integrated Constructed Wetland (ICW) is a management practice adopted in the Annestown stream watershed, located in south County Waterford in Ireland, used to cleanse farmyard runoff. The present study forms the annual pollution budget for the Annestown stream watershed. The amount of pollution from non-point sources flowing into the stream was simulated by using GIS techniques; u...

  10. A Method of Auxiliary Sources Approach for Modelling the Impact of Ground Planes on Antenna

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2006-01-01

The Method of Auxiliary Sources (MAS) is employed to model the impact of finite ground planes on the radiation from antennas. Two different antenna test cases are shown and the calculated results agree well with reference measurements.

  11. Differential utility of the Bacteroidales DNA and RNA markers in the tiered approach for microbial source tracking in subtropical seawater.

    Science.gov (United States)

    Liu, Rulong; Cheng, Ken H F; Wong, Klaine; Cheng, Samuel C S; Lau, Stanley C K

    2015-07-01

    Source tracking of fecal pollution is an emerging component in water quality monitoring. It may be implemented in a tiered approach involving Escherichia coli and/or Enterococcus spp. as the standard fecal indicator bacteria (FIB) and the 16S rRNA gene markers of Bacteroidales as source identifiers. The relative population dynamics of the source identifiers and the FIB may strongly influence the implementation of such approach. Currently, the relative performance of DNA and RNA as detection targets of Bacteroidales markers in the tiered approach is not known. We compared the decay of the DNA and RNA of the total (AllBac) and ruminant specific (CF128) Bacteroidales markers with those of the FIB in seawater spiked with cattle feces. Four treatments of light and oxygen availability simulating the subtropical seawater of Hong Kong were tested. All Bacteroidales markers decayed significantly slower than the FIB in all treatments. Nonetheless, the concentrations of the DNA and RNA markers and E. coli correlated significantly in normoxic seawater independent of light availability, and in hypoxic seawater only under light. In hypoxic seawater without light, the concentrations of RNA but not DNA markers correlated with that of E. coli. Generally, the correlations between Enterococcus spp. and Bacteroidales were insignificant. These results suggest that either DNA or RNA markers may complement E. coli in the tiered approach for normoxic or hypoxic seawater under light. When light is absent, either DNA or RNA markers may serve for normoxic seawater, but only the RNA markers are suitable for hypoxic seawater.

  12. Methodology for safety and security of radioactive sources and materials. The Israeli approach

    International Nuclear Information System (INIS)

    Keren, M.

    1998-01-01

About 10 radiation incidents occurred in Israel during 1996-1997. Some of them involved the theft or loss of radioactive equipment or sources, some happened because of the misuse of radioactive equipment, and some for other reasons. Part of them could have been avoided if a better methodological attitude to the subject had existed. A new methodology for notification, registration and licensing is described. Hopefully this methodology will increase defence in depth and the safety and security of radioactive sources and materials. Information on the inventory of radioactive sources and materials is essential: where they are situated, what the supply rate is, and their whole history from cradle to grave. The persons involved are important: who the Radiation Safety Officers (RSO) are, and what their training and updating programmes are. As much information as possible is needed on the sites and places where these radioactive sources and materials are used. Procedures for the security of sources and materials are part of the site information, besides safety precautions. Users are obliged to report any changes and to ask for confirmation of those changes. The same applies when high-activity sources are moved across the country. (author)

  13. Identification of the sources of primary organic aerosols at urban schools: A molecular marker approach

    International Nuclear Information System (INIS)

    Crilley, Leigh R.; Qadir, Raeed M.; Ayoko, Godwin A.; Schnelle-Kreis, Jürgen; Abbaszade, Gülcin; Orasche, Jürgen; Zimmermann, Ralf; Morawska, Lidia

    2014-01-01

Children are particularly susceptible to air pollution, and schools are examples of urban microenvironments that can account for a large portion of children's exposure to airborne particles. Thus this paper aimed to determine the sources of primary airborne particles that children are exposed to at school by analyzing selected organic molecular markers at 11 urban schools in Brisbane, Australia. Positive matrix factorization analysis identified four sources at the schools: vehicle emissions, biomass burning, meat cooking and plant wax emissions, accounting for 45%, 29%, 16% and 7% of the organic carbon, respectively. Biomass burning peaked in winter due to prescribed burning of bushland around Brisbane. Overall, the results indicated that both local (traffic) and regional (biomass burning) sources of primary organic aerosols influence the levels of ambient particles that children are exposed to at the schools. These results have implications for potential control strategies for mitigating exposure at schools. - Highlights: • Selected organic molecular markers at 11 urban schools were analyzed. • Four sources of primary organic aerosols were identified by PMF at the schools. • Both local and regional sources were found to influence exposure at the schools. • The results have implications for mitigation of children's exposure at schools. - The identification of the most important sources of primary organic aerosols at urban schools has implications for control strategies for mitigating children's exposure at schools
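
    Positive matrix factorization resolves a data matrix X (samples × marker species) into non-negative source contributions G and source profiles F, X ≈ GF. The sketch below runs plain multiplicative-update NMF on synthetic data as an unweighted stand-in: real PMF (e.g. EPA PMF) additionally weights each residual by its measurement uncertainty, which this illustration omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 50 samples x 8 marker species generated from 3
# hypothetical source profiles, mimicking the structure PMF resolves
G_true = rng.random((50, 3))
F_true = rng.random((3, 8))
X = G_true @ F_true

# Multiplicative-update NMF (Lee-Seung): updates keep G and F non-negative
k = 3
G = rng.random((50, k)) + 0.1
F = rng.random((k, 8)) + 0.1
for _ in range(500):
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
    G *= (X @ F.T) / (G @ (F @ F.T) + 1e-12)

err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(round(err, 4))  # small relative reconstruction error
```

    The rows of F play the role of the four source profiles identified in the paper, and each row of G gives the contribution of each source to one sample.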

  14. Tracing the Nitrate Sources of the Yili River in the Taihu Lake Watershed: A Dual Isotope Approach

    Directory of Open Access Journals (Sweden)

    Haiao Zeng

    2014-12-01

    Full Text Available As the third largest freshwater lake in China, Taihu Lake has experienced severe cyanobacterial blooms and associated water quality degradation in recent decades, threatening the human health and sustainable development of cities in the watershed. The Yili River is a main river of Taihu Lake, contributing about 30% of the total nitrogen load entering the lake. Tracing the nitrate sources of Yili River can inform the origin of eutrophication in Taihu Lake and provide hints for effective control measures. This paper explored the nitrate sources and cycling of the Yili River based on dual nitrogen (δ15N and oxygen (δ18O isotopic compositions. Water samples collected during both the wet and dry seasons from different parts of the Yili River permitted the analysis of the seasonal and spatial variations of nitrate concentrations and sources. Results indicated that the wet season has higher nitrate concentrations than the dry season despite the stronger dilution effects, suggesting a greater potential of cyanobacterial blooms in summer. The δ15N-NO3− values were in the range of 4.0‰–14.0‰ in the wet season and 4.8‰–16.9‰ in dry, while the equivalent values of δ18O were 0.5‰–17.8‰ and 3.5‰–15.6‰, respectively. The distribution of δ15N-NO3− and δ18O-NO3− indicated that sewage and manure as well as fertilizer and soil organic matter were the major nitrate sources of the Yili River. Atmospheric deposition was an important nitrate source in the upper part of Yili River but less so in the middle and lower reaches due to increasing anthropogenic contamination. Moreover, there was a positive relationship between δ18O-NO3− and δ15N-NO3− in the wet season, indicating a certain extent of denitrification. In contrast, the δ18O-δ15N relationship in the dry season was significantly negative, suggesting that the δ15N and δ18O values were determined by a mixing of different nitrate sources.

  15. EEG source space analysis of the supervised factor analytic approach for the classification of multi-directional arm movement

    Science.gov (United States)

    Shenoy Handiru, Vikram; Vinod, A. P.; Guan, Cuntai

    2017-08-01

Objective. In electroencephalography (EEG)-based brain-computer interface (BCI) systems for motor control tasks, the conventional practice is to decode motor intentions from scalp EEG. However, scalp EEG reveals only limited information about the complex tasks of movement with a higher degree of freedom. Therefore, our objective is to investigate the effectiveness of source-space EEG in extracting relevant features that discriminate arm movement in multiple directions. Approach. We have proposed a novel feature extraction algorithm based on supervised factor analysis that models the data from source-space EEG. To this end, we computed the features from the source dipoles confined to Brodmann areas of interest (BA4a, BA4p and BA6). Further, we embedded class-wise labels of multi-direction (multi-class) source-space EEG into an unsupervised factor analysis to turn it into a supervised learning method. Main Results. Our approach provided an average decoding accuracy of 71% for the classification of hand movement in four orthogonal directions, which is significantly higher (by >10%) than the classification accuracy obtained using state-of-the-art spatial pattern features in sensor space. Also, the group analysis of the spectral characteristics of source-space EEG indicates that the slow cortical potentials from a set of cortical source dipoles reveal discriminative information regarding the movement parameter, direction. Significance. This study presents evidence that low-frequency components in the source space play an important role in movement kinematics, and thus it may lead to new strategies for BCI-based neurorehabilitation.

  16. The performance of low pressure tissue-equivalent chambers and a new method for parameterising the dose equivalent

    International Nuclear Information System (INIS)

    Eisen, Y.

    1986-01-01

The performance of Rossi-type spherical tissue-equivalent chambers with equivalent diameters between 0.5 μm and 2 μm was tested experimentally using monoenergetic and polyenergetic neutron sources in the energy region of 10 keV to 14.5 MeV. In agreement with theoretical predictions, both chambers failed to provide LET information at low neutron energies. A dose equivalent algorithm was derived that utilises the event distribution but does not attempt to correlate event size with LET. The algorithm was predicted theoretically and confirmed by experiment. The algorithm that was developed determines the neutron dose equivalent, from the data of the 0.5 μm chamber, to better than ±20% over the energy range of 30 keV to 14.5 MeV. The same algorithm also determines the dose equivalent from the data of the 2 μm chamber to better than ±20% over the energy range of 60 keV to 14.5 MeV. The efficiency of the chambers is 33 counts per μSv, or equivalently about 10 counts·s⁻¹ per mSv·h⁻¹. This efficiency enables the measurement of dose equivalent rates above 1 mSv·h⁻¹ for an integration period of 3 s. Integrated dose equivalents can be measured as low as 1 μSv. (author)
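The quoted chamber efficiency of 33 counts per μSv implies a direct counts-to-dose conversion. A minimal sketch of that unit arithmetic (no event-size processing, which is the substance of the actual algorithm):

```python
# Counts-to-dose conversion from the chamber efficiency quoted in the
# abstract: 33 counts per uSv of neutron dose equivalent.

EFFICIENCY_COUNTS_PER_USV = 33.0

def integrated_dose_uSv(total_counts):
    """Integrated dose equivalent (uSv) from a raw count total."""
    return total_counts / EFFICIENCY_COUNTS_PER_USV

def dose_rate_mSv_per_h(counts_per_s):
    """Dose equivalent rate (mSv/h) from a count rate (counts/s)."""
    return counts_per_s / EFFICIENCY_COUNTS_PER_USV * 3600.0 / 1000.0

# About 9.2 counts/s corresponds to 1 mSv/h, consistent with the quoted
# "about 10 counts per second per mSv/h".
print(round(dose_rate_mSv_per_h(33.0 / 3.6), 3))  # -> 1.0
```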

  17. Total organic carbon, an important tool in a holistic approach to hydrocarbon source fingerprinting

    International Nuclear Information System (INIS)

    Boehm, P.D.; Burns, W.A.; Page, D.S.; Bence, A.E.; Mankiewicz, P.J.; Brown, J.S.; Douglas, G.S.

    2002-01-01

Total organic carbon (TOC) was used to verify the consistency of source allocation results for the natural petrogenic hydrocarbon background of the northern Gulf of Alaska and Prince William Sound, where the Exxon Valdez oil spill occurred in 1989. The samples used in the study were either pre-spill sediments or from the seafloor outside the spill path. It is assumed that the natural petrogenic hydrocarbon background in the area comes from either seep oil residues and shale erosion, including erosion from petroleum source rock shales, or from coals, including those of the Bering River coalfields. The objective of this study was to use the TOC calculations to discriminate between the two very different sources. TOC can constrain the contributions of specific sources and rule out incorrect source allocations, particularly when inputs are dominated by fossil organic carbon. The benthic sediments used in this study showed excellent agreement between measured TOC and calculated TOC from hydrocarbon fingerprint matches of polycyclic aromatic hydrocarbons (PAH) and chemical biomarkers. TOC and fingerprint matches confirmed that TOC sources were properly identified. The matches quantify the hydrocarbon contributions of different sources to the benthic sediments and the degree of hydrocarbon winnowing by waves and currents. It was concluded that the natural petrogenic hydrocarbon background in the sediments in the area comes from eroding Tertiary shales and oil seeps along the northern Gulf of Alaska coast. Thermally mature area coals are excluded from being important contributors to the background at Prince William Sound because of their high TOC content. 26 refs., 4 figs
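The TOC consistency check described here is at heart a mass balance: the TOC implied by a fingerprint-based source allocation must reproduce the measured sediment TOC. A sketch of that check, with hypothetical fractions and per-source TOC contents rather than values from the paper:

```python
# TOC mass-balance check: does a proposed source allocation reproduce the
# measured sediment TOC? Fractions and TOC contents are hypothetical.

def allocated_toc(fractions, toc_contents):
    """TOC (wt%) implied by source mass fractions and per-source TOC."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("source fractions must sum to 1")
    return sum(f * c for f, c in zip(fractions, toc_contents))

# Hypothetical allocation: 70% eroded shale, 20% seep-oil residue, 10% coal
fractions = [0.70, 0.20, 0.10]
toc_wt_pct = [1.0, 5.0, 60.0]  # illustrative TOC of shale, seep oil, coal

calc = allocated_toc(fractions, toc_wt_pct)
measured = 2.0  # hypothetical measured sediment TOC, wt%
print(round(calc, 2))  # -> 7.7

# Even a 10% coal fraction overshoots the measured TOC here, which is the
# sense in which high-TOC coals can be ruled out as major contributors.
```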

  18. Total organic carbon, an important tool in a holistic approach to hydrocarbon source fingerprinting

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, P.D. [Battelle, Waltham, MA (United States); Burns, W.A. [W.A. Burns Consulting Services, Houston, TX (United States); Page, D.S. [Bowdoin College, Brunswick, ME (United States); Bence, A.E.; Mankiewicz, P.J. [ExxonMobil Upstream Research Co., Houston, TX (United States); Brown, J.S.; Douglas, G.S. [Battelle, Duxbury, MA (United States)

    2002-07-01

Total organic carbon (TOC) was used to verify the consistency of source allocation results for the natural petrogenic hydrocarbon background of the northern Gulf of Alaska and Prince William Sound, where the Exxon Valdez oil spill occurred in 1989. The samples used in the study were either pre-spill sediments or from the seafloor outside the spill path. It is assumed that the natural petrogenic hydrocarbon background in the area comes from either seep oil residues and shale erosion, including erosion from petroleum source rock shales, or from coals, including those of the Bering River coalfields. The objective of this study was to use the TOC calculations to discriminate between the two very different sources. TOC can constrain the contributions of specific sources and rule out incorrect source allocations, particularly when inputs are dominated by fossil organic carbon. The benthic sediments used in this study showed excellent agreement between measured TOC and calculated TOC from hydrocarbon fingerprint matches of polycyclic aromatic hydrocarbons (PAH) and chemical biomarkers. TOC and fingerprint matches confirmed that TOC sources were properly identified. The matches quantify the hydrocarbon contributions of different sources to the benthic sediments and the degree of hydrocarbon winnowing by waves and currents. It was concluded that the natural petrogenic hydrocarbon background in the sediments in the area comes from eroding Tertiary shales and oil seeps along the northern Gulf of Alaska coast. Thermally mature area coals are excluded from being important contributors to the background at Prince William Sound because of their high TOC content. 26 refs., 4 figs.

  19. Establishment of a Practical Approach for Characterizing the Source of Particulates in Water Distribution Systems

    Directory of Open Access Journals (Sweden)

    Seon-Ha Chae

    2016-02-01

    Full Text Available Water quality complaints related to particulate matter and discolored water can be troublesome for water utilities in terms of follow-up investigations and implementation of appropriate actions because particulate matter can enter from a variety of sources; moreover, physicochemical processes can affect the water quality during the purification and transportation processes. The origin of particulates can be attributed to sources such as background organic/inorganic materials from water sources, water treatment plants, water distribution pipelines that have deteriorated, and rehabilitation activities in the water distribution systems. In this study, a practical method is proposed for tracing particulate sources. The method entails collecting information related to hydraulic, water quality, and structural conditions, employing a network flow-path model, and establishing a database of physicochemical properties for tubercles and slimes. The proposed method was implemented within two city water distribution systems that were located in Korea. These applications were conducted to demonstrate the practical applicability of the method for providing solutions to customer complaints. The results of the field studies indicated that the proposed method would be feasible for investigating the sources of particulates and for preparing appropriate action plans for complaints related to particulate matter.

  20. Interactions between electricity generation sources and economic activity in Greece: A VECM approach

    International Nuclear Information System (INIS)

    Marques, António Cardoso; Fuinhas, José Alberto; Menegaki, Angeliki N.

    2014-01-01

Highlights: • Adjustment dynamics in electricity sources and industrial production are examined. • Johansen’s method with a conditional VEC model was pursued. • Results confirm endogeneity among variables. • Cointegration relationships for fossil and renewable sources were found. • Renewables are less affected by disturbances in economic activity. - Abstract: The interactions between electricity generation sources and industrial production in Greece were analysed from August 2004 to October 2013. Greece has been subject to a tough economic adjustment under external financial assistance guidelines. In the meantime, the country has remained committed to international agreements concerning the use of renewables. The variables interact with each other, and this endogeneity has been analysed using a VECM. A short-run causal relationship running from conventional fossil sources to economic growth was found. However, there is no evidence of causal relationships from renewable electricity to economic growth, either in the short or the long run. Only economic growth gives rise to renewable electricity, whether in the short or the long run. A fresh insight into the current state of dynamics between electricity sources within an electricity generation system is thus added to the literature. These findings will inform energy policymakers in designing policies both to encourage the incorporation of national technology into renewables and to reduce electricity consumption without hampering economic growth

  1. swLORETA: a novel approach to robust source localization and synchronization tomography

    International Nuclear Information System (INIS)

    Palmero-Soler, Ernesto; Dolan, Kevin; Hadamschek, Volker; Tass, Peter A

    2007-01-01

    Standardized low-resolution brain electromagnetic tomography (sLORETA) is a widely used technique for source localization. However, this technique still has some limitations, especially under realistic noisy conditions and in the case of deep sources. To overcome these problems, we present here swLORETA, an improved version of sLORETA, obtained by incorporating a singular value decomposition-based lead field weighting. We show that the precision of the source localization can further be improved by a tomographic phase synchronization analysis based on swLORETA. The phase synchronization analysis turns out to be superior to a standard linear coherence analysis, since the latter cannot distinguish between real phase locking and signal mixing

  2. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    Science.gov (United States)

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges are detectable against background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  3. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    Directory of Open Access Journals (Sweden)

    Rasheda Arman Chowdhury

Full Text Available Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges are detectable against background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², whatever the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  4. Human health risk assessment screening approach for evaluating contaminants at source control and integrator operable units

    International Nuclear Information System (INIS)

    Blaylock, B.G.; Frank, M.L.; Hoffman, F.O.; Miller, P.D.; White, R.K.; Purucker, S.T.; Redfearn, A.

    1992-10-01

    A more streamlined approach is proposed for executing the Remedial Investigation/Feasibility Study Process. This approach recognizes the uncertainties associated with the process, particularly regarding the derivation of human health risk estimates. The approach is tailored for early identification of sites and contaminants of immediate concern, early remediation of such sites, and early identification of low-risk sites that can be eliminated from further investigations. The purpose is to hasten the clean-up process and do so in a cost-effective manner
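The screening logic sketched in this abstract amounts to triaging sites against risk thresholds. The snippet below illustrates that triage using the 10⁻⁶ to 10⁻⁴ excess-lifetime-cancer-risk range commonly used in such assessments; the site names, risk values, and exact cutoffs are invented for illustration, not taken from the report.

```python
# Streamlined screening triage: bin sites by an upper-bound risk estimate.
# Thresholds follow the commonly used 1e-6 to 1e-4 risk range; the sites
# and their risk values are hypothetical.

LOW, HIGH = 1e-6, 1e-4

def triage(upper_bound_risk):
    """Assign a site to an action bin from its upper-bound risk estimate."""
    if upper_bound_risk >= HIGH:
        return "immediate concern: candidate for early remediation"
    if upper_bound_risk <= LOW:
        return "low risk: candidate for elimination from further study"
    return "retain for further investigation"

sites = {"OU-1": 3e-4, "OU-2": 5e-6, "OU-3": 2e-7}
for name in sorted(sites):
    print(name, "->", triage(sites[name]))
```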

  5. Comparing primary energy attributed to renewable energy with primary energy equivalent to determine carbon abatement in a national context.

    Science.gov (United States)

    Gallachóir, Brian P O; O'Leary, Fergal; Bazilian, Morgan; Howley, Martin; McKeogh, Eamon J

    2006-01-01

The current conventional approach to determining the primary energy associated with non-combustible renewable energy (RE) sources such as wind energy and hydro power is to equate the electricity generated from these sources with the primary energy supply. This paper compares this with an approach formerly used by the IEA, in which the primary energy equivalent attributed to renewable energy was equated with the fossil fuel energy it displaces. Difficulties with implementing this approach in a meaningful way for international comparisons led most international organisations to abandon the primary energy equivalent methodology. It has recently re-emerged in prominence, however, as efforts grow to develop baseline procedures for quantifying the greenhouse gas (GHG) emissions avoided by renewable energy within the context of the Kyoto Protocol credit trading mechanisms. This paper discusses the primary energy equivalent approach and in particular the distinction between displacing fossil fuel energy in existing plant or in new plant. The approach is then extended to provide insight into future primary energy displacement by renewable energy and to quantify the amount of CO2 emissions avoided by renewable energy. The usefulness of this approach in quantifying the benefits of renewable energy is also discussed in an energy policy context, with regard to increasing security of energy supply as well as reducing energy-related GHG (and other) emissions. The approach is applied in a national context, with Ireland the case study country selected for this research. The choice of Ireland is interesting in two respects. The first relates to the high proportion of electricity-only fossil fuel plants in Ireland, resulting in a significant variation between primary energy and primary energy equivalent. The second concerns Ireland's poor performance to date in limiting GHG emissions in line with its Kyoto target and points to the need for techniques to quantify the potential
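The contrast between the two conventions can be made concrete with a back-of-the-envelope calculation. The generation figure, displaced-plant efficiency, and emission factor below are illustrative assumptions, not Irish statistics from the paper.

```python
# Primary energy of non-combustible renewables under the two conventions
# discussed in the abstract. All numbers are illustrative.

def primary_energy_direct(electricity_gwh):
    """Current convention: primary energy equals the electricity generated."""
    return electricity_gwh

def primary_energy_equivalent(electricity_gwh, displaced_plant_eff):
    """Former IEA-style convention: the fossil fuel input a thermal plant
    would need to generate the same electricity."""
    return electricity_gwh / displaced_plant_eff

wind_gwh = 1000.0
eff = 0.40  # hypothetical efficiency of the displaced fossil plant

pe_direct = primary_energy_direct(wind_gwh)
pe_equiv = primary_energy_equivalent(wind_gwh, eff)
print(pe_direct, pe_equiv)  # -> 1000.0 2500.0

# CO2 avoided then follows from the displaced fuel and its emission factor
# (illustrative: ~340 t CO2 per GWh of coal fuel input).
co2_avoided_t = pe_equiv * 340.0
```

The 2.5x gap between the two figures is exactly the "significant variation" the abstract attributes to electricity-only fossil plants.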

  6. Evaluation of the source area of rooftop scalar measurements in London, UK using wind tunnel and modelling approaches.

    Science.gov (United States)

    Brocklehurst, Aidan; Boon, Alex; Barlow, Janet; Hayden, Paul; Robins, Alan

    2014-05-01

The source area of an instrument is an estimate of the area of ground over which the measurement is generated. Quantification of the source area of a measurement site provides crucial context for analysis and interpretation of the data. A range of computational models exists to calculate the source area of an instrument, but these are usually based on assumptions which do not hold for instruments positioned very close to the surface, particularly those surrounded by heterogeneous terrain, i.e. urban areas. Although positioning instrumentation at higher elevation (i.e. on masts) is ideal in urban areas, masts can be costly to install and maintain, and it can be logistically difficult to position instruments in the ideal geographical location. Therefore, in many studies, experimentalists turn to rooftops to position instrumentation. Experimental validations of source area models for these situations are very limited. In this study, a controlled tracer gas experiment was conducted in a wind tunnel based on a 1:200 scale model of a measurement site used in previous experimental work in central London. The detector was set at the location of the rooftop site as the tracer was released at a range of locations within the surrounding streets and rooftops. Concentration measurements are presented for a range of wind angles, with the spread of concentration measurements indicative of the source area distribution. Clear evidence of wind channeling by streets is seen, with the shape of the source area strongly influenced by buildings upwind of the measurement point. The results of the wind tunnel study are compared to scalar concentration source areas generated by modelling approaches based on meteorological data from the central London experimental site and used in the interpretation of continuous carbon dioxide (CO2) concentration data. Initial conclusions will be drawn as to how to apply scalar concentration source area models to rooftop measurement sites and

  7. The MyHealthService approach for chronic disease management based on free open source software and low cost components.

    Science.gov (United States)

    Vognild, Lars K; Burkow, Tatjana M; Luque, Luis Fernandez

    2009-01-01

In this paper we present an approach to building personal health services, supporting follow-up, physical exercise, health education, and psychosocial support for the chronically ill, based on free open source software and low-cost computers, mobile devices, and consumer health and fitness devices. We argue that this will lower the cost of the systems, which is important given the increasing number of people with chronic diseases and limited healthcare budgets.

  8. Biogem: an effective tool based approach for scaling up open source software development in bioinformatics

    NARCIS (Netherlands)

    Bonnal, R.J.P.; Smant, G.; Prins, J.C.P.

    2012-01-01

    Biogem provides a software development environment for the Ruby programming language, which encourages community-based software development for bioinformatics while lowering the barrier to entry and encouraging best practices. Biogem, with its targeted modular and decentralized approach, software

  9. Equivalent statistics and data interpretation.

    Science.gov (United States)

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
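The equivalence this abstract describes can be made concrete for the two-sample t test: with known sample sizes, the t statistic and Cohen's d are deterministic transforms of one another. The sketch below shows the standard conversion; it is an illustration of the claim, not code from the paper.

```python
# For a two-sample t test with known n1, n2, the t statistic and Cohen's d
# carry the same information: each converts exactly into the other via
# d = t * sqrt(1/n1 + 1/n2).

import math

def t_to_d(t, n1, n2):
    """Cohen's d implied by a two-sample t statistic."""
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def d_to_t(d, n1, n2):
    """Inverse mapping: the t statistic implied by Cohen's d."""
    return d / math.sqrt(1.0 / n1 + 1.0 / n2)

t, n1, n2 = 2.5, 30, 30
d = t_to_d(t, n1, n2)
print(round(d, 3), round(d_to_t(d, n1, n2), 2))  # d, and t recovered exactly
```

Because the p value and the JZS Bayes factor are likewise functions of (t, n1, n2), reporting any one of these statistics conveys the same information; the frameworks differ only in how that information is interpreted.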

  10. Feedback equivalence of convolutional codes over finite rings

    Directory of Open Access Journals (Sweden)

    DeCastro-García Noemí

    2017-12-01

Full Text Available The approach to convolutional codes from the linear systems point of view provides effective tools for constructing convolutional codes with properties that suit many applications. In this work, we have generalized feedback equivalence between families of convolutional codes and linear systems over certain rings, and we show that every locally Brunovsky linear system may be considered as a representation of a code under feedback convolutional equivalence.

  11. «Dead Men Attack» (Osovets, 1915: Archive Sources Approach

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Menjkovsky

    2011-12-01

Full Text Available Drawing on archive materials, the article examines one of the chapters of World War I history, namely the so-called “dead men attack” during the defense of the Osovets Fortress (westward of Bialystok, within the territory of modern Poland) by Russian troops in 1915. It reconstructs the battle, specifies the conditions of the attack (or rather, counterattack), and introduces new archive sources for scientific use.

  12. General Approach to the Evolution of Singlet Nanoparticles from a Rapidly Quenched Point Source

    NARCIS (Netherlands)

    Feng, J.; Huang, Luyi; Ludvigsson, Linus; Messing, Maria; Maiser, A.; Biskos, G.; Schmidt-Ott, A.

    2016-01-01

    Among the numerous point vapor sources, microsecond-pulsed spark ablation at atmospheric pressure is a versatile and environmentally friendly method for producing ultrapure inorganic nanoparticles ranging from singlets having sizes smaller than 1 nm to larger agglomerated structures. Due to its fast

  13. Modeling geochemical datasets for source apportionment: Comparison of least square regression and inversion approaches.

    Digital Repository Service at National Institute of Oceanography (India)

    Tripathy, G.R.; Das, Anirban.

    used methods, the Least Square Regression (LSR) and Inverse Modeling (IM), to determine the contributions of (i) solutes from different sources to global river water, and (ii) various rocks to a glacial till. The purpose of this exercise is to compare...
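For the two-endmember case, the least-squares (LSR) apportionment compared in this record has a closed form once the source fractions are constrained to sum to one. A sketch with hypothetical tracer concentrations (not data from the study):

```python
# Closed-form least-squares apportionment for a two-endmember mixture with
# fractions constrained to sum to 1. Compositions are hypothetical.

def two_source_fraction(mixture, source_a, source_b):
    """Least-squares f minimizing ||mixture - (f*a + (1-f)*b)||^2,
    i.e. f = sum((m-b)(a-b)) / sum((a-b)^2) across tracers."""
    num = sum((m - b) * (a - b)
              for m, a, b in zip(mixture, source_a, source_b))
    den = sum((a - b) ** 2 for a, b in zip(source_a, source_b))
    return num / den

# Hypothetical normalized solute concentrations for four tracers
rock = [0.80, 0.60, 0.20, 0.10]
rain = [0.10, 0.05, 0.50, 0.02]
river = [0.45, 0.325, 0.35, 0.06]  # constructed here as a 50/50 mixture

print(round(two_source_fraction(river, rock, rain), 6))  # -> 0.5
```

With more than two sources the same idea becomes an over-determined linear system, typically solved by constrained least squares or, in the inversion approach, with explicit weighting by measurement and endmember uncertainties.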

  14. An acoustic vector based approach to locate low frequency noise sources in 3D

    NARCIS (Netherlands)

    Bree, H.-E. de; Ostendorf, C.; Basten, T.

    2009-01-01

    Although low frequency noise is an issue of huge societal importance, traditional acoustic testing methods have limitations in finding the low frequency source. It is hard to determine the direction of the noise using traditional microphones. Three dimensional sound probes capturing the particle

  15. Prototype of interactive Web Maps: an approach based on open sources

    Directory of Open Access Journals (Sweden)

    Jürgen Philips

    2004-07-01

Full Text Available To explore the potentialities available in the World Wide Web (WWW), a prototype of an interactive Web map was elaborated using standardized codes and open sources, such as the eXtensible Markup Language (XML), Scalable Vector Graphics (SVG), the Document Object Model (DOM), the script languages ECMAScript/JavaScript and “PHP: Hypertext Preprocessor”, and PostgreSQL with its extension PostGIS, to disseminate information related to the urban real estate register. Data from the City Hall of São José, Santa Catarina, referring to the Campinas district, were used. Using the client/server model, a prototype of a Web map with standardized codes and open sources was implemented, allowing a user to visualize Web maps using only Adobe's Viewer 3.0 plug-in in his/her browser. Aiming at a good cartographic project for the Web, rules of graphical translation were obeyed and different interaction functionalities were implemented, such as interactive legends, symbolization and dynamic scale. From the results, the use of standardized codes and open sources in interactive Web mapping projects can be recommended. It is understood that, with the use of open source code in public and private administration, the possibility of technological development is amplified and, consequently, software acquisition expenses are reduced. Besides, it stimulates the development of computer applications targeting specific demands and requirements.

  16. A practical approach to the classification of IRAS sources using infrared colors alone

    International Nuclear Information System (INIS)

    Walker, H.J.; Volk, K.; Wainscoat, R.J.; Schwartz, D.E.; Cohen, M.

    1989-01-01

Zones of the IRAS color-color planes in which various types of known sources occur have been defined for the purpose of obtaining representative IRAS colors for them. There is considerable overlap between many of these zones, rendering a unique classification difficult on the basis of IRAS colors alone, although galactic latitude can resolve ambiguities between galactic and extragalactic populations. The dependence of these zones' colors on the presence of spectral emission/absorption features and on the spatial extent of the sources has been investigated. It is found that silicate emission features do not significantly influence the IRAS colors. Planetary nebulae may show a dependence of color on the presence of atomic or molecular features in emission, although the dominant cause of this effect may be the underlying red continua of nebulae with strong atomic lines. Only small shifts are detected in the colors of individual spatially extended sources when total flux measurements are substituted for point-source measurements. 36 refs
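The zone-based classification described above can be sketched as a lookup in a color-color plane. The zone names and rectangular boundaries below are invented placeholders, not the zones defined in the paper; returning every matching zone mirrors the ambiguity caused by overlapping zones.

```python
# Toy zone classifier in an IRAS color-color plane. A source's two colors
# are tested against rectangular zones; overlapping zones yield multiple
# candidate classes. All boundaries are hypothetical placeholders.

# zone name -> (c1_min, c1_max, c2_min, c2_max), purely illustrative
ZONES = {
    "star": (-1.5, -0.5, -1.5, -0.5),
    "planetary_nebula": (0.0, 1.5, 0.5, 2.0),
    "galaxy": (0.5, 2.5, 1.0, 3.0),
}

def classify(c1, c2):
    """Return every zone containing the color pair (ambiguities allowed)."""
    return sorted(name for name, (lo1, hi1, lo2, hi2) in ZONES.items()
                  if lo1 <= c1 <= hi1 and lo2 <= c2 <= hi2)

print(classify(-1.0, -1.0))  # -> ['star']
print(classify(1.0, 1.5))    # overlapping zones -> two candidate classes
```

In practice a second discriminant such as galactic latitude, as the abstract notes, would be needed to break ties between galactic and extragalactic candidates.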

  17. A systems approach for detecting sources of Phytophthora contamination in nurseries

    Science.gov (United States)

    Jennifer L. Parke; Niklaus Grünwald; Carrie Lewis; Val Fieland

    2010-01-01

    Nursery plants are also important long-distance vectors of non-indigenous pathogens such as P. ramorum and P. kernoviae. Pre-shipment inspections have not been adequate to ensure that shipped plants are free from Phytophthora, nor has this method informed growers about sources of contamination in their...

  18. Assessment of source-receptor relationships of aerosols: An integrated forward and backward modeling approach

    Science.gov (United States)

    Kulkarni, Sarika

This dissertation presents a scientific framework that facilitates enhanced understanding of aerosol source-receptor (S/R) relationships and their impact on local, regional and global air quality by employing a complementary suite of modeling methods. The receptor-oriented Positive Matrix Factorization (PMF) technique is combined with the Potential Source Contribution Function (PSCF), a trajectory ensemble model, to characterize sources influencing the aerosols measured at Gosan, Korea during spring 2001. It is found that the episodic dust events originating from desert regions in East Asia (EA), which mix with pollution along the transit path, have a significant and pervasive impact on the air quality of Gosan. The intercontinental and hemispheric transport of aerosols is analyzed by a series of emission perturbation simulations with the Sulfur Transport and dEposition Model (STEM), a regional scale Chemical Transport Model (CTM), evaluated with observations from the 2008 NASA ARCTAS field campaign. This modeling study shows that pollution transport from regions outside North America (NA) contributed ~30% and ~20%, respectively, to NA surface sulfate and BC concentrations. This study also identifies aerosols transported from the European, NA and EA regions as significant contributors to springtime Arctic sulfate and BC. Trajectory ensemble models are combined with source region tagged tracer model output to identify the source regions and possible instances of quasi-lagrangian sampled air masses during the 2006 NASA INTEX-B field campaign. The impact of specific emission sectors from Asia during the INTEX-B period is studied with the STEM model, identifying the residential sector as a potential target for emission reduction to combat global warming. The output from the STEM model, constrained with satellite-derived aerosol optical depth and ground-based measurements of single scattering albedo via an optimal interpolation assimilation scheme, is combined with the PMF technique to

  19. Rice (Oryza sativa L.) containing the bar gene is compositionally equivalent to the nontransgenic counterpart.

    Science.gov (United States)

    Oberdoerfer, Regina B; Shillito, Raymond D; de Beuckeleer, Marc; Mitten, Donna H

    2005-03-09

    This publication presents an approach to assessing compositional equivalence between grain derived from glufosinate-tolerant rice, genetic event LLRICE62, and its nontransgenic counterpart. Rice was grown in the same manner as is common for commercial production, using either conventional weed control practices or glufosinate-ammonium herbicide. A two-season multisite trial design provided a robust data set to evaluate environmental effects across the sites. Statistical comparisons to test for equivalence were made between glufosinate-tolerant rice and a conventional counterpart variety. The key nutrients, carbohydrates, protein, iron, calcium, thiamin, riboflavin, and niacin, for which rice can be the principal dietary source, were investigated. The data demonstrate that rice containing the genetic locus LLRICE62 has the same nutritional value as its nontransgenic counterpart, and most results for nutritional components fall within the range of values reported for rice commodities in commerce.

  20. Equivalent nozzle in thermomechanical problems

    International Nuclear Information System (INIS)

    Cesari, F.

    1977-01-01

    When analyzing nuclear vessels, it is most important to study the behavior of the nozzle cylinder-cylinder intersection. In the elastic field, this analysis in three dimensions is quite easy using the finite element method. The same analysis in the non-linear field becomes difficult for 3-D designs. It is therefore necessary to find a two-dimensional nozzle equivalent to the 3-D nozzle. The purpose of the present work is to find an equivalent nozzle under both mechanical and thermal loads. This has been achieved by the three-dimensional analysis of a nozzle and of a nozzle cylinder-sphere intersection of a different radius. The equivalent nozzle is a nozzle whose sphere radius is in a given ratio to the radius of the cylinder, such that the maximum equivalent stress is the same in both 2-D and 3-D. The nozzle examined derived from the intersection of a cylindrical vessel of radius R=191.4 mm and thickness T=6.7 mm with a cylindrical nozzle of radius r=24.675 mm and thickness t=1.350 mm, for which the experimental results for an internal pressure load are known. The structure was subdivided into 96 three-dimensional isoparametric finite elements with 60 degrees of freedom and 661 total nodes. Both the mechanical-load analysis and the thermal-load analysis were carried out on this structure using the Bersafe system. The thermal load consisted of a transient typical of an accident in a sodium-cooled fast reactor, with a peak sodium temperature of 540 °C inside the vessel and a constant insulating-argon temperature of 525 °C. The maximum value of the equivalent stress was found in the internal area at the junction towards the vessel side. The 2-D analysis of the nozzle consists in schematizing the structure as a cylinder-sphere intersection, where the sphere has a given relation to the

  1. 21 CFR 26.9 - Equivalence determination.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Equivalence determination. 26.9 Section 26.9 Food... Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.9 Equivalence determination... document insufficient evidence of equivalence, lack of opportunity to assess equivalence or a determination...

  2. Information Leakage from Logically Equivalent Frames

    Science.gov (United States)

    Sher, Shlomi; McKenzie, Craig R. M.

    2006-01-01

    Framing effects are said to occur when equivalent frames lead to different choices. However, the equivalence in question has been incompletely conceptualized. In a new normative analysis of framing effects, we complete the conceptualization by introducing the notion of information equivalence. Information equivalence obtains when no…

  3. Wijsman Orlicz Asymptotically Ideal -Statistical Equivalent Sequences

    Directory of Open Access Journals (Sweden)

    Bipan Hazarika

    2013-01-01

    in Wijsman sense and present some definitions which are the natural combination of the definition of asymptotic equivalence, statistical equivalent, -statistical equivalent sequences in Wijsman sense. Finally, we introduce the notion of Cesaro Orlicz asymptotically -equivalent sequences in Wijsman sense and establish their relationship with other classes.

  4. Equivalence relations of AF-algebra extensions

    Indian Academy of Sciences (India)

    In this paper, we consider equivalence relations of *-algebra extensions and describe the relationship between the isomorphism equivalence and the unitary equivalence. We also show that a certain group homomorphism is the obstruction for these equivalence relations to be the same.

  5. Emotional Self-Efficacy, Emotional Empathy and Emotional Approach Coping as Sources of Happiness

    Directory of Open Access Journals (Sweden)

    Tarık Totan

    2013-05-01

    Among the many variables affecting happiness, there are those that arise from emotional factors. In this study, the hypothesis that happiness is affected by emotional self-efficacy, emotional empathy and emotional approach coping has been examined using a path model. A total of 334 university students participated in this study, 229 female and 105 male. The Oxford Happiness Questionnaire-Short Form, Emotional Self-efficacy Scale, Multi-Dimensional Emotional Empathy Scale, The Emotional Approach Coping Scale and a personal information form were used as data acquisition tools. As a result of the path analysis, the predicted path from emotional empathy to emotional approach coping was found to be insignificant and was therefore removed from the model. According to the modified path model, there is a positive relationship between emotional self-efficacy and emotional empathy; emotional self-efficacy positively affects emotional approach coping and happiness; and emotional empathy and emotional approach coping each positively affect happiness.

  6. An ESPRIT-Based Approach for 2-D Localization of Incoherently Distributed Sources in Massive MIMO Systems

    Science.gov (United States)

    Hu, Anzhong; Lv, Tiejun; Gao, Hui; Zhang, Zhang; Yang, Shaoshi

    2014-10-01

    In this paper, an approach based on the estimation of signal parameters via rotational invariance techniques (ESPRIT) is proposed for two-dimensional (2-D) localization of incoherently distributed (ID) sources in large-scale/massive multiple-input multiple-output (MIMO) systems. The traditional ESPRIT-based methods are valid only for one-dimensional (1-D) localization of ID sources. By contrast, in the proposed approach the signal subspace is constructed for estimating the nominal azimuth and elevation directions-of-arrival and the angular spreads. The proposed estimator enjoys closed-form expressions and hence bypasses searching over the entire feasible field. Therefore, it imposes significantly lower computational complexity than the conventional 2-D estimation approaches. Our analysis shows that the estimation performance of the proposed approach improves when large-scale/massive MIMO systems are employed. The approximate Cramér-Rao bound of the proposed estimator for the 2-D localization is also derived. Numerical results demonstrate that although the proposed estimation method is comparable with the traditional 2-D estimators in terms of performance, it benefits from a remarkably lower computational complexity.
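The rotational-invariance idea that ESPRIT builds on can be illustrated with a minimal 1-D, single-source, noiseless sketch (the paper's 2-D incoherently-distributed-source estimator is considerably more involved); the array size, angle and snapshot count below are arbitrary choices:

```python
import numpy as np

# Minimal 1-D ESPRIT sketch: one narrowband source, uniform linear array
# with half-wavelength spacing, no noise.  The two overlapping subarrays
# differ only by a phase rotation that encodes the direction of arrival.

rng = np.random.default_rng(1)
n_ant, n_snap = 8, 200
theta_true = 25.0                                   # degrees

k = np.arange(n_ant)
a = np.exp(-1j * np.pi * k * np.sin(np.radians(theta_true)))  # steering vector
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(a, s)                                  # noiseless array snapshots

R = X @ X.conj().T / n_snap                         # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)
u = eigvecs[:, -1]                                  # signal-subspace vector

u1, u2 = u[:-1], u[1:]                              # the two subarrays
phi = (u1.conj() @ u2) / (u1.conj() @ u1)           # least-squares rotation
theta_est = np.degrees(np.arcsin(-np.angle(phi) / np.pi))
print(f"estimated DOA: {theta_est:.3f} deg")
```

With noise and several sources one would use multiple signal-subspace vectors and the total-least-squares variant; the 2-D ID-source case additionally recovers elevation angles and angular spreads.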

  7. DrSPINE - New approach to data reduction and analysis for neutron spin echo experiments from pulsed and reactor sources

    International Nuclear Information System (INIS)

    Zolnierczuk, P.A.; Ohl, M.; Holderer, O.; Monkenbusch, M.

    2015-01-01

    The neutron spin echo (NSE) method at a pulsed neutron source presents new data reduction and analysis challenges compared with instruments installed at reactor sources. The main advantage of pulsed-source NSE is the ability to resolve the neutron wavelength and collect neutrons over a wider bandwidth. This allows the symmetry phase to be determined more precisely and data to be measured for several Q-values at the same time. Based on the experience gained at the SNS NSE - the first, and to date only, NSE instrument installed at a pulsed spallation source - we propose a novel and unified approach to NSE data processing called DrSPINE. The goals of the DrSPINE project are: -) exploit the better symmetry-phase determination afforded by the broader bandwidth at a pulsed source; -) take advantage of the larger Q coverage of TOF instruments; -) use objective statistical criteria to get the echo fits right; -) provide robust reduction with report generation; -) incorporate absolute instrument calibration; and -) allow for background subtraction. The software must be able to read data from various instruments, perform data integrity, consistency and compatibility checks, and combine data from compatible sets, partial scans, etc. We chose to provide a console-based interface with the ability to process macros (scripts) for batch evaluation. Last but not least, a good software package has to provide adequate documentation. The DrSPINE software is currently under development

  8. Multiplicities of states of equivalent fermion shells

    International Nuclear Information System (INIS)

    Savukinas, A.Yu.; Glembotskij, I.I.

    1980-01-01

    The classification of states of three or four equivalent fermions has been studied, i.e. the possible terms and their multiplicities have been determined. For this purpose either group theory or explicit expressions for the fractional-parentage coefficients have been used. The first approach relies on formulas, obtained by other authors, that give the multiplicities of terms through the characters of the transformation matrices of the coupled angular momenta. This approach proves to be more general than the second, since expressions for the fractional-parentage coefficients are in many cases not known. The multiplicities of individual terms have been determined. It has been shown that the number of terms of any given multiplicity becomes constant as l or j is increased
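For small shells, the term multiplicities discussed above can be checked by brute force: enumerate the Slater determinants of n equivalent fermions in a single j-shell, histogram the total projection M, and peel off multiplicities via N(J) = c(M=J) - c(M=J+1). The function below is an illustrative sketch, not the group-theoretical machinery of the paper:

```python
from itertools import combinations
from fractions import Fraction
from collections import Counter

def shell_terms(two_j, n):
    """Allowed total J values (with multiplicities) of n equivalent
    fermions in a single j-shell, j = two_j / 2, by M-counting."""
    ms = [Fraction(m, 2) for m in range(-two_j, two_j + 1, 2)]
    c = Counter(sum(combo) for combo in combinations(ms, n))  # c(M)
    terms = {}
    M = max(c)
    while M >= 0:
        mult = c[M] - c[M + 1]       # number of terms with J = M
        if mult:
            terms[M] = mult
        M -= 1
    return terms

# (j = 5/2)^3: the classic result is J = 3/2, 5/2 and 9/2, each occurring once;
# the dimensions check out: 4 + 6 + 10 = 20 = C(6, 3) Slater determinants.
for J, mult in shell_terms(5, 3).items():
    print(f"J = {J}: multiplicity {mult}")
```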

  9. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    Science.gov (United States)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltages data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach successfully reveals the presence of the anomaly and inverts its Cole
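The linear step described above, relating the secondary voltages to the secondary source current density through a kernel matrix computed once from the resistivity model, can be sketched with a random placeholder kernel (in practice K comes from the forward resistivity modelling):

```python
import numpy as np

# Linear inverse step of the TDIP reformulation: secondary voltages d at
# the electrodes relate to the secondary source current density model m
# through a kernel matrix K, d = K m.  K here is a synthetic placeholder.

rng = np.random.default_rng(2)
n_data, n_cells = 60, 30

K = rng.standard_normal((n_data, n_cells))   # placeholder kernel matrix
m_true = rng.standard_normal(n_cells)        # "true" source current density
d = K @ m_true                               # noise-free secondary voltages

# Least-squares recovery of the source current density model:
m_est, *_ = np.linalg.lstsq(K, d, rcond=None)
print(f"max recovery error: {np.abs(m_est - m_true).max():.2e}")
```

Real data would require regularization and noise weighting; the point is only that, once K is assembled, each time step of the decay reduces to a linear solve.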

  10. Comparison of VSC and Z-Source Converter: Power System Application Approach

    Directory of Open Access Journals (Sweden)

    Masoud Jokar Kouhanjani

    2017-01-01

    Application of equipment with power-electronic converter interfaces, such as distributed generation, FACTS and HVDC, is growing rapidly. On the other hand, various topologies have been proposed, each with its own advantages. The suitability of each converter for a given application is therefore a key question for designers and engineers. In this paper, part of this challenge is addressed by comparing a typical Voltage-Source Converter (VSC) and a Z-Source Converter (ZSC) for high-power electronic equipment used in power systems. Dynamic response, stability margin, Total Harmonic Distortion (THD) of the grid current and fault tolerance are considered as assessment criteria. To carry out this evaluation, dynamic models of the two converters are presented, a proper control system is designed, a small-signal stability method is applied, and the responses of the converters to small and large perturbations are obtained and analysed with PSCAD/EMTDC.

  11. Sources of Law: Approach in the Light of Disciplinary Process Right

    Directory of Open Access Journals (Sweden)

    Alexandre dos Santos Lopes

    2016-10-01

    This article aims to analyze the sources of law that are correlated with disciplinary procedural law, especially in view of the reverberation of principles and axiological values arising from the Constitution in that procedural species. Outlining the sources of law related to this kind of administrative process is a significant challenge, insofar as its structure, especially in the new constitutional order (post-positivist), allows one, starting from a constitutional lens and filter, to define more precisely their standing, features and densification within the Brazilian legal system, enabling a better framing of the disciplinary procedural legal relationship.

  12. Double radio sources and the new approach to cosmic plasma physics

    International Nuclear Information System (INIS)

    Alfven, H.

    1977-08-01

    The methodology of cosmic plasma physics is discussed. A summary is given of laboratory investigations of electric double layers, a phenomenon known to be very important in laboratory discharges. The importance of electric double layers in the Earth's surroundings is established. The scaling laws between laboratory and magnetospheric double layers are studied. A further extrapolation to galactic phenomena leads to a theory of double radio sources. From analogy with laboratory and magnetospheric current systems it is argued that the galactic current might produce double layers where a large energy dissipation takes place. This leads to a theory of the double radio sources which, within the necessary wide limits of uncertainty, is quantitatively reconcilable with observations. (author)

  13. A realistic multimodal modeling approach for the evaluation of distributed source analysis: application to sLORETA

    Science.gov (United States)

    Cosandier-Rimélé, D.; Ramantani, G.; Zentner, J.; Schulze-Bonhage, A.; Dümpelmann, M.

    2017-10-01

    Objective. Electrical source localization (ESL) deriving from scalp EEG and, in recent years, from intracranial EEG (iEEG), is an established method in epilepsy surgery workup. We aimed to validate the distributed ESL derived from scalp EEG and iEEG, particularly regarding the spatial extent of the source, using a realistic epileptic spike activity simulator. Approach. ESL was applied to the averaged scalp EEG and iEEG spikes of two patients with drug-resistant structural epilepsy. The ESL results for both patients were used to outline the location and extent of epileptic cortical patches, which served as the basis for designing a spatiotemporal source model. EEG signals for both modalities were then generated for different anatomic locations and spatial extents. ESL was subsequently performed on simulated signals with sLORETA, a commonly used distributed algorithm. ESL accuracy was quantitatively assessed for iEEG and scalp EEG. Main results. The source volume was overestimated by sLORETA at both EEG scales, with the error increasing with source size, particularly for iEEG. For larger sources, ESL accuracy drastically decreased, and reconstruction volumes shifted to the center of the head for iEEG, while remaining stable for scalp EEG. Overall, the mislocalization of the reconstructed source was more pronounced for iEEG. Significance. We present a novel multiscale framework for the evaluation of distributed ESL, based on realistic multiscale EEG simulations. Our findings support that reconstruction results for scalp EEG are often more accurate than for iEEG, owing to the superior 3D coverage of the head. Particularly the iEEG-derived reconstruction results for larger, widespread generators should be treated with caution.
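A toy, single-orientation sketch of the sLORETA estimator evaluated in this study: a regularized minimum-norm inverse followed by standardization with the diagonal of the resolution matrix. The lead field and source grid below are random placeholders, not a realistic head model:

```python
import numpy as np

# sLORETA sketch: minimum-norm current estimate, then standardization by
# the resolution-matrix diagonal.  In the noiseless single-source case the
# standardized score peaks exactly at the active source.

rng = np.random.default_rng(3)
n_sensors, n_sources = 16, 64
lam = 1e-3                                          # Tikhonov regularization

L = rng.standard_normal((n_sensors, n_sources))     # placeholder lead field
j_true = np.zeros(n_sources)
j_true[20] = 1.0                                    # single active source
phi = L @ j_true                                    # simulated sensor data

G = L @ L.T + lam * np.eye(n_sensors)
T = L.T @ np.linalg.solve(G, np.eye(n_sensors))     # minimum-norm inverse
j_mn = T @ phi                                      # minimum-norm estimate

R = T @ L                                           # resolution matrix
score = j_mn**2 / np.diag(R)                        # sLORETA standardization
print(f"peak at source index: {int(np.argmax(score))}")
```

Spatial extent, the focus of the study above, is precisely what such point-source sketches do not capture: distributed generators blur and shift the reconstruction, as the abstract reports.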

  14. Mixed field dose equivalent measuring instruments

    International Nuclear Information System (INIS)

    Brackenbush, L.W.; McDonald, J.C.; Endres, G.W.R.; Quam, W.

    1985-01-01

    In the past, separate instruments have been used to monitor dose equivalent from neutrons and gamma rays. It has been demonstrated that it is now possible to measure neutron and gamma dose simultaneously with a single instrument, the tissue equivalent proportional counter (TEPC). With appropriate algorithms, dose equivalent can also be determined from the TEPC. A simple "pocket rem meter" for measuring neutron dose equivalent has already been developed. Improved algorithms for determining dose equivalent for mixed fields are presented. (author)

  15. Safety of radiation sources and security of radioactive materials. A Romanian approach

    International Nuclear Information System (INIS)

    Ghilea, S.; Coroianu, A.I.; Rodna, A.L.

    2001-01-01

    After a brief explanation on the scope of applications of nuclear energy and practices with ionizing radiation in Romania, the report explains the current national infrastructure for radiation safety making reference in particular to the National Commission for Nuclear Activities Control as the regulatory authority for the safety of radiation sources. The report also describes the existing legal framework, provides information on the list of normative acts in force, and on the system of authorization, inspection and enforcement, which operates effectively. (author)

  16. A quantitative approach to the loading rate of seismogenic sources in Italy

    Science.gov (United States)

    Caporali, Alessandro; Braitenberg, Carla; Montone, Paola; Rossi, Giuliana; Valensise, Gianluca; Viganò, Alfio; Zurutuza, Joaquin

    2018-06-01

    To investigate the transfer of elastic energy between a regional stress field and a set of localized faults, we project the stress rate tensor inferred from the Italian GNSS (Global Navigation Satellite Systems) velocity field onto faults selected from the Database of Individual Seismogenic Sources (DISS 3.2.0). For given Lamé constants and friction coefficient, we compute the loading rate on each fault in terms of the Coulomb failure function (CFF) rate. By varying the strike, dip and rake angles around the nominal DISS values, we also estimate the geometry of planes that are optimally oriented for maximal CFF rate. Out of 86 Individual Seismogenic Sources (ISSs), all well covered by GNSS data, 78-81 (depending on the assumed friction coefficient) load energy at a rate of 0-4 kPa yr⁻¹. The faults displaying larger CFF rates (4-6 ± 1 kPa yr⁻¹) are located in the central Apennines and are all characterized by a significant strike-slip component. We also find that the loading rate of 75% of the examined sources is less than 1 kPa yr⁻¹ lower than that of optimally oriented faults. We also analysed the 2016 August 24 and October 30 central Apennines earthquakes (Mw 6.0 and 6.5, respectively). The strike of their causative faults based on seismological and tectonic data and the geodetically inferred strike differ by <30°. Some sources exhibit a strike oblique to the direction of maximum strain rate, suggesting that in some instances the present-day stress acts on inherited faults. The choice of the friction coefficient only marginally affects this result.
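Projecting a stress-rate tensor onto a fault plane to obtain a CFF rate can be sketched as follows. The tensor values, fault geometry and friction coefficient are illustrative, and a tension-positive sign convention is used, so CFF_rate = tau_rate + mu * sigma_n_rate (equivalent to the usual tau - mu * sigma_n with compression positive):

```python
import numpy as np

def cff_rate(S, n, s, mu):
    """Coulomb failure function rate on a fault plane.
    S:  3x3 symmetric stress-rate tensor (kPa/yr)
    n:  unit normal to the fault plane
    s:  unit slip direction lying in the fault plane
    mu: friction coefficient"""
    t = S @ n                  # traction-rate vector on the plane
    sigma_n = n @ t            # normal stress rate (tension positive)
    tau = s @ t                # shear stress rate along the slip direction
    return tau + mu * sigma_n  # unclamping (tension) raises the CFF rate

# Pure shear in the x-y plane; fault normal along x, slip along y:
S = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])          # kPa/yr
n = np.array([1.0, 0.0, 0.0])
s = np.array([0.0, 1.0, 0.0])
print(cff_rate(S, n, s, mu=0.4))         # shear alone loads this fault
```

Sweeping strike, dip and rake (i.e. rotating n and s) and maximizing this value is, in essence, the "optimally oriented plane" search described above.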

  17. A Systematic Approach for Evaluating BPM Systems: Case Studies on Open Source and Proprietary Tools

    OpenAIRE

    Delgado , Andrea; Calegari , Daniel; Milanese , Pablo; Falcon , Renatta; García , Esteban

    2015-01-01

    Part 3: Examples and Case Studies; International audience; Business Process Management Systems (BPMS) provide support for modeling, developing, deploying, executing and evaluating business processes in an organization. Selecting a BPMS is not a trivial task, not only due to the many existing alternatives, both in the open source and proprietary realms, but also because it requires a thorough evaluation of its capabilities, contextualizing them in the organizational environment in which they w...

  18. A quantitative approach to the loading rate of seismogenic sources in Italy

    Science.gov (United States)

    Caporali, Alessandro; Braitenberg, Carla; Montone, Paola; Rossi, Giuliana; Valensise, Gianluca; Viganò, Alfio; Zurutuza, Joaquin

    2018-03-01

    To investigate the transfer of elastic energy between a regional stress field and a set of localized faults, we project the stress rate tensor inferred from the Italian GNSS velocity field onto faults selected from the Database of Individual Seismogenic Sources (DISS 3.2.0). For given Lamé constants and friction coefficient, we compute the loading rate on each fault in terms of the Coulomb Failure Function (CFF) rate. By varying the strike, dip and rake angles around the nominal DISS values, we also estimate the geometry of planes that are optimally oriented for maximal CFF rate. Out of 86 Individual Seismogenic Sources (ISSs), all well covered by GNSS data, 78 to 81 (depending on the assumed friction coefficient) load energy at a rate of 0-4 kPa/yr. The faults displaying larger CFF rates (4 to 6 ± 1 kPa/yr) are located in the central Apennines and are all characterized by a significant strike-slip component. We also find that the loading rate of 75 per cent of the examined sources is less than 1 kPa/yr lower than that of optimally oriented faults. We also analyzed the 24 August and 30 October 2016 central Apennines earthquakes (Mw 6.0 and 6.5, respectively). The strike of their causative faults based on seismological and tectonic data and the geodetically inferred strike differ by < 30°. Some sources exhibit a strike oblique to the direction of maximum strain rate, suggesting that in some instances the present-day stress acts on inherited faults. The choice of the friction coefficient only marginally affects this result.

  19. Perception by Operators of Approach and Withdrawal of Moving Sound Sources

    Science.gov (United States)

    1999-01-01


  20. Residents’ Preferences for Household Kitchen Waste Source Separation Services in Beijing: A Choice Experiment Approach

    Directory of Open Access Journals (Sweden)

    Yalin Yuan

    2014-12-01

    A source separation program for household kitchen waste has been in place in Beijing since 2010. However, the participation rate of residents is far from satisfactory. This study was carried out to identify residents' preferences for an improved management strategy for household kitchen waste source separation. We determine the preferences of residents in an ad hoc sample, stratified by age level, for source separation services, and their marginal willingness to accept compensation for the service attributes. We used a multinomial logit model to analyze the data, collected from 394 residents in the Haidian and Dongcheng districts of Beijing through a choice experiment. The results show that preferences for the service attributes differ among young, middle-aged and older residents. Low compensation is not a major factor in encouraging young and middle-aged residents to accept the proposed separation services. On average, however, most of them prefer services with frequent, evening collection and plastic-bag attributes, and without an instructor. This study indicates that there is potential for the local government to improve the current separation services accordingly.
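The multinomial logit behind such choice experiments can be sketched as a softmax over the alternatives' systematic utilities, with the marginal willingness to accept an attribute obtained as a ratio of coefficients. All coefficients and attribute values below are invented for illustration and are not the study's estimates:

```python
import numpy as np

# Multinomial logit sketch for a choice experiment: utility V = X @ beta,
# choice probabilities P = softmax(V), marginal willingness to accept
# (WTA) for an attribute = -beta_attr / beta_compensation.

# Hypothetical coefficients: frequent, evening, instructor, compensation (yuan)
beta = np.array([0.8, 0.5, -0.3, 0.002])

# Three hypothetical service alternatives (rows) x attributes (columns):
X = np.array([[1, 1, 0, 30],
              [1, 0, 1, 10],
              [0, 0, 0,  0]])            # last row: status quo

V = X @ beta                             # systematic utilities
P = np.exp(V) / np.exp(V).sum()          # multinomial logit probabilities
print("choice probabilities:", np.round(P, 3))

# Marginal rate of substitution between the evening attribute and
# compensation (a negative value means the attribute itself is valued):
wta_evening = -beta[1] / beta[3]
print("marginal WTA for evening attribute (yuan):", wta_evening)
```

In the actual study the beta vector would be estimated by maximum likelihood from the 394 respondents' observed choices, separately or with interactions by age group.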

  1. Filling Source Feedthrus with Alumina/Molybdenum CND50 Cermet: Experimental, Theoretical, and Computational Approaches

    International Nuclear Information System (INIS)

    STUECKER, JOHN N.; CESARANO III, JOSEPH; CORRAL, ERICA LORRANE; SHOLLENBERGER, KIM ANN; ROACH, R. ALLEN; TORCZYNSKI, JOHN R.; THOMAS, EDWARD V.; VAN ORNUM, DAVID J.

    2001-01-01

    This report is a summary of the work completed in FY00 for science-based characterization of the processes used to fabricate cermet vias in source feedthrus. In particular, studies were completed to characterize the CND50 cermet slurry, characterize solvent imbibition, and identify critical via-filling variables. These three areas of interest are important to several processes pertaining to the production of neutron generator tubes. Rheological characterizations of CND50 slurry prepared with 94ND2 and Sandi94 primary powders were also compared. The 94ND2 powder was formerly produced at the GE Pinellas Plant, and Sandi94 is the new replacement powder produced at CeramTec. Processing variables that may affect the via-filling process were also studied, including: the effect of solids loading in the CND50 slurry; the effect of milling time; and the effect of Nuosperse (a slurry "conditioner"). Imbibition characterization included a combination of experimental, theoretical, and computational strategies to determine solvent migration through complex shapes, specifically vias in the source feedthru component. Critical factors were determined using a controlled set of experiments designed to identify those variables that influence the occurrence of defects within the cermet-filled via. These efforts were pursued to increase part-production reliability, understand selected fundamental issues that impact the production of slurry-filled parts, and validate the ability of the computational fluid dynamics code, GOMA, to simulate these processes. Suggestions are made for improving the slurry filling of source feedthru vias.

  2. Assessing heavy metal sources in sugarcane Brazilian soils: an approach using multivariate analysis.

    Science.gov (United States)

    da Silva, Fernando Bruno Vieira; do Nascimento, Clístenes Williams Araújo; Araújo, Paula Renata Muniz; da Silva, Luiz Henrique Vieira; da Silva, Roberto Felipe

    2016-08-01

    Brazil is the world's largest sugarcane producer, and soils in the northeastern part of the country have been cultivated with the crop for over 450 years. However, so far there has been no study on the status of heavy metal accumulation in these long-cultivated soils. To fill the gap, we collected soil samples from 60 sugarcane fields in order to determine the contents of Cd, Cr, Cu, Ni, Pb, and Zn. We used multivariate analysis to distinguish between natural and anthropogenic sources of these metals in soils. Analytical determinations were performed by ICP-OES after microwave acid digestion. Mean concentrations of Cd, Cr, Cu, Ni, Pb, and Zn were 1.9, 18.8, 6.4, 4.9, 11.2, and 16.2 mg kg⁻¹, respectively. Principal component one was associated with a lithogenic origin and comprised the metals Cr, Cu, Ni, and Zn. Cluster analysis confirmed that 68 % of the evaluated sites have soil heavy metal concentrations close to the natural background. The Cd concentration (principal component two) was clearly associated with anthropogenic sources, with P fertilization being the most likely source of Cd to soils. On the other hand, the third component (Pb concentration) indicates a mixed origin for this metal (natural and anthropogenic); hence, Pb concentrations are probably related not only to the soil parent material but also to industrial emissions and urbanization in the vicinity of the agricultural areas.
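The PCA step of such a multivariate source-apportionment analysis can be sketched on synthetic data; the factor loadings and noise levels below are invented to mimic the lithogenic/anthropogenic split described in the abstract, not taken from the paper:

```python
import numpy as np

# PCA sketch for heavy-metal source apportionment: standardize the metal
# concentrations, diagonalize the correlation matrix, and read source
# groupings off the first principal component's loadings.

rng = np.random.default_rng(4)
metals = ["Cd", "Cr", "Cu", "Ni", "Pb", "Zn"]

parent = rng.standard_normal(60)             # shared lithogenic factor
anthro = rng.standard_normal(60)             # anthropogenic factor
noise = lambda: 0.1 * rng.standard_normal(60)
data = np.column_stack([
    0.2 * parent + 1.0 * anthro + noise(),   # Cd  (mostly anthropogenic)
    1.0 * parent + noise(),                  # Cr  (lithogenic)
    0.9 * parent + noise(),                  # Cu
    1.0 * parent + noise(),                  # Ni
    0.5 * parent + 0.5 * anthro + noise(),   # Pb  (mixed origin)
    0.9 * parent + noise(),                  # Zn
])

Z = (data - data.mean(axis=0)) / data.std(axis=0)        # standardize
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
i1 = np.argmax(eigval)                                   # first PC
pc1_load = eigvec[:, i1] * np.sqrt(eigval[i1])           # PC1 loadings
for m, l in zip(metals, pc1_load):
    print(f"{m}: PC1 loading {l:+.2f}")
```

Metals with high loadings on the same component covary across sites and so plausibly share a source, which is the reasoning the abstract applies to Cr, Cu, Ni and Zn.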

  3. An open source approach to enable the reproducibility of scientific workflows in the ocean sciences

    Science.gov (United States)

    Di Stefano, M.; Fox, P. A.; West, P.; Hare, J. A.; Maffei, A. R.

    2013-12-01

    Every scientist should be able to rerun data analyses conducted by his or her team and regenerate the figures in a paper. However, all too often the correct version of a script goes missing, or the original raw data is filtered by hand and the filtering process is undocumented, or there is a lack of collaboration and communication among scientists working in a team. Here we present 3 different use cases in ocean sciences in which end-to-end workflows are tracked. The main tool deployed to address these use cases is based on a web application (IPython Notebook) that provides the ability to work on very diverse and heterogeneous data and information sources, providing an effective way to share and track changes to the source code used to generate data products and associated metadata, as well as to track the overall workflow provenance to allow versioned reproducibility of a data product. The use cases selected for this work are: 1) A partial reproduction of the Ecosystem Status Report (ESR) for the Northeast U.S. Continental Shelf Large Marine Ecosystem. Our goal with this use case is to enable not just the traceability but also the reproducibility of this biannual report, keeping track of all the processes behind the generation and validation of time-series and spatial data and information products. An end-to-end workflow with code versioning is developed so that indicators in the report may be traced back to the source datasets. 2) Real-time generation of web pages to visualize one of the environmental indicators from the Ecosystem Advisory for the Northeast Shelf Large Marine Ecosystem web site. 3) Data and visualization integration for ocean climate forecasting. In this use case, we focus on a workflow describing how to provide access to online data sources in the NetCDF format and other model data, and make use of multicore processing to generate video animation from time series of gridded data. For each use case we show how complete workflows

  4. An Integrated Approach for Visual Analysis of a Multi-Source Moving Objects Knowledge Base

    NARCIS (Netherlands)

    Willems, C.M.E.; van Hage, W.R.; de Vries, G.K.D.; Janssens, J.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from ongoing research by four different partners in the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  5. An integrated approach for visual analysis of a multi-source moving objects knowledge base

    NARCIS (Netherlands)

    Willems, N.; Hage, van W.R.; Vries, de G.; Janssens, J.H.M.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from ongoing research by four different partners in the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  6. THE CONTRIBUTION OF GIS IN FLOOD MAPPING: TWO APPROACHES USING OPEN SOURCE GRASS GIS SOFTWARE

    Directory of Open Access Journals (Sweden)

    R. Marzocchi

    2014-01-01

    In this work we present a comparison between the two models mentioned above. We analyse the possibility of integrating these two approaches. We intend to use the 1D model, GIS embedded if possible, to calculate the water surface profile along the river axis and the 2D numerical one to analyse inundation beside the river levees.

  7. Using and Developing Measurement Instruments in Science Education: A Rasch Modeling Approach. Science & Engineering Education Sources

    Science.gov (United States)

    Liu, Xiufeng

    2010-01-01

    This book meets a demand in the science education community for a comprehensive and introductory measurement book in science education. It describes measurement instruments reported in refereed science education research journals, and introduces the Rasch modeling approach to developing measurement instruments in common science assessment domains,…
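
    The Rasch model at the center of the approach described in this record gives the probability of a correct response as a logistic function of the gap between person ability theta and item difficulty b. A minimal sketch under that standard formulation (the values below are synthetic illustrations, not examples from the book):

```python
import math

def rasch_probability(theta, b):
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly 0.5;
# it rises toward 1 as ability exceeds difficulty.
p_equal = rasch_probability(0.0, 0.0)
p_high = rasch_probability(2.0, 0.0)
```

    In practice the ability and difficulty parameters are estimated jointly from response matrices, which is what dedicated Rasch software automates.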

  8. Sources of Segregation in Social Networks : A Novel Approach Using Facebook

    NARCIS (Netherlands)

    Hofstra, B.; Corten, R.; van Tubergen, F.A.; Ellison, Nicole

    2017-01-01

    Most research on segregation in social networks considers small circles of strong ties, and little is known about segregation among the much larger number of weaker ties. This article proposes a novel approach to the study of these more extended networks, through the use of data on personal ties in

  9. A holistic approach to hydrocarbon source allocation in the subtidal sediments of Prince William Sound, Alaska, embayments

    International Nuclear Information System (INIS)

    Page, D.S.; Bence, A.E.; Burns, W.A.; Boehm, P.D.; Brown, J.S.; Douglas, G.S.

    2002-01-01

    The complex organic geochemistry record in the subtidal sediments of Prince William Sound, Alaska is a result of much industrial and human activity in the region. Recent oil spills and a regional background of natural petroleum hydrocarbons originating from active hydrocarbon systems in the northern Gulf of Alaska also contribute to the geochemical record. Pyrogenic and petrogenic polycyclic aromatic hydrocarbons (PAH) are introduced regularly to the subtidal sediments at sites of past and present human activities including villages, fish hatcheries, fish camps and recreational campsites as well as abandoned settlements, canneries, sawmills and mines. Hydrocarbon contributions are fingerprinted and quantified using a holistic approach in which contributions from multiple sources are determined. The approach involves a good understanding of the history of the area to identify potential sources. It also involves extensive collection of representative samples and an accurate quantitative analysis of the source and sediment samples for PAH analytes and chemical biomarker compounds. Normalization to total organic carbon (TOC) does not work in restricted embayments, so a constrained least-squares algorithm is used to determine hydrocarbon sources. It has been shown that sources contributing to the natural petrogenic background are present in Prince William Sound. In particular, pyrogenic hydrocarbons such as combustion products of diesel are significant where human activity was extensive. In addition, petroleum produced from the Monterey Formation in California is present in Prince William Sound because, in the past, oil and asphalt shipped from California were widely used as fuel. Low level residues of weathered Alaskan North Slope crude oil from the Exxon Valdez spill are also still present. 30 refs., 4 tabs., 2 figs

  10. Derived equivalences for group rings

    CERN Document Server

    König, Steffen

    1998-01-01

    A self-contained introduction is given to J. Rickard's Morita theory for derived module categories and its recent applications in representation theory of finite groups. In particular, Broué's conjecture is discussed, giving a structural explanation for relations between the p-modular character table of a finite group and that of its "p-local structure". The book is addressed to researchers or graduate students and can serve as material for a seminar. It surveys the current state of the field, and it also provides a "user's guide" to derived equivalences and tilting complexes. Results and proofs are presented in the generality needed for group theoretic applications.

  11. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    Science.gov (United States)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident and, more recently, the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved to be efficient for assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. To efficiently document the evolution of the contamination, gamma dose rate measurements were numerous, well distributed within Japan and available at a high temporal frequency. However, dose rate data are not as easy to use as air sampling measurements and until now they have not been used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions for the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed.
The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in
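
    The core linear-inverse step that such source-term methods share can be sketched in a few lines: the observations y are assumed linear in the unknown release rates x through a source-receptor matrix H supplied by an atmospheric dispersion model, and x is recovered by least squares with a nonnegativity constraint. The sketch below is illustrative only; H, y and the projected-gradient solver are hypothetical stand-ins, not the estimation scheme of the record above.

```python
def matvec(H, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(h * xj for h, xj in zip(row, x)) for row in H]

def invert_source_term(H, y, steps=5000, lr=0.01):
    """Projected gradient descent for min ||Hx - y||^2 subject to x >= 0."""
    n = len(H[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [yi - fi for yi, fi in zip(y, matvec(H, x))]  # residual y - Hx
        # gradient of 0.5 * ||Hx - y||^2 is -H^T r
        grad = [-sum(H[i][j] * r[i] for i in range(len(H))) for j in range(n)]
        # gradient step, then project onto the nonnegative orthant
        x = [max(0.0, xj - lr * g) for xj, g in zip(x, grad)]
    return x

# Synthetic test case: 2 release periods, 3 receptors, known true rates.
H_true = [[0.8, 0.1], [0.3, 0.6], [0.05, 0.9]]
x_true = [4.0, 2.0]
y_obs = matvec(H_true, x_true)
x_est = invert_source_term(H_true, y_obs)
```

    With noise-free synthetic observations the solver recovers the true release rates; real applications add regularisation and error covariances, as the cited studies describe.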

  12. Total Nitrogen Sources of the Three Gorges Reservoir--A Spatio-Temporal Approach.

    Directory of Open Access Journals (Sweden)

    Chunping Ren

    Full Text Available Understanding the spatial and temporal variation of nutrient concentrations, loads, and their distribution from upstream tributaries is important for the management of large lakes and reservoirs. The Three Gorges Dam was built on the Yangtze River in China, the world's third longest river, and impounded the famous Three Gorges Reservoir (TGR). In this study, we analyzed total nitrogen (TN) concentrations and inflow data from 2003 to 2010 for the main upstream tributaries of the TGR, which contribute about 82% of the TGR's total inflow. We used time series analysis for seasonal decomposition of TN concentrations and used non-parametric statistical tests (Kruskal-Wallis H, Mann-Whitney U) as well as base flow segmentation to analyze significant spatial and temporal patterns of TN pollution input into the TGR. Our results show that TN concentrations had significant spatial heterogeneity across the study area (Tuo River > Yangtze River > Wu River > Min River > Jialing River > Jinsha River). Furthermore, we derived apparent seasonal changes in three of the five upstream tributaries of the TGR (Kruskal-Wallis H p = 0.009, 0.030 and 0.029 for the Tuo River, Jinsha River and Min River, respectively). TN pollution from non-point sources in the upstream tributaries accounted for 68.9% of the total TN input into the TGR. Non-point source pollution of TN revealed increasing trends for four of the five upstream tributaries of the TGR. Land use/cover and soil type were identified as the dominant driving factors for the spatial distribution of TN. Intensifying agriculture and increasing urbanization in the upstream catchments of the TGR were the main driving factors for the increase in non-point source pollution of TN from 2003 to 2010. Land use and land cover management as well as restriction of chemical fertilizer use are needed to overcome the threats of increasing TN pollution.
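
    The seasonal test named in this record, the Kruskal-Wallis H test, is simple enough to sketch in pure Python. The data below are synthetic seasonal concentration groups, not values from the study; in practice a library routine (e.g. scipy.stats.kruskal) would also supply the p-value.

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic with midranks and tie correction."""
    pooled = sorted(v for g in groups for v in g)
    N = len(pooled)
    # assign the average rank to each run of tied values
    ranks, ties, i = {}, 0, 0
    while i < N:
        j = i
        while j < N and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        t = j - i
        ties += t ** 3 - t
        i = j
    H = 12 / (N * (N + 1)) * sum(
        sum(ranks[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (N + 1)
    correction = 1 - ties / (N ** 3 - N)  # equals 1 when there are no ties
    return H / correction if correction else float("nan")

# Synthetic seasonal TN concentrations (mg/L); a large H suggests seasonality.
spring, summer, winter = [1.8, 2.1, 2.4], [1.1, 1.3, 1.2], [2.6, 2.9, 2.8]
H = kruskal_wallis_h([spring, summer, winter])
```

    The statistic is compared against a chi-squared distribution with (number of groups - 1) degrees of freedom to obtain the p-values reported in studies like this one.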

  13. A traveling wave approach to plasma pumping for X-ray sources

    International Nuclear Information System (INIS)

    Jensen, R.J.

    1989-01-01

    Progress in high-brightness excimer lasers and in optical angular multiplexing of excimer lasers presents an opportunity to provide very intense pumping of X-ray sources, both in favorable geometry and in travelling waves, all at low cost. The traveling-wave strategy can be tailored to the parameters of the system to be pumped. This design option can be of great importance for systems lasing at wavelengths in the kilovolt regime where upper level lifetimes are short, and where mirror technology is presently tenuous. Features of several design strategies are explored. (author)

  14. Small Works, Big Stories. Methodological approaches to photogrammetry through crowd-sourcing experiences

    Directory of Open Access Journals (Sweden)

    Seren Griffiths

    2015-12-01

    Full Text Available A recent digital public archaeology project (HeritageTogether) sought to build a series of 3D digital models using photogrammetry from crowd-sourced images. The project saw over 13,000 digital images donated, and resulted in models of some 78 sites, providing resources for researchers, and condition surveys. The project demonstrated that digital public archaeology does not stop at the 'trowel's edge', and that collaborative post-excavation analysis and generation of research processes are as important as time in the field. We emphasise in this contribution that our methodologies, as much as our research outputs, can be fruitfully co-produced in public archaeology projects.

  15. Panel cutting method: new approach to generate panels on a hull in Rankine source potential approximation

    Directory of Open Access Journals (Sweden)

    Hee-Jong Choi

    2011-12-01

    Full Text Available In the present study, a new hull panel generation algorithm, namely the panel cutting method, was developed to predict flow phenomena around a ship using the Rankine source potential based panel method, where an iterative method was used to satisfy the nonlinear free surface condition and the trim and sinkage of the ship were taken into account. Numerical computations were performed to investigate the validity of the proposed hull panel generation algorithm for the Series 60 (CB=0.60) hull and the KRISO container ship (KCS), a container ship designed by the Maritime and Ocean Engineering Research Institute (MOERI). The computational results were validated by comparison with existing experimental data.

  16. Panel cutting method: new approach to generate panels on a hull in Rankine source potential approximation

    Science.gov (United States)

    Choi, Hee-Jong; Chun, Ho-Hwan; Park, Il-Ryong; Kim, Jin

    2011-12-01

    In the present study, a new hull panel generation algorithm, namely the panel cutting method, was developed to predict flow phenomena around a ship using the Rankine source potential based panel method, where an iterative method was used to satisfy the nonlinear free surface condition and the trim and sinkage of the ship were taken into account. Numerical computations were performed to investigate the validity of the proposed hull panel generation algorithm for the Series 60 (CB=0.60) hull and the KRISO container ship (KCS), a container ship designed by the Maritime and Ocean Engineering Research Institute (MOERI). The computational results were validated by comparison with existing experimental data.

  17. Current approaches on the management of disused sealed sources in Bulgaria

    International Nuclear Information System (INIS)

    Benitez-Navarro, J. C.; Canizares, J.; Asuar, O.; Tapia, J.; Demireva, E.; Yordanova, O.; Stefanova, I.; Karadzhov, S.

    2005-01-01

    The main options for the safe management of existing Disused Sealed Radioactive Sources (DSRS) in Bulgaria are discussed. Specific installations for handling and conditioning of all types of DSRS are being designed. The necessary equipment and materials for all conditioning operations have been defined. As the final disposal route for radioactive wastes in Bulgaria is not yet defined, the proposed conditioning process ensures that end-point disposal of DSRS is not jeopardized by actions taken at present. All DSRS would be packaged in a secure, safe, monitorable and retrievable manner for interim storage.

  18. Online Qualitative Research in the Age of E-Commerce: Data Sources and Approaches

    Directory of Open Access Journals (Sweden)

    Nikhilesh Dholakia

    2004-05-01

    Full Text Available With the boom in E-commerce, practitioners and researchers are increasingly generating marketing and strategic insights by employing the Internet as an effective new tool for conducting well-established forms of qualitative research (TISCHLER 2004). The potential of the Internet as a rich data source and an attractive arena for qualitative research in e-commerce settings (in other words, cyberspace as a "field," in the ethnographic sense) has not received adequate attention. This paper explores qualitative research prospects in e-commerce arenas. URN: urn:nbn:de:0114-fqs0402299

  19. Nitrate Sources, Supply, and Phytoplankton Growth in the Great Australian Bight: An Eulerian-Lagrangian Modeling Approach

    Science.gov (United States)

    Cetina-Heredia, Paulina; van Sebille, Erik; Matear, Richard J.; Roughan, Moninya

    2018-02-01

    The Great Australian Bight (GAB), a coastal sea bordered by the Pacific, Southern, and Indian Oceans, sustains one of the largest fisheries in Australia but the geographical origin of nutrients that maintain its productivity is not fully known. We use 12 years of modeled data from a coupled hydrodynamic and biogeochemical model and an Eulerian-Lagrangian approach to quantify nitrate supply to the GAB and the region between the GAB and the Subantarctic Australian Front (GAB-SAFn), identify phytoplankton growth within the GAB, and ascertain the source of nitrate that fuels it. We find that nitrate concentrations have a decorrelation timescale of ~60 days; since most of the water from surrounding oceans takes longer than 60 days to reach the GAB, 23% and 75% of nitrate used by phytoplankton to grow are sourced within the GAB and from the GAB-SAFn, respectively. Thus, most of the nitrate is recycled locally. Although nitrate concentrations and fluxes into the GAB are greater below 100 m than above, 79% of the nitrate fueling phytoplankton growth is sourced from above 100 m. Our findings suggest that topographical uplift and stratification erosion are key mechanisms delivering nutrients from below the nutricline into the euphotic zone and triggering large phytoplankton growth. We find annual and semiannual periodicities in phytoplankton growth, peaking in the austral spring and autumn when the mixed layer deepens leading to a subsurface maximum of phytoplankton growth. This study highlights the importance of examining phytoplankton growth at depth and the utility of Lagrangian approaches.
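
    The decorrelation timescale used in this record, the lag at which the autocorrelation of a time series first drops below 1/e, can be estimated with a short routine. The series below are synthetic illustrations, not GAB model output.

```python
import math

def autocorr(x, k):
    """Lag-k sample autocorrelation of a sequence."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def decorrelation_lag(x, threshold=math.exp(-1)):
    """First lag at which the autocorrelation falls below 1/e."""
    for k in range(1, len(x)):
        if autocorr(x, k) < threshold:
            return k
    return len(x)

# A rapidly alternating series decorrelates immediately; a slowly
# varying (ramp-like) series stays correlated over many lags.
fast = [(-1) ** t for t in range(100)]
slow = list(range(100))
```

    Applied to daily nitrate concentrations, the returned lag plays the role of the ~60-day timescale quoted in the abstract.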

  20. Sound source measurement by using a passive sound insulation and a statistical approach

    Science.gov (United States)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows carrying out acoustic measurements inside noisy environments reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach allows improving of the sound insulation given only by the passive sound insulation system at low frequency. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case-studies here reported, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lower sound pressure level that can be measured by applying the proposed method and the measurement error related to its application are reported as well.

  1. Approach Motivation as Incentive Salience: Perceptual Sources of Evidence in Relation to Positive Word Primes

    Science.gov (United States)

    Ode, Scott; Winters, Patricia L.; Robinson, Michael D.

    2012-01-01

    Four experiments (total N = 391) examined predictions derived from a biologically-based incentive salience theory of approach motivation. In all experiments, judgments indicative of enhanced perceptual salience were exaggerated in the context of positive, relative to neutral or negative, stimuli. In Experiments 1 and 2, positive words were judged to be of a larger size (Experiment 1) and led individuals to judge subsequently presented neutral objects as larger in size (Experiment 2). In Experiment 3, similar effects were observed in a mock subliminal presentation paradigm. In Experiment 4, positive word primes were perceived to have been presented for a longer duration of time, again relative to both neutral and negative word primes. Results are discussed in relation to theories of approach motivation, affective priming, and the motivation-perception interface. PMID:21875189

  2. Approach motivation as incentive salience: perceptual sources of evidence in relation to positive word primes.

    Science.gov (United States)

    Ode, Scott; Winters, Patricia L; Robinson, Michael D

    2012-02-01

    Four experiments (total N = 391) examined predictions derived from a biologically based incentive salience theory of approach motivation. In all experiments, judgments indicative of enhanced perceptual salience were exaggerated in the context of positive, relative to neutral or negative, stimuli. In Experiments 1 and 2, positive words were judged to be of a larger size (Experiment 1) and led individuals to judge subsequently presented neutral objects as larger in size (Experiment 2). In Experiment 3, similar effects were observed in a mock subliminal presentation paradigm. In Experiment 4, positive word primes were perceived to have been presented for a longer duration of time, again relative to both neutral and negative word primes. Results are discussed in relation to theories of approach motivation, affective priming, and the motivation-perception interface. PsycINFO Database Record (c) 2012 APA, all rights reserved

  3. Emotional Self-Efficacy, Emotional Empathy and Emotional Approach Coping as Sources of Happiness

    OpenAIRE

    Tarık Totan; Tayfun Doğan; Fatma Sapmaz

    2013-01-01

    Among the many variables affecting happiness, there are those that arise from emotional factors. In this study, the hypothesis stating that happiness is affected by emotional self-efficacy, emotional empathy and emotional approach coping has been examined using the path model. A total of 334 university students participated in this study, 229 of whom were females and 105 being males. Oxford Happiness Questionnaire-Short Form, Emotional Self-efficacy Scale, Multi-Dimensional Emotional Empathy ...

  4. Multi-Scale Approach to Understanding Source-Sink Dynamics of Amphibians

    Science.gov (United States)

    2015-12-01

    spotted salamander, A. maculatum) at Fort Leonard Wood (FLW), Missouri. We used a multi-faceted approach in which we combined ecological, genetic...spotted salamander, A. maculatum) at Fort Leonard Wood, Missouri through a combination of intensive ecological field studies, genetic analyses, and...spatial demographic networks to identify optimal locations for wetland construction and restoration. Ecological Applications. Walls, S. C., Ball, L. C

  5. Approach Motivation as Incentive Salience: Perceptual Sources of Evidence in Relation to Positive Word Primes

    OpenAIRE

    Ode, Scott; Winters, Patricia L.; Robinson, Michael D.

    2011-01-01

    Four experiments (total N = 391) examined predictions derived from a biologically-based incentive salience theory of approach motivation. In all experiments, judgments indicative of enhanced perceptual salience were exaggerated in the context of positive, relative to neutral or negative, stimuli. In Experiments 1 and 2, positive words were judged to be of a larger size (Experiment 1) and led individuals to judge subsequently presented neutral objects as larger in size (Experiment 2). In Exper...

  6. Maximum Power Tracking by VSAS approach for Wind Turbine, Renewable Energy Sources

    Directory of Open Access Journals (Sweden)

    Nacer Kouider Msirdi

    2015-08-01

    Full Text Available This paper gives a review of the most efficient algorithms designed to track the maximum power point (MPP) for catching the maximum wind power by a variable speed wind turbine (VSWT). We then design a new maximum power point tracking (MPPT) algorithm using the Variable Structure Automatic Systems approach (VSAS). The proposed approach leads to efficient algorithms, as shown by the analysis and simulations in this paper.
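
    For context on the algorithms such reviews cover, the classic perturb-and-observe (hill-climbing) MPPT loop, the baseline that VSAS-type schemes aim to improve on, fits in a few lines. The turbine power curve here is a made-up quadratic, not a model from the paper.

```python
def power(omega):
    """Synthetic turbine power curve (kW) with its peak at omega = 10 rad/s."""
    return 100.0 - (omega - 10.0) ** 2

def perturb_and_observe(omega=2.0, step=0.2, iters=200):
    """Hill-climb toward the MPP: keep perturbing the rotor speed in the
    same direction while power rises, reverse direction when it falls."""
    p_prev = power(omega)
    direction = 1
    for _ in range(iters):
        omega += direction * step
        p = power(omega)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return omega

omega_mpp = perturb_and_observe()
```

    The loop settles into a small oscillation of width ~step around the true MPP, which is the known drawback of fixed-step perturb-and-observe that adaptive or variable-structure schemes address.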

  7. Source mechanism inversion and ground motion modeling of induced earthquakes in Kuwait - A Bayesian approach

    Science.gov (United States)

    Gu, C.; Toksoz, M. N.; Marzouk, Y.; Al-Enezi, A.; Al-Jeri, F.; Buyukozturk, O.

    2016-12-01

    The increasing seismic activity in the regions of oil/gas fields due to fluid injection/extraction and hydraulic fracturing has drawn new attention in both academia and industry. The source mechanisms and triggering stresses of these induced earthquakes are of great importance for understanding the physics of the seismic processes in reservoirs, and for predicting ground motion in the vicinity of oil/gas fields. The induced seismicity data in our study are from the Kuwait National Seismic Network (KNSN). Historically, Kuwait has low local seismicity; however, in recent years the KNSN has monitored more and more local earthquakes. Since 1997, the KNSN has recorded more than 1000 earthquakes, some also recorded by the Incorporated Research Institutions for Seismology (IRIS) and widely felt by people in Kuwait. These earthquakes happen repeatedly in the same locations close to the oil/gas fields in Kuwait (see the uploaded image). The earthquakes are generally small. The triggering stress of these earthquakes was calculated based on the source mechanism results. In addition, we modeled the ground motion in Kuwait due to these local earthquakes. Our results show that most likely these local earthquakes occurred on pre-existing faults and were triggered by oil field activities. These events are generally smaller than Mw 5; however, occurring in the reservoirs, they are very shallow, with focal depths of less than about 4 km. As a result, in Kuwait, where oil fields are close to populated areas, these induced earthquakes could produce ground accelerations high enough to cause damage to local structures built without seismic design criteria.

  8. An Open Source Approach for Modern Teaching Methods: The Interactive TGUI System

    Directory of Open Access Journals (Sweden)

    Gerlinde Dinges

    2011-03-01

    Full Text Available In order to facilitate teaching complex topics in an interactive way, the authors developed a computer-assisted teaching system, a graphical user interface named TGUI (Teaching Graphical User Interface). TGUI was introduced at the beginning of 2009 in the Austrian Journal of Statistics (Dinges and Templ 2009) as an effective instrument to train and teach staff on mathematical and statistical topics. While the fundamental principles were retained, the current TGUI system has undergone a complete redesign. The ultimate goal behind the reimplementation was to share the advantages of TGUI and to provide teachers and people who need to hold training courses with a strong tool that can enrich their lectures with interactive features. The idea was to go a step beyond the current modular blended-learning systems (see, e.g., Da Rin 2003) or the related teaching techniques of classroom voting (see, e.g., Cline 2006). In this paper the authors have attempted to exemplify the basic idea and concept of TGUI by means of statistics seminars held at Statistics Austria. The powerful open source software R (R Development Core Team 2010a) is the backend for TGUI, which can therefore be used to process even complex statistical contents. However, with specifically created contents the interactive TGUI system can be used to support a wide range of courses and topics. The open source R packages TGUICore and TGUITeaching are freely available from the Comprehensive R Archive Network at http://CRAN.R-project.org/.

  9. Physics of the Advanced Plasma Source: a review of recent experimental and modeling approaches

    International Nuclear Information System (INIS)

    Brinkmann, R P; Schröder, B; Lapke, M; Storch, R; Styrnoll, T; Awakowicz, P; Harhausen, J; Foest, R; Hannemann, M; Loffhagen, D; Ohl, A

    2016-01-01

    The Advanced Plasma Source (APS), a gridless hot cathode glow discharge capable of generating an ion beam with an energy of up to 150 eV and a flux of 10^19 s^-1, is a standard industrial tool for the process of plasma ion-assisted deposition (PIAD). This manuscript details the results of recent experimental and modeling work aimed at a physical understanding of the APS. A three-zone model is proposed which consists of (i) the ionization zone (the source itself) where the plasma is very dense, hot, and has a high ionization rate, (ii) the acceleration zone (of ~20 cm extension) where a strong outward-directed electric field accelerates the primary ions to a high kinetic energy, and (iii) a drift zone (the rest of the process chamber) where the emerging plasma beam is further modified by resonant charge exchange collisions that neutralize some of the energetic ions and generate, at the same time, a flux of slow ions. (paper)

  10. Investigation of source location determination from Magsat magnetic anomalies: The Euler method approach

    Science.gov (United States)

    Ravat, Dhananjay

    1996-01-01

    The applicability of the Euler method of source location determination was investigated on several model situations pertinent to satellite-data scale situations as well as Magsat data of Europe. Our investigations enabled us to understand the end-member cases for which the Euler method will work with the present satellite magnetic data and also the cases for which the assumptions implicit in the Euler method will not be met by the present satellite magnetic data. These results have been presented in one invited lecture at the Indo-US workshop on Geomagnetism in Studies of the Earth's Interior in August 1994 in Pune, India, and at one presentation at the 21st General Assembly of the IUGG in July 1995 in Boulder, CO. A new method, called Anomaly Attenuation Rate (AAR) Method (based on the Euler method), was developed during this study. This method is scale-independent and is appropriate to locate centroids of semi-compact three dimensional sources of gravity and magnetic anomalies. The method was presented during 1996 Spring AGU meeting and a manuscript describing this method is being prepared for its submission to a high-ranking journal. The grant has resulted in 3 papers and presentations at national and international meetings and one manuscript of a paper (to be submitted shortly to a reputable journal).
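
    The Euler method this record investigates rests on Euler's homogeneity equation, (x - x0) df/dx + (z - z0) df/dz = -N f for a field homogeneous of degree -N; given the field and its gradients, the source position follows from linear least squares. The sketch below uses a synthetic degree -1 field sampled on a surface profile, with analytically known derivatives (in real data the vertical derivative would come from a transform of the grid); all numbers are illustrative, not Magsat values.

```python
import math

def euler_locate(xs, f, fx, fz, N):
    """Least-squares solve x0*fx + z0*fz = x*fx + N*f for (x0, z0),
    using observations taken on the surface z = 0."""
    a11 = sum(v * v for v in fx)
    a12 = sum(p * q for p, q in zip(fx, fz))
    a22 = sum(v * v for v in fz)
    rhs = [x * gx + N * g for x, gx, g in zip(xs, fx, f)]
    b1 = sum(p * q for p, q in zip(fx, rhs))
    b2 = sum(p * q for p, q in zip(fz, rhs))
    det = a11 * a22 - a12 * a12  # 2x2 normal equations, Cramer's rule
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Synthetic field homogeneous of degree -1 (structural index N = 1):
# f = 1/r with a source at (x0, z0) = (3, 2); profile measured at z = 0.
x0_true, z0_true, N = 3.0, 2.0, 1
xs = [i * 0.5 for i in range(-10, 21)]
f, fx, fz = [], [], []
for x in xs:
    r = math.sqrt((x - x0_true) ** 2 + z0_true ** 2)
    f.append(1 / r)
    fx.append(-(x - x0_true) / r ** 3)  # df/dx at z = 0
    fz.append(z0_true / r ** 3)         # df/dz at z = 0
est = euler_locate(xs, f, fx, fz, N)
```

    With noise-free data satisfying the homogeneity equation exactly, the least-squares solution recovers the source position; the structural index N must be chosen to match the assumed source geometry, which is one of the assumptions the record discusses.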

  11. Source selection problem of competitive power plants under government intervention: a game theory approach

    Science.gov (United States)

    Mahmoudi, Reza; Hafezalkotob, Ashkan; Makui, Ahmad

    2014-06-01

    Pollution and environmental protection are extremely significant global problems in the present century. Power plants, as the largest pollution emitting industry, have been the subject of a great deal of scientific research. The fuel or source type used to generate electricity by the power plants plays an important role in the amount of pollution produced. Governments should take visible actions to promote green fuel. These actions are often called governmental financial interventions and include legislation such as green subsidies and taxes. In this paper, by considering the government's role in the competition between two power plants, we propose a game theoretical model that will help the government to determine the optimal taxes and subsidies. The numerical examples demonstrate how the government could intervene in a competitive electricity market to achieve its environmental objectives and how power plants maximize their utilities for each energy source. The results also reveal that the government's taxes and subsidies effectively influence the fuel types selected by power plants in the competitive market.
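
    A minimal sketch of the kind of model described: two plants in Cournot (quantity) competition with a government emission tax, assuming a linear inverse demand and constant marginal costs. All parameter values are hypothetical, not from the paper; the equilibrium is found by iterating best responses.

```python
def best_response(a, b, c_i, t, e_i, q_j):
    """Cournot best response under inverse demand P = a - b*(q_i + q_j),
    with marginal cost c_i and emission tax t on emission factor e_i."""
    return max(0.0, (a - c_i - t * e_i - b * q_j) / (2 * b))

def equilibrium(a, b, costs, emis, t, iters=200):
    """Iterate best responses until the quantities settle (Nash equilibrium)."""
    q = [0.0, 0.0]
    for _ in range(iters):
        q = [best_response(a, b, costs[0], t, emis[0], q[1]),
             best_response(a, b, costs[1], t, emis[1], q[0])]
    return q

# Hypothetical parameters: plant 0 burns a dirty fuel (emission factor 1),
# plant 1 a green one (factor 0); the tax shifts output toward plant 1.
a, b = 100.0, 1.0
costs, emis = [10.0, 20.0], [1.0, 0.0]
q_no_tax = equilibrium(a, b, costs, emis, t=0.0)
q_tax = equilibrium(a, b, costs, emis, t=30.0)
```

    The tax raises the dirty plant's effective marginal cost, so its equilibrium output falls while the green plant's rises, which is the qualitative effect of government intervention the paper's numerical examples illustrate.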

  12. Identification of groundwater nitrate sources in pre-alpine catchments: a multi-tracer approach

    Science.gov (United States)

    Stoewer, Myriam; Stumpp, Christine

    2014-05-01

    Porous aquifers in pre-alpine areas are often used as drinking water resources due to their good water quality status and water yield. Maintaining these resources requires knowledge about possible sources of pollutants and a sustainable management practice in groundwater catchment areas. Of particular interest in agricultural areas, as in pre-alpine regions, is limiting the input of nitrate, the main groundwater pollutant. Therefore, the objectives of the presented study are (i) to identify the main nitrate sources in a pre-alpine groundwater catchment with currently low nitrate concentrations using stable isotopes of nitrate (d18O and d15N) and (ii) to investigate seasonal dynamics of nitrogen compounds. The groundwater catchment areas of four porous aquifers are located in Southern Germany. Most of the land use is organic grassland farming as well as forestry and residential area. Thus, potential sources of nitrate are mainly mineral fertilizer, manure/slurry, a leaking sewage system and atmospheric deposition of nitrogen compounds. Monthly freshwater samples (precipitation, river water and groundwater) are analysed for stable isotopes of water (d2H, d18O), the concentration of major anions and cations, electrical conductivity, water temperature, pH and oxygen. In addition, isotopic analysis of d18O-NO3- and d15N-NO3- for selected samples is carried out using the denitrifier method. In general, all groundwater samples were oxic (10.0±2.6 mg/L) and nitrate concentrations were low (0.2 - 14.6 mg/L). Comparison of the observed nitrate isotope values with values from local precipitation, sewage, manure and mineral fertilizer, as well as with data from the literature, shows that the nitrate in the freshwater samples is of microbial origin, derived from ammonium in fertilizers and precipitation as well as from soil nitrogen.
It is suggested that a major potential threat to the groundwater quality is ammonia and ammonium at a constant level mainly from agriculture activities as

  13. Development of air equivalent gamma dose monitor

    International Nuclear Information System (INIS)

    Alex, Mary; Bhattacharya, Sadhana; Karpagam, R.; Prasad, D.N.; Jakati, R.K.; Mukhopadhyay, P.K.; Patil, R.K.

    2010-01-01

The paper describes the design and development of an air equivalent gamma absorbed dose monitor. The monitor has a gamma sensitivity of 84 pA/R/h for a 60Co source. The monitor has been characterized for the energy dependence of its gamma sensitivity and for its response to gamma radiation fields from 1 R/h to 5000 R/h. The gamma sensitivity in the energy range of 0.06 to 1.25 MeV relative to the 137Cs nuclide was within 2.5%. The linearity of the monitor response as a function of gamma field from 10 R/h to 3.8 kR/h was within 6%. The monitor has been designed for application in harsh environments and has been successfully qualified to meet environmental shock requirements. (author)

  14. Morphology-controlled synthesis of ZnS nanostructures via single-source approaches

    International Nuclear Information System (INIS)

    Han, Qiaofeng; Qiang, Fei; Wang, Meijuan; Zhu, Junwu; Lu, Lude; Wang, Xin

    2010-01-01

ZnS nanoparticles of various morphologies, including hollow spheres, solid spheres and polyhedra, were synthesized from the single-source precursor Zn(S2COC2H5)2 without using a surfactant or template. The as-prepared samples were characterized by X-ray diffraction, scanning electron microscopy and transmission electron microscopy. The results indicate that ZnS hollow and solid spheres assembled from nanoparticles can be easily generated by solution-phase thermolysis of Zn(S2COC2H5)2 at 80 °C using N,N-dimethylformamide (DMF) and ethylene glycol (EG) or water as solvents, respectively, whereas a solvothermal process with the same precursor led to ZnS nanoparticles of polyhedral shape with an average size of 120 nm. The optical properties of these ZnS nanostructures were investigated by room-temperature luminescence and UV-vis diffuse reflectance spectra.

  15. The use of alternative energy sources - the best approach to improving environmental situation in Azerbaijan

    International Nuclear Information System (INIS)

    Aliyev, F.G.; Khalilova, H.Kh.; Aliyev, F.F.

    2006-01-01

Energy supply is essential to the development of Azerbaijan. However, the country remains reliant on fossil fuels to meet its energy demand, which exhausts energy resources while increasing environmental pollution in the region. Analysis of the present situation shows that, in order to prevent global disasters, the existing energy systems must change. Azerbaijan must seek new ways of generating energy that do not sacrifice the natural environment, that protect the health of the population and that promote sustainable development of the region. The International Ecoenergy Academy (IEA) has long been engaged in the development of projects on the use of alternative energy sources. Based on the results of these studies, we suggest that the introduction of modern renewable energy technologies can help reduce the health impacts of air pollution, the ecological effects of acid rain, and the hazards of greenhouse gas emissions and climate change, while providing people with environmentally clean energy and new job opportunities. (authors)

  16. LAVA: An Open-Source Approach To Designing LAMP (Loop-Mediated Isothermal Amplification) DNA Signatures

    Directory of Open Access Journals (Sweden)

    Gardner Shea N

    2011-06-01

    Full Text Available Abstract Background We developed an extendable open-source Loop-mediated isothermal AMPlification (LAMP) signature design program called LAVA (LAMP Assay Versatile Analysis). LAVA was created in response to limitations of existing LAMP signature programs. Results LAVA identifies combinations of six primer regions for basic LAMP signatures, or combinations of eight primer regions for LAMP signatures with loop primers. The identified primers are conserved among target organism sequences. Primer combinations are optimized based on lengths, melting temperatures, and spacing among primer sites. We compare LAMP signature candidates for Staphylococcus aureus created both by LAVA and by PrimerExplorer. We also include signatures from a sample run targeting all strains of Mycobacterium tuberculosis. Conclusions We have designed and demonstrated new software for identifying signature candidates appropriate for LAMP assays. The software is available for download at http://lava-dna.googlecode.com/.
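The abstract above says candidate primer regions are screened on length, melting temperature and spacing. As a minimal illustration of that per-primer screening step (not LAVA's actual scoring code, and with illustrative thresholds rather than LAVA's defaults), one can estimate Tm for a short oligo with the Wallace rule and apply simple filters:

```python
def wallace_tm(seq):
    """Wallace-rule melting temperature for short oligos: 2*(A+T) + 4*(G+C)."""
    seq = seq.upper()
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

def passes_filter(seq, len_range=(18, 24), tm_range=(50, 65), gc_range=(0.4, 0.6)):
    """Screen one candidate primer region on length, Tm and GC content.

    The thresholds here are illustrative, not LAVA's actual defaults.
    """
    seq = seq.upper()
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    return (len_range[0] <= len(seq) <= len_range[1]
            and tm_range[0] <= wallace_tm(seq) <= tm_range[1]
            and gc_range[0] <= gc <= gc_range[1])

candidate = "ATGCGTACGTTAGCCTAGGA"   # hypothetical 20-mer primer region
print(wallace_tm(candidate), passes_filter(candidate))
```

A full designer would additionally enforce spacing constraints among the six (or eight) primer sites before accepting a signature.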

  17. The Multi-Messenger Approach to High-Energy Gamma-Ray Sources

    CERN Document Server

    Paredes, Josep M; Torres, Diego F

    2008-01-01

    This book provides a theoretical and observational overview of the state of the art of gamma-ray astrophysics and its impact on, and connection with, the physics of cosmic rays and neutrinos. With the aim of shedding new and fresh light on the problem of the nature of gamma-ray sources, particularly those yet unidentified, this book summarizes contributions to a workshop that continues the series initiated by the meetings held at Tonantzintla in October 2000 and Hong Kong in May 2004. This book will be of interest to all active researchers in the field of high energy astrophysics and astroparticle physics, as well as to graduate students entering the subject.

  18. Source term estimation for small-sized HTRs: status and further needs - a German approach

    International Nuclear Information System (INIS)

    Moormann, R.; Schenk, W.; Verfondern, K.

    2000-01-01

    The main results of German studies on source term estimation for small pebble-bed HTRs, with their strict safety demands, are outlined. Core heat-up events are no longer dominant for modern high quality fuel, but fission product transport during water ingress accidents (steam cycle plants) and depressurization is relevant, mainly due to remobilization of fission products which were plated out in the course of normal operation or became dust borne. An important gap in knowledge was identified concerning data on plate-out under normal operation, as well as on the behaviour of dust borne activity as a whole. Improved knowledge in this field is also important for maintenance/repair and design/shielding. For core heat-up events, the influence of burn-up on temperature-induced fission product release has to be measured for future high burn-up fuel. Also, transport mechanisms out of the He circuit into the environment require further examination. For water/steam ingress events, mobilization of plated-out fission products by steam or water has to be considered in detail, along with steam interaction with kernels of particles with defective coatings. For source terms of depressurization, more detailed knowledge of the flow pattern and shear forces on the various surfaces is necessary. In order to improve knowledge of plate-out and dust in normal operation and to generate specimens for experimental remobilization studies, the planning/design of plate-out/dust examination facilities which could be added to the next generation of HTRs (HTR-10, HTTR) is proposed. For severe air ingress and reactivity accidents, the behaviour of future advanced fuel elements has to be experimentally tested. (authors)

  19. A geochemical approach to determine sources and movement of saline groundwater in a coastal aquifer.

    Science.gov (United States)

    Anders, Robert; Mendez, Gregory O; Futa, Kiyoto; Danskin, Wesley R

    2014-01-01

    Geochemical evaluation of the sources and movement of saline groundwater in coastal aquifers can aid in the initial mapping of the subsurface when geological information is unavailable. Chloride concentrations of groundwater in a coastal aquifer near San Diego, California, range from about 57 to 39,400 mg/L. On the basis of relative proportions of major-ions, the chemical composition is classified as Na-Ca-Cl-SO4, Na-Cl, or Na-Ca-Cl type water. δ(2)H and δ(18)O values range from -47.7‰ to -12.8‰ and from -7.0‰ to -1.2‰, respectively. The isotopically depleted groundwater occurs in the deeper part of the coastal aquifer, and the isotopically enriched groundwater occurs in zones of sea water intrusion. (87)Sr/(86)Sr ratios range from about 0.7050 to 0.7090, and differ between shallower and deeper flow paths in the coastal aquifer. (3)H and (14)C analyses indicate that most of the groundwater was recharged many thousands of years ago. The analysis of multiple chemical and isotopic tracers indicates that the sources and movement of saline groundwater in the San Diego coastal aquifer are dominated by: (1) recharge of local precipitation in relatively shallow parts of the flow system; (2) regional flow of recharge of higher-elevation precipitation along deep flow paths that freshen a previously saline aquifer; and (3) intrusion of sea water that entered the aquifer primarily during premodern times. Two northwest-to-southeast trending sections show the spatial distribution of the different geochemical groups and suggest the subsurface in the coastal aquifer can be separated into two predominant hydrostratigraphic layers. © 2013, National Ground Water Association.

  20. Using a dual isotopic approach to trace sources and mixing of sulphate in Changjiang Estuary, China

    International Nuclear Information System (INIS)

    Li Siliang; Liu Congqiang; Patra, Sivaji; Wang Fushun; Wang Baoli; Yue Fujun

    2011-01-01

    Highlights: → Changjiang Estuary plays an important role in the transport of water and solutes. → The dual isotopic method can be used to understand sulphate biogeochemistry in estuaries. → Mixing processes should be a major factor in the distribution of water and sulphate. → Sulphate in the Changjiang River is mainly derived from atmospheric deposition, evaporite dissolution and sulphide oxidation. - Abstract: The dual isotopic compositions of dissolved SO₄²⁻ in aquatic systems are commonly used to ascertain SO₄²⁻ sources and possible biogeochemical processes. In this study, the physical parameters, major anions and isotopic compositions of SO₄²⁻ were determined in water samples from the Changjiang River (Nanjing) to the East Sea in the Changjiang Estuary. The salinity ranged from 0‰ to 32.3‰ in the estuary water samples. The Cl⁻ and SO₄²⁻ concentrations and δ¹⁸O-H₂O values followed the salinity variations from freshwater to seawater, indicating that mixing processes might be a major factor in the distribution of water and solutes. The contents and isotopic compositions of SO₄²⁻ suggested that atmospheric deposition, evaporite dissolution and sulphide oxidation were the major sources of dissolved SO₄²⁻ in the freshwater of the Changjiang River. In addition, a mixing model calculated from the contents and isotopic compositions of SO₄²⁻ indicated that the mixing of freshwater and seawater was the major factor in the SO₄²⁻ distribution in the Changjiang Estuary. However, slightly elevated δ¹⁸O-SO₄ values were observed in the turbidity maximum zone, which suggested that biological processes might affect the O isotopic compositions of SO₄²⁻ there.
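The estuarine mixing model described above rests on conservative two-endmember mixing: a conservative tracer (here salinity) fixes the seawater fraction of a sample, and that fraction then predicts the sample's sulphate if mixing alone controls it. A minimal sketch, with the freshwater/seawater endmember values being illustrative assumptions rather than the study's measured values:

```python
def seawater_fraction(tracer_sample, tracer_fresh, tracer_sea):
    """Conservative two-endmember mixing fraction of the 'sea' endmember."""
    return (tracer_sample - tracer_fresh) / (tracer_sea - tracer_fresh)

# Salinity as the conservative tracer, endmembers from the reported
# 0 - 32.3 permil range; a hypothetical mid-estuary sample of 16.15 permil:
f_sea = seawater_fraction(16.15, 0.0, 32.3)

# The same fraction predicts the sample's sulphate under pure mixing
# (endmember SO4 concentrations below are illustrative, in mg/L):
so4_fresh, so4_sea = 15.0, 2700.0
so4_pred = so4_fresh + f_sea * (so4_sea - so4_fresh)
print(f_sea, so4_pred)
```

Departures of measured SO₄²⁻ (or its isotopes) from this mixing prediction are what flag non-conservative processes such as the biological effects noted in the turbidity maximum zone.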

  1. An artificial neural network approach to reconstruct the source term of a nuclear accident

    International Nuclear Information System (INIS)

    Giles, J.; Palma, C. R.; Weller, P.

    1997-01-01

    This work makes use of one of the main features of artificial neural networks: their ability to 'learn' from sets of known input and output data. Indeed, a trained artificial neural network can be used to make predictions on the input data when the output is known, and this feedback process enables one to reconstruct the source term from field observations. With this aim, artificial neural networks were trained using the projections of a segmented-plume atmospheric dispersion model at fixed points, simulating a set of gamma detectors located outside the perimeter of a nuclear facility. The resulting set of artificial neural networks was used to determine the release fraction and rate for each of the noble gases, iodines and particulate fission products that could originate from a nuclear accident. Model projections were made using a large data set consisting of effective release height, release fractions of noble gases, iodines and particulate fission products, atmospheric stability, wind speed and wind direction. The model computed nuclide-specific gamma dose rates. The locations of the detectors were chosen taking into account both building shine and wake effects, and varied in distance between 800 and 1200 m from the reactor. The inputs to the artificial neural networks consisted of the measurements from the detector array, atmospheric stability, wind speed and wind direction; the outputs comprised a set of release fractions and heights. Once trained, the artificial neural networks were used to reconstruct the source term from the detector responses for data sets not used in training. The preliminary results are encouraging and show that the noble gas and particulate fission product release fractions are well determined.
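The workflow above is supervised regression: dispersion-model runs provide (detector readings + meteorology) → (release fraction) training pairs, and a network inverts that mapping at prediction time. The sketch below illustrates the idea only; the data are synthetic, the 11-input/1-output shapes are assumptions (8 hypothetical detectors plus wind speed, wind direction and stability), and the tiny one-hidden-layer network is not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training set: 8 detector dose rates plus
# wind speed, wind direction and stability class (inputs), and a release
# fraction (output). The "true" mapping is arbitrary -- it only serves
# to demonstrate the train-then-invert workflow.
X = rng.uniform(0.0, 1.0, size=(500, 11))
y = (0.3 * X[:, :8].mean(axis=1) + 0.2 * X[:, 8] * X[:, 9])[:, None]

# One hidden tanh layer trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, size=(11, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1));  b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)   # error before training

lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                        # dLoss/dpred (up to a constant)
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)      # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)
print(loss0, loss)   # training error should drop substantially
```

Once trained, the same `forward` pass applied to real detector readings plays the role of the source-term reconstruction step described in the abstract.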

  2. A Synergistic Approach to Human Rights and Public Health Ethics: Effective or a Source of Conflict?

    Directory of Open Access Journals (Sweden)

    Steinmetz-Wood, Madeleine

    2014-12-01

    Full Text Available Concerns over the growing disparities in health and wealth between members of society incited Stephanie Nixon and Lisa Forman, in their 2008 article Exploring synergies between human rights and public health ethics: A whole greater than the sum of its parts, to propose that the principles of human rights and public health ethics should be used in combination to develop norms for health action. This commentary reflects on the benefits as well as the difficulties that could arise from taking such an approach.

  3. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John

    and the hydraulic gradient across the control plane and are consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox...... transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. Tests show that the decoupled approach is both efficient and able to provide accurate uncertainty...

  4. Effective dose equivalents from external radiation due to Chernobyl accident

    International Nuclear Information System (INIS)

    Erkin, V.G.; Debedev, O.V.; Balonov, M.I.; Parkhomenko, V.I.

    1992-01-01

    Summarized data are presented on measurements of individual doses from external γ-sources received in 1987-1990 by the population of western areas of the Bryansk region. The type of distribution of the effective dose equivalent, and its values for various professional and social groups of the population depending on house type, are discussed. Relationships connecting surface soil activity in a populated locality with the average dose from external radiation sources are presented. The trend in dose variation over 1987-1990 is shown.

  5. Nitrogen Source Inventory and Loading Tool: An integrated approach toward restoration of water-quality impaired karst springs.

    Science.gov (United States)

    Eller, Kirstin T; Katz, Brian G

    2017-07-01

    Nitrogen (N) from anthropogenic sources has contaminated groundwater used as drinking water, in addition to impairing the water quality and ecosystem health of karst springs. The Nitrogen Source Inventory and Loading Tool (NSILT) was developed as an ArcGIS and spreadsheet-based approach that provides spatial estimates of current nitrogen (N) inputs to the land surface and loads to groundwater from nonpoint and point sources within the groundwater contributing area. The NSILT involves a three-step approach in which local and regional land use practices and N sources are evaluated to: (1) estimate N input to the land surface, (2) quantify subsurface environmental attenuation, and (3) assess regional recharge to the aquifer. NSILT was used to assess nitrogen loading to groundwater in two karst spring areas in west-central Florida: Rainbow Springs (RS) and Kings Bay (KB). The karstic Upper Floridan aquifer (UFA) is the source of water discharging to the springs in both areas. In the KB study area (predominantly urban land use), septic systems and urban fertilizers contribute 48% and 22%, respectively, of the estimated total annual N load to groundwater of 294,400 kg-N/yr. In contrast, for the RS study area (predominantly agricultural land use), livestock operations and crop fertilizers contribute 50% and 13%, respectively, of the estimated N load to groundwater. Using the overall groundwater N loading rates for the KB and RS study areas (4.4 and 3.3 kg N/ha, respectively) and spatial recharge rates, the calculated groundwater nitrate-N concentration (2.1 mg/L) agreed closely with the median nitrate-N concentration (1.7 mg/L) from groundwater samples in agricultural land use areas in the RS study area for the period 2010-2014. NSILT results provide critical information for prioritizing and designing restoration efforts for water-quality impaired springs and spring runs affected by multiple sources of nitrogen loading to groundwater. The calculated groundwater N concentration for
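The concentration check at the end of the abstract amounts to dividing an area-normalized N load by the recharge water volume over the same area. A minimal sketch of that unit conversion; the recharge rate of 0.21 m/yr used below is an illustrative assumption, not a value reported by the study:

```python
def groundwater_conc_mg_per_L(load_kg_per_ha_yr, recharge_m_per_yr):
    """N concentration in recharge water, in mg/L.

    1 kg/ha = 1e6 mg spread over 1e4 m2 = 100 mg/m2.
    A recharge depth of R m/yr over 1 m2 is 1000*R litres of water.
    """
    mg_per_m2 = load_kg_per_ha_yr * 100.0
    litres_per_m2 = recharge_m_per_yr * 1000.0
    return mg_per_m2 / litres_per_m2

# Hypothetical recharge of 0.21 m/yr with the reported RS-area loading rate:
print(round(groundwater_conc_mg_per_L(4.4, 0.21), 1))  # about 2.1 mg/L
```

With the reported 4.4 kg N/ha loading rate, a recharge near 0.2 m/yr reproduces a concentration of roughly 2.1 mg/L, consistent in magnitude with the value quoted in the abstract.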

  6. Tissue equivalence in neutron dosimetry

    International Nuclear Information System (INIS)

    Nutton, D.H.; Harris, S.J.

    1980-01-01

    A brief review is presented of the essential features of neutron tissue equivalence for radiotherapy, and the results of a computation of relative absorbed dose for 14 MeV neutrons using various tissue models are given. It is concluded that, for the Bragg-Gray equation for ionometric dosimetry, it is not sufficient to define the value of W to high accuracy: for dosimetric measurements to be applicable to real body tissue to an accuracy of better than several per cent, a correction to the total absorbed dose must be made according to the test and tissue atomic composition, although variations in patient anatomy and other radiotherapy parameters will often limit the benefits of such detailed dosimetry. (U.K.)

  7. Fast-grown CdS quantum dots: Single-source precursor approach vs microwave route

    Energy Technology Data Exchange (ETDEWEB)

    Fregnaux, Mathieu [Laboratoire de Chimie et Physique: Approche Multi-échelles des Milieux Complexes, Institut Jean Barriol, Université de Lorraine, 1 Boulevard Arago, 57070 Metz (France); Dalmasso, Stéphane, E-mail: stephane.dalmasso@univ-lorraine.fr [Laboratoire de Chimie et Physique: Approche Multi-échelles des Milieux Complexes, Institut Jean Barriol, Université de Lorraine, 1 Boulevard Arago, 57070 Metz (France); Durand, Pierrick [Laboratoire de Cristallographie, Résonance Magnétique et Modélisations, Institut Jean Barriol, Université de Lorraine, UMR CNRS 7036, Faculté des Sciences, BP 70239, 54506 Vandoeuvre lès Nancy (France); Zhang, Yudong [Laboratoire d' Etude des Microstructures et de Mécanique des Matériaux, Université de Lorraine, UMR CNRS 7239, Ile du Saulcy, 57045 Metz cedex 01 (France); Gaumet, Jean-Jacques; Laurenti, Jean-Pierre [Laboratoire de Chimie et Physique: Approche Multi-échelles des Milieux Complexes, Institut Jean Barriol, Université de Lorraine, 1 Boulevard Arago, 57070 Metz (France)

    2013-10-01

    A cross-disciplinary protocol of characterization by joint techniques enables close comparison of the chemical and physical properties of CdS quantum dots (QDs) grown by single-source precursor methodology (SSPM) or by a microwave synthetic route (MWSR). The results are discussed in relation to the synthesis protocols. The QD average sizes, reproducible as a function of the temperatures involved in the growth processes, range complementarily over 2.8-4.5 nm and 4.5-5.2 nm for SSPM and MWSR, respectively. The hexagonal and cubic structures found by X-ray diffraction for SSPM and MWSR grown CdS QDs, respectively, are tentatively correlated to a better crystalline quality of the latter with respect to the former, suggested by (i) a remarkable stability of the MWSR grown QDs after exposure to air for several days and (ii) no evidence of their fragmentation during mass spectrometry (MS) analyses, given the fair agreement between size dispersities obtained by transmission electron microscopy (TEM) and MS, in contrast with the discrepancy found for the SSPM grown QDs. Correlatively, a better optical quality of the MWSR grown QDs is suggested by the resolution of n > 1 excitonic transitions in their absorption spectra. The QD average sizes obtained by TEM and deduced from MS are in overall agreement. This agreement is improved for the MWSR grown QDs when taking into account the prolate shape of the QDs also observed in the TEM images. For both series of samples, the excitonic responses vs the average sizes are consistent with the commonly admitted empirical energy-size correspondence. A low energy PL band is observed in the case of the SSPM grown QDs. Its decrease in intensity with increasing QD size suggests a surface origin, tentatively attributed to S vacancies. In the case of the MWSR grown QDs, the absence of this PL is tentatively correlated to an absence of S vacancies and therefore to the stable behaviour observed when the QDs are exposed to air. - Highlights: • Single

  8. Expanding the Interaction Equivalency Theorem

    Directory of Open Access Journals (Sweden)

    Brenda Cecilia Padilla Rodriguez

    2015-06-01

    Full Text Available Although interaction is recognised as a key element for learning, its incorporation in online courses can be challenging. The interaction equivalency theorem provides guidelines: Meaningful learning can be supported as long as one of three types of interactions (learner-content, learner-teacher and learner-learner is present at a high level. This study sought to apply this theorem to the corporate sector, and to expand it to include other indicators of course effectiveness: satisfaction, knowledge transfer, business results and return on expectations. A large Mexican organisation participated in this research, with 146 learners, 30 teachers and 3 academic assistants. Three versions of an online course were designed, each emphasising a different type of interaction. Data were collected through surveys, exams, observations, activity logs, think aloud protocols and sales records. All course versions yielded high levels of effectiveness, in terms of satisfaction, learning and return on expectations. Yet, course design did not dictate the types of interactions in which students engaged within the courses. Findings suggest that the interaction equivalency theorem can be reformulated as follows: In corporate settings, an online course can be effective in terms of satisfaction, learning, knowledge transfer, business results and return on expectations, as long as (a at least one of three types of interaction (learner-content, learner-teacher or learner-learner features prominently in the design of the course, and (b course delivery is consistent with the chosen type of interaction. Focusing on only one type of interaction carries a high risk of confusion, disengagement or missed learning opportunities, which can be managed by incorporating other forms of interactions.

  9. Sustainability index approach as a selection criteria for energy storage system of an intermittent renewable energy source

    International Nuclear Information System (INIS)

    Raza, Syed Shabbar; Janajreh, Isam; Ghenai, Chaouki

    2014-01-01

    Highlights: • Three renewable energy storage options considered: lead acid batteries, lithium polymer batteries and fuel cells. • The hydrogen fuel cell system is the most feasible option for long-term energy storage. • The sustainability index approach is a novel method used to quantify the qualitative properties of the system. - Abstract: The sustainability index is an adaptive, multicriteria and novel technique that is used to compare different energy storage systems for their sustainability. This innovative concept utilizes both qualitative and quantitative results to measure sustainability through an index-based approach. This report compares three different energy storage options for an intermittent renewable energy source: lead acid batteries, lithium polymer batteries and fuel cell systems, selected due to their availability and the geographical constraint on using other energy storage options. The renewable energy source used is solar photovoltaic (PV). Several technical, economic and environmental factors are discussed in detail, which help evaluate the merits of each energy storage system for long-term storage. Finally, a novel sustainability index is proposed which quantifies the qualitative and quantitative aspects of the factors discussed, and thus helps choose the ideal energy storage system for the scenario. A weighted-sum approach is used to quantify each factor according to its importance. After a detailed analysis of the three energy storage systems through the sustainability index approach, the most feasible energy storage option was found to be the fuel cell system, which can provide long-term energy storage and is also environmentally friendly.
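The weighted-sum index described above can be sketched in a few lines. The criterion scores and weights below are purely hypothetical placeholders (the study's actual factors and weighting are not reproduced here); the sketch only shows the mechanics of ranking the three options:

```python
# Hypothetical criterion scores (0-10) and weights; not the study's values.
weights = {"technical": 0.4, "economic": 0.3, "environmental": 0.3}
scores = {
    "lead_acid":       {"technical": 5, "economic": 8, "environmental": 4},
    "lithium_polymer": {"technical": 7, "economic": 5, "environmental": 6},
    "fuel_cell":       {"technical": 8, "economic": 6, "environmental": 9},
}

def sustainability_index(option_scores, weights):
    """Weighted sum of criterion scores for one storage option."""
    return sum(weights[c] * option_scores[c] for c in weights)

index = {opt: sustainability_index(s, weights) for opt, s in scores.items()}
best = max(index, key=index.get)
print(index, best)
```

With these illustrative numbers the fuel cell option scores highest, mirroring the abstract's conclusion; in practice the ranking depends entirely on how the qualitative factors are scored and weighted.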

  10. Costa Rica as a source of emigrants: a reading from a political economy approach

    Directory of Open Access Journals (Sweden)

    Gustavo Gatica López

    2017-04-01

    Full Text Available Available data shows an increase in international migration departing from Costa Rica, mainly to the United States. Based on the data obtained from two surveys conducted with potential emigrants and families with members living abroad, this paper is aimed at understanding their reasons for emigrating. In addition, some socio-economic impacts in four suburbs with high rates of emigration are identified. From a political economy approach, the most appropriate framework to better understand these emigration cases is discussed.  Moreover, the transformation of the employment and productive matrix followed by Costa Rica during the last three decades, as well as the country’s form of insertion into the international economy are two structural factors strongly linked to the emigration of the subjects studied in this paper.

  11. Voltage control in Z-source inverter using low cost microcontroller for undergraduate approach

    Science.gov (United States)

    Zulkifli, Shamsul Aizam; Sewang, Mohd Rizal; Salimin, Suriana; Shah, Noor Mazliza Badrul

    2017-09-01

    This paper focuses on controlling the output voltage of a Z-Source Inverter (ZSI) using a low-cost microcontroller, with MATLAB-Simulink used to interface the voltage control at the output of the ZSI. The key advantage of this system is the ability of a low-cost microcontroller to process the voltage control blocks based on mathematical equations created in MATLAB-Simulink. Proportional Integral (PI) control equations are applied and then downloaded to the microcontroller, so that changes in the output voltage can be observed in response to changes in the PI reference. The system has been simulated in MATLAB and verified with a hardware setup. As a result, the Raspberry Pi and Arduino used in this work respond well to changes at the ZSI output. This shows that introducing the method to students at the undergraduate level helps them understand the operation of a power converter combined with a feedback control function implemented on a low-cost microcontroller.
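The PI loop at the heart of the setup can be sketched independently of the hardware. The snippet below regulates a crude first-order stand-in for the ZSI output stage toward a voltage reference; the gains, time step and plant constant are illustrative assumptions, not the paper's tuned values:

```python
# Discrete PI loop on a first-order plant model dv/dt = 10*duty - v.
kp, ki, dt = 0.8, 4.0, 1e-3      # illustrative PI gains and time step
v_ref = 48.0                     # hypothetical output-voltage reference (V)
v_out, integral = 0.0, 0.0

for _ in range(5000):            # 5 s of simulated time
    error = v_ref - v_out
    integral += error * dt       # integral action removes steady-state error
    duty = kp * error + ki * integral
    # forward-Euler step of the crude output-stage model
    v_out += dt * (duty * 10.0 - v_out)

print(v_out)   # settles close to the 48 V reference
```

On the real microcontroller this loop body runs once per sampling interrupt, with `v_out` read from an ADC and `duty` written to the PWM peripheral.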

  12. Development of an expert system for tsunami warning: a unit source approach

    International Nuclear Information System (INIS)

    Roshan, A.D.; Pisharady, Ajai S.; Bishnoi, L.R.; Shah, Meet

    2015-01-01

    The coastal region of India has experienced tsunamis since historical times. Many nuclear facilities, including nuclear power plants (NPPs), located along the coast are thus exposed to tsunami hazards. For the safety of these facilities, as well as the safety of citizens, it is necessary to predict the possibility of a tsunami occurring for a recorded earthquake event and to evaluate the tsunami hazard posed by that earthquake. To address these concerns, this work designs an expert system for tsunami warning for the Indian coast, with emphasis on the evaluation of tsunami heights and arrival times at various nuclear facility sites. The expert system identifies whether or not an event is potentially tsunamigenic based on earthquake data inputs. Rupture parameters are worked out for the event, and unit tsunami source estimates, available as a precomputed database, are combined appropriately to estimate the wave heights and times of arrival at desired locations along the coast. The system also predicts tsunami wave heights at predefined locations such as nuclear power plant and other nuclear facility sites. The time of arrival of the first wave along the Indian coast is also evaluated.
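The unit-source step described above exploits the near-linearity of open-ocean tsunami propagation: the wave at a coastal site is assembled as a weighted sum of precomputed per-unit-source time series, with weights derived from the event's rupture parameters. A minimal sketch with entirely hypothetical series and weights (the real database holds one precomputed series per unit source per coastal location):

```python
import numpy as np

# Hypothetical precomputed unit-source wave-height series at one site.
t = np.linspace(0, 3600, 361)                     # seconds after the event
unit_sources = {
    "A": 0.02 * np.sin(2 * np.pi * t / 1200),
    "B": 0.01 * np.sin(2 * np.pi * (t - 300) / 1500),
}

# Scale factors that would come from the inferred rupture parameters
# (e.g. slip assigned to each unit fault); illustrative values only.
weights = {"A": 3.5, "B": 1.2}

# Linear superposition of the unit-source responses at the site.
eta = sum(w * unit_sources[k] for k, w in weights.items())
peak = float(eta.max())
arrival_idx = int(np.argmax(np.abs(eta) > 0.01))   # first exceedance of 1 cm
print(peak, t[arrival_idx])
```

Because the expensive propagation runs are done once, offline, the warning-time cost per event reduces to estimating the weights and summing the stored series.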

  13. Final Technical Report for Alternative Fuel Source Study-An Energy Efficient and Environmentally Friendly Approach

    Energy Technology Data Exchange (ETDEWEB)

    Zee, Ralph [Auburn University, AL (United States); Schindler, Anton [Auburn University, AL (United States); Duke, Steve [Auburn University, AL (United States); Burch, Thom [Auburn University, AL (United States); Bransby, David [Auburn University, AL (United States); Stafford, Don [Lafarge North America, Inc., Alpharetta, GA (United States)

    2010-08-31

    The objective of this project is to conduct research to determine the feasibility of using alternative fuel sources for the production of cement. Successful completion of this project will also benefit other highly energy-intensive commercial processes. During this report period, we completed all the subtasks in the preliminary survey. Literature searches focused on the types of alternative fuels currently used in the cement industry around the world. Information was obtained on the effects of particular alternative fuels on the clinker/cement product and on cement plant emissions. Federal regulations governing the use of waste fuels were examined. Information was also obtained about the trace elements likely to be found in alternative fuels, coal, and raw feeds, as well as the effects of various trace elements introduced into the system at the feed or fuel stage on the kiln process, the clinker/cement product, and concrete made from the cement. The experimental part of this project involves assessing the feasibility of using a variety of alternative materials, mainly commercial wastes, as substitutes for coal in an industrial cement kiln at Lafarge NA, and validating the experimental results with energy conversion considerations.

  14. A 3D modeling approach to complex faults with multi-source data

    Science.gov (United States)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in building an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault modeling workflow that can integrate multi-source data to construct fault models. For faults that cannot be modeled with these data, especially small-scale faults or those approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault-cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing fault points in poorly sampled areas not only makes fault model construction efficient, but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.

  15. Nonnegative Tensor Factorization Approach Applied to Fission Chamber’s Output Signals Blind Source Separation

    Science.gov (United States)

    Laassiri, M.; Hamzaoui, E.-M.; Cherkaoui El Moursli, R.

    2018-02-01

    Inside nuclear reactors, gamma-rays emitted from nuclei together with the neutrons introduce unwanted backgrounds into neutron spectra. For this reason, powerful extraction methods are needed to extract the useful neutron signal from the recorded mixture and thus obtain a clearer neutron flux spectrum. Several techniques have been developed to discriminate between neutrons and gamma-rays in a mixed radiation field. Most of these techniques rely on analogue discrimination methods; others use organic scintillators to achieve the discrimination task. Recently, systems based on digital signal processors have become commercially available to replace the analogue systems. As an alternative to these systems, we aim in this work to verify the feasibility of using Nonnegative Tensor Factorization (NTF) to blindly extract the neutron component from mixture signals recorded at the output of a fission chamber (WL-7657), which was simulated with Geant4 linked to Garfield++ using a 252Cf neutron source. To obtain the best possible neutron-gamma discrimination, we applied two different NTF algorithms, which were found to be the most suitable methods for analysing this kind of nuclear data.
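    The abstract does not give the two NTF algorithms in closed form. As a loose illustration of the underlying idea only, nonnegative factorization in the two-way (matrix) case can be sketched with the classic Lee-Seung multiplicative updates; all data below are synthetic, and the rank, sizes and iteration count are arbitrary choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic nonnegative "mixture" data: rows are recorded pulses,
# columns are time samples, built from two nonnegative components.
true_W = rng.random((30, 2))
true_H = rng.random((2, 40))
V = true_W @ true_H

# Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0.
rank = 2
W = rng.random((30, rank)) + 0.1
H = rng.random((rank, 40)) + 0.1
eps = 1e-9
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    With exact rank-2 data the relative reconstruction error falls to a small fraction of a percent, and the factors remain elementwise nonnegative by construction of the updates; genuine tensor (three-way) factorizations generalize this scheme to higher-order arrays.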

  16. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Science.gov (United States)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-11-01

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
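    Halotools' own API is documented at halotools.readthedocs.io. As a standalone sketch of the halo occupation distribution forms the package supports (this is not Halotools code, and the parameter values are hypothetical), the widely used erf-based central occupation and power-law satellite occupation can be written as:

```python
import math

def mean_ncen(log10_mass, log10_mmin=12.0, sigma_logm=0.25):
    """Mean number of central galaxies per halo: the standard
    erf-based HOD form, 0.5 * (1 + erf((logM - logMmin) / sigma))."""
    return 0.5 * (1.0 + math.erf((log10_mass - log10_mmin) / sigma_logm))

def mean_nsat(log10_mass, log10_m1=13.3, alpha=1.0, log10_m0=12.0):
    """Mean satellite count: a power law ((M - M0) / M1)**alpha above
    the cutoff mass M0, zero below it."""
    m, m0, m1 = 10**log10_mass, 10**log10_m0, 10**log10_m1
    return 0.0 if m <= m0 else ((m - m0) / m1) ** alpha
```

    By construction, mean_ncen equals 0.5 exactly at the characteristic mass log10_mmin and rises smoothly toward one for massive halos, while satellites only appear above the cutoff mass.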

  17. Noise Sources, Effects and Countermeasures in Narrowband Power-Line Communications Networks: A Practical Approach

    Directory of Open Access Journals (Sweden)

    Gregorio López

    2017-08-01

    Full Text Available The integration of Distributed Generation, Electric Vehicles, and storage without compromising the quality of the power delivery requires the deployment of a communications overlay that allows monitoring and controlling low voltage networks in almost real time. Power Line Communications are gaining momentum for this purpose since they present a great trade-off between economic and technical features. However, power lines also represent a harsh communications medium, presenting problems such as noise, which is itself affected by Distributed Generation, Electric Vehicles, and storage. This paper provides a comprehensive overview of the types of noise that affect Narrowband Power Line Communications, including normative noises, noises coming from common electronic devices measured in actual operational power distribution networks, and noises coming from photovoltaic inverters and electric vehicle charging spots measured in a controlled environment. The paper also reviews several techniques to mitigate the effects of noise, paying special attention to passive filtering, as it is one of the most widely used solutions to this kind of problem in the field. In addition, the paper presents a set of tests carried out to evaluate the impact of some representative noises on Narrowband Power Line Communications network performance, as well as the effectiveness of different passive filter configurations in mitigating that impact. Finally, the considered sources of noise can also bring value to further improving PLC communications in the new scenarios of the Smart Grid, as input to theoretical models or simulations.
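    As a back-of-envelope illustration of why a passive low-pass filter attenuates high-frequency switching noise while passing a narrowband carrier, consider a first-order RC stage (the component values below are hypothetical, not taken from the paper's filter configurations):

```python
import math

def rc_lowpass_gain_db(freq_hz, r_ohm, c_farad):
    """Magnitude response of a first-order RC low-pass filter,
    |H(f)| = 1 / sqrt(1 + (2*pi*f*R*C)^2), expressed in dB."""
    w = 2 * math.pi * freq_hz
    return 20 * math.log10(1 / math.sqrt(1 + (w * r_ohm * c_farad) ** 2))

# Hypothetical sizing: pass a CENELEC A-band carrier (~60 kHz) while
# attenuating switching noise around 1 MHz.
R, C = 50.0, 10e-9   # cutoff 1/(2*pi*R*C), roughly 318 kHz
```

    With this sizing the loss at 60 kHz stays well under 1 dB while noise at 1 MHz is attenuated by roughly 10 dB; real in-field filters cascade higher-order stages for steeper roll-off.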

  18. Approach and organisation of radiation sources safety and security of installations

    International Nuclear Information System (INIS)

    Al-Hilali, S.

    1998-01-01

    The application of techniques using radiation sources has expanded across all public domains. Although these techniques are intended for so-called peaceful uses, they must comply with safety regulations in order to ensure the safety of personnel, the public and the environment. The security system for installations adopted by CNESTEN is based on establishing administrative and technical protection of the installation against external or internal aggression on the one hand, and protection of the environment by confining radioactivity on the other. Application of the Methode Organisee et Systematique d'Analyse de Risque (MOSAR) to the installations of CNESTEN revealed weak points, making it possible to define barriers for prevention and the means necessary for managing accidental situations. At the first stage this approach was limited to qualitative considerations, adopting a macroscopic analysis of each installation. Experience gained from operating these installations, which started operation only a few months ago, will establish a real database indispensable for a complete risk analysis, including quantification of possible risks.

  19. Open source software in a practical approach for post processing of radiologic images.

    Science.gov (United States)

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

    The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet the basic requirements such as free availability, stand-alone application, presence of graphical user interface, ease of installation and advanced features beyond simple display monitor. Capabilities of data import, data export, metadata, 2D viewer, 3D viewer, support platform and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score higher than or equal to eight. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render; while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.

  20. A new approach to the method of source-sink potentials for molecular conduction

    Energy Technology Data Exchange (ETDEWEB)

    Pickup, Barry T., E-mail: B.T.Pickup@sheffield.ac.uk, E-mail: P.W.Fowler@sheffield.ac.uk; Fowler, Patrick W., E-mail: B.T.Pickup@sheffield.ac.uk, E-mail: P.W.Fowler@sheffield.ac.uk; Borg, Martha [Department of Chemistry, University of Sheffield, Sheffield S3 7HF (United Kingdom); Sciriha, Irene [Department of Mathematics, University of Malta, Msida (Malta)

    2015-11-21

    We re-derive the tight-binding source-sink potential (SSP) equations for ballistic conduction through conjugated molecular structures in a form that avoids singularities. This enables derivation of new results for families of molecular devices in terms of eigenvectors and eigenvalues of the adjacency matrix of the molecular graph. In particular, we define the transmission of electrons through individual molecular orbitals (MO) and through MO shells. We make explicit the behaviour of the total current and individual MO and shell currents at molecular eigenvalues. A rich variety of behaviour is found. A SSP device has specific insulation or conduction at an eigenvalue of the molecular graph (a root of the characteristic polynomial) according to the multiplicities of that value in the spectra of four defined device polynomials. Conduction near eigenvalues is dominated by the transmission curves of nearby shells. A shell may be inert or active. An inert shell does not conduct at any energy, not even at its own eigenvalue. Conduction may occur at the eigenvalue of an inert shell, but is then carried entirely by other shells. If a shell is active, it carries all conduction at its own eigenvalue. For bipartite molecular graphs (alternant molecules), orbital conduction properties are governed by a pairing theorem. Inertness of shells for families such as chains and rings is predicted by selection rules based on node counting and degeneracy.
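    The pairing theorem invoked for bipartite (alternant) molecular graphs states that adjacency eigenvalues occur in ±λ pairs. This is easy to check numerically on a small example; the 4-site chain below (the Hückel graph of a short conjugated chain) is illustrative and not taken from the paper:

```python
import numpy as np

# Adjacency matrix of a 4-vertex path graph (a short conjugated chain).
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = A[i + 1, i] = 1.0

eigs = np.sort(np.linalg.eigvalsh(A))
# Pairing theorem: for a bipartite graph the spectrum is symmetric
# about zero, i.e. eigs[k] == -eigs[-1 - k] for every k.
```

    For the path on n vertices the eigenvalues are 2 cos(k*pi/(n+1)), so here they are approximately ±1.618 and ±0.618, visibly paired about zero.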

  1. The technical approach: The IAEA action plan on the safety of radiation sources

    International Nuclear Information System (INIS)

    Bilbao, A.; Wrixon, A.; Ortiz-Lopez, P.

    2001-01-01

    As part of the measures to strengthen international co-operation in nuclear, radiation and waste safety, the report covers the implementation of the Action Plan for the Safety of Radiation Sources and the Security of Radioactive Materials. Starting with background information, the report recalls the main results of the Dijon Conference and of General Conference resolution GC(42)/RES/12 of September 1998, describing the actions taken by the Secretariat pursuant to that resolution, by the Board of Governors in its sessions of March and September 1999, and by the General Conference in October 1999, when by resolution GC(43)/RES/10 the Action Plan was endorsed and the Secretariat was urged to implement it. Finally, the report provides information on the status of implementation of the seven areas covered by the Action Plan and on suggested further actions for its implementation, taking into account the decisions of the Board at its meeting of 11 September 2000 and resolutions GC(44)/RES/11, GC(44)/RES/13 and GC(44)/RES/16 of the forty-fourth regular session of the General Conference. (author)

  2. Modeling the Galaxy-Halo Connection: An open-source approach with Halotools

    Science.gov (United States)

    Hearin, Andrew

    2016-03-01

    Although the modern form of galaxy-halo modeling has been in place for over ten years, there exists no common code base for carrying out large-scale structure calculations. Considering, for example, the advances in CMB science made possible by Boltzmann-solvers such as CMBFast, CAMB and CLASS, there are clear precedents for how theorists working in a well-defined subfield can mutually benefit from such a code base. Motivated by these and other examples, I present Halotools: an open-source, object-oriented python package for building and testing models of the galaxy-halo connection. Halotools is community-driven, and already includes contributions from over a dozen scientists spread across numerous universities. Designed with high-speed performance in mind, the package generates mock observations of synthetic galaxy populations with sufficient speed to conduct expansive MCMC likelihood analyses over a diverse and highly customizable set of models. The package includes an automated test suite and extensive web-hosted documentation and tutorials (halotools.readthedocs.org). I conclude the talk by describing how Halotools can be used to analyze existing datasets to obtain robust and novel constraints on galaxy evolution models, and by outlining the Halotools program to prepare the field of cosmology for the arrival of Stage IV dark energy experiments.

  3. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Energy Technology Data Exchange (ETDEWEB)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-10-18

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy–galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.

  4. Equivalent damage of loads on pavements

    CSIR Research Space (South Africa)

    Prozzi, JA

    2009-05-26

    Full Text Available This report describes a new methodology for the determination of Equivalent Damage Factors (EDFs) of vehicles with multiple axle and wheel configurations on pavements. The basic premise of this new procedure is that "equivalent pavement response...

  5. Investigation of Equivalent Circuit for PEMFC Assessment

    International Nuclear Information System (INIS)

    Myong, Kwang Jae

    2011-01-01

    Chemical reactions occurring in a PEMFC are dominated by the physical conditions and interface properties, and these reactions can be expressed in terms of impedance. The performance of a PEMFC can be simply diagnosed by examining its impedance, because the impedance characteristics can be expressed by an equivalent electrical circuit. In this study, the characteristics of a PEMFC are assessed using AC impedance and various equivalent circuits: a simple equivalent circuit, an equivalent circuit with a CPE, one with two RCs, and one with two CPEs. It was found that the characteristics of a PEMFC could be assessed using impedance and an equivalent circuit, and that the accuracy was highest for the equivalent circuit with two CPEs.
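    As a sketch of how such an equivalent circuit is evaluated, consider a single Randles-type stage in which a constant phase element (CPE) replaces the ideal capacitor; the impedance of a CPE is 1/(Q(jω)^α), and α = 1 recovers an ideal capacitor. The circuit topology and all parameter values below are hypothetical illustrations, not the fitted values from this study:

```python
import cmath

def pemfc_impedance(freq_hz, r_ohmic=0.01, r_ct=0.1, q=0.5, alpha=0.9):
    """Impedance of an R0 + (R1 || CPE) equivalent circuit.
    Z_CPE = 1 / (Q * (j*w)**alpha); alpha < 1 models the depressed
    semicircle commonly seen in fuel cell impedance spectra."""
    jw = 1j * 2 * cmath.pi * freq_hz
    z_cpe = 1 / (q * jw ** alpha)
    return r_ohmic + (r_ct * z_cpe) / (r_ct + z_cpe)
```

    Sweeping frequency reproduces the expected limits: at very low frequency the CPE blocks and Z approaches r_ohmic + r_ct, while at very high frequency the CPE shorts and Z approaches r_ohmic alone.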

  6. 46 CFR 175.540 - Equivalents.

    Science.gov (United States)

    2010-10-01

    ... Safety Management (ISM) Code (IMO Resolution A.741(18)) for the purpose of determining that an equivalent... Organization (IMO) “Code of Safety for High Speed Craft” as an equivalent to compliance with applicable...

  7. Disposal of disused sealed sources and approach for safety assessment of near surface disposal facilities (national practice of Ukraine)

    International Nuclear Information System (INIS)

    Alekseeva, Z.; Letuchy, A.; Tkachenko, N.V.

    2003-01-01

    The main sources of waste are 13 nuclear power plant units under operation at 4 NPP sites (operational wastes and spent sealed sources), the uranium-mining industry, the area of the Chernobyl exclusion zone contaminated as a result of the ChNPP accident, and over 8000 small users of sources of ionising radiation in different fields of scientific, medical and industrial application. The management of spent sources is based on technology dating from the early sixties. In accordance with this scheme, accepted sources are disposed of either in near surface concrete vaults or in borehole facilities of typical design: radioisotope devices and gamma units are placed into near surface vaults, and sealed sources in capsules into borehole repositories. The isotope content of the radwaste in the repositories is varied, including Co-60, Cs-137, Sr-90, Ir-192, Tl-204, Po-210, Ra-226, Pu-239, Am-241, H-3 and Cf-252. A new programme for waste management has been adopted. It envisions modifying the 'Radon' facilities for long-term storage, safety assessment, and relocation of the respective types of waste to 'Vector' repositories. The Vector Complex will be built on a site located within the exclusion zone, 10 km SW of the Chernobyl NPP. Two types of disposal facility are designed to be in operation at the Vector Complex: 1) near surface repositories for disposal of short lived LLRW and ILRW in reinforced concrete containers, provided with multi-layer waterproofing barriers (a concrete slab on a layer composed of a mixture of sand and clay), with every layer of radwaste covered with a 1 cm clay layer following disposal; 2) repositories for disposal of bulky radioactive waste without cans into concrete vaults. Approaches to safety assessment are discussed.
Safety criteria for waste disposal in near surface repositories are established in the Radiation Protection Standards (NRBU-97) and the Addendum 'Radiation protection against sources of potential exposure

  8. A new concept of equivalent homogenization method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Jin; Pogoskekyan, Leonid; Kim, Young Il; Ju, Hyung Kook; Chang, Moon Hee [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-07-01

    A new concept of equivalent homogenization is proposed. The concept employs a new set of homogenized parameters: homogenized cross sections (XS) and an interface matrix (IM), which relates partial currents at the cell interfaces. The idea of the interface matrix generalizes the idea of discontinuity factors (DFs), proposed and developed by K. Koebke and K. Smith. The offered concept covers both of theirs; either can be simulated within the framework of the new concept. It also covers the Siemens KWU approach for baffle/reflector simulation, where the equivalent homogenized reflector XS are derived from conservation of the response matrix at the interface in 1D semi-infinite slab geometry. The IM and XS of the new concept satisfy the same assumption of response matrix conservation in 1D semi-infinite slab geometry. It is expected that the new concept provides a more accurate approximation of the heterogeneous cell, especially in the case of steep flux gradients at the cell interfaces. The attractive features of the new concept are improved accuracy, simplicity of incorporation into existing codes, and numerical expense equal to that of K. Smith's approach. The new concept is useful for: (a) explicit reflector/baffle simulation; (b) control blade simulation; (c) mixed UO{sub 2}/MOX core simulation. The offered model has been incorporated into a finite difference code and into the nodal code PANDOX. The numerical results show good accuracy of core calculations and insensitivity of the homogenized parameters with respect to in-core conditions. 9 figs., 7 refs. (Author).

  9. Some spectral equivalences between Schroedinger operators

    International Nuclear Information System (INIS)

    Dunning, C; Hibberd, K E; Links, J

    2008-01-01

    Spectral equivalences of the quasi-exactly solvable sectors of two classes of Schroedinger operators are established using Gaudin-type Bethe ansatz equations. In some instances the results can be extended, leading to full isospectrality. In this manner we obtain equivalences between PT-symmetric problems and Hermitian problems. We also find equivalences between some classes of Hermitian operators.

  10. The definition of the individual dose equivalent

    International Nuclear Information System (INIS)

    Ehrlich, Margarete

    1986-01-01

    A brief note examines the choice of the present definition of the individual dose equivalent, the new operational dosimetry quantity for external exposure. The consequences of using the individual dose equivalent, and the problems with the quantity as currently defined, are briefly discussed. (UK)

  11. The Sources of Science Teaching Self-efficacy among Elementary School Teachers: A mediational model approach

    Science.gov (United States)

    Wang, Ya-Ling; Tsai, Chin-Chung; Wei, Shih-Hsuan

    2015-09-01

    This study aimed to investigate the factors accounting for science teaching self-efficacy and to examine the relationships among Taiwanese teachers' science teaching self-efficacy, teaching and learning conceptions, technological-pedagogical content knowledge for the Internet (TPACK-I), and attitudes toward Internet-based instruction (Attitudes) using a mediational model approach. A total of 233 science teachers from 41 elementary schools in Taiwan were invited to take part in the study. Each measure was found to have satisfactory validity and reliability. Furthermore, the mediational models revealed that TPACK-I and Attitudes mediated the relationship between teaching and learning conceptions and science teaching self-efficacy, suggesting that (1) knowledge of and attitudes toward Internet-based instruction (KATII) mediated the positive relationship between constructivist conceptions of teaching and learning and outcome expectancy, and that (2) KATII mediated the negative correlations between traditional conceptions of teaching and learning and teaching efficacy.
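    As a generic sketch of the regression logic behind a simple mediational model (not the authors' actual analysis, which involves several latent constructs and multiple mediators), the indirect effect of a predictor X on an outcome Y through a mediator M is the product of the X→M path a and the M→Y path b, each estimated by ordinary least squares; the data and effect sizes below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)             # true a = 0.6
y = 0.7 * m + 0.2 * x + rng.normal(scale=0.5, size=n)   # true b = 0.7

def ols(design, target):
    """OLS slopes (intercept dropped) for target on the design columns."""
    X = np.column_stack([np.ones(len(target))] + list(design))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[1:]

a = ols([x], m)[0]        # path X -> M
b = ols([x, m], y)[1]     # path M -> Y, controlling for X
indirect = a * b          # mediated (indirect) effect, true value 0.42
```

    With this sample size the estimated paths recover the generating values closely; in practice the significance of the indirect effect would be assessed with bootstrap confidence intervals.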

  12. Source Apportionment and Risk Assessment of Emerging Contaminants: An Approach of Pharmaco-Signature in Water Systems

    Science.gov (United States)

    Jiang, Jheng Jie; Lee, Chon Lin; Fang, Meng Der; Boyd, Kenneth G.; Gibb, Stuart W.

    2015-01-01

    This paper presents a methodology based on multivariate data analysis for characterizing potential source contributions of emerging contaminants (ECs) detected in 26 river water samples across multi-scape regions during dry and wet seasons. Based on this methodology, we unveil an approach to quantifying potential source contributions of ECs, a concept we refer to as the “Pharmaco-signature.” Exploratory analysis of the data was carried out by unsupervised pattern recognition (hierarchical cluster analysis, HCA) and a receptor model (principal component analysis-multiple linear regression, PCA-MLR) to identify significant source contributions of ECs in different land-use zones. Robust cluster solutions grouped the database according to different EC profiles. PCA-MLR identified that 58.9% of the mean summed ECs were contributed by domestic impact, 9.7% by antibiotics application, and 31.4% by drug abuse. Diclofenac, ibuprofen, codeine, ampicillin, tetracycline, and erythromycin-H2O have significant pollution risk quotients (RQ>1), indicating potentially high risk to aquatic organisms in Taiwan. PMID:25874375
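    A minimal sketch of the PCA-MLR receptor-modeling steps on synthetic data (the study used measured EC concentrations; the dimensions, factor structure and noise level below are invented for illustration): PCA reduces the contaminant matrix to a few component scores, and MLR then regresses the summed concentrations on those scores.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 2 hypothetical sources mixing into 6 measured
# contaminants across 50 water samples.
S = rng.random((50, 2))            # unobserved source strengths
profiles = rng.random((2, 6))      # unobserved source profiles
X = S @ profiles + 0.01 * rng.random((50, 6))

# PCA via SVD on the standardized concentration matrix.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * s[:2]          # retain two components

# MLR: regress summed concentrations on the component scores.
y = X.sum(axis=1)
A = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

    Because the synthetic data are genuinely two-factor, the two retained components explain almost all of the summed-concentration variance; in the published method the regression coefficients are further converted into percentage source contributions.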

  13. Principle of natural and artificial radioactive series equivalency

    International Nuclear Information System (INIS)

    Vasilyeva, A.N.; Starkov, O.V.

    2001-01-01

    This paper considers one approach used in developing a radioactive waste management concept. The approach is based on the principle of radiotoxic equivalency between natural and artificial radioactive series. The radioactivity of the natural and artificial radioactive series has been calculated over a 10^9-year period, and a toxicity evaluation for the natural and artificial series has also been made. The correlation between the natural radioactive series and their predecessors, the actinides produced in thermal and fast reactors, has been considered. It is shown that the systematized reactor series data are of great scientific significance, and that the principle of differential calculation of radiotoxicity is necessary to realize the concept of radiotoxic equivalency between long-lived radioactive waste and uranium and thorium ores. The calculations show that the equivalency principle can be fulfilled for the uranium series (4n+2, 4n+1). It is problematic for the thorium series, and impracticable for the neptunium series. (author)

  14. A simplified approach to estimating reference source terms for LWR designs

    International Nuclear Information System (INIS)

    1999-12-01

    systems. The publication of this IAEA technical document represents the conclusion of a task, initiated in 1996, devoted to the estimation of the radioactive source term in nuclear reactors. It focuses mainly on light water reactors (LWRs)

  15. Open source approaches to establishing Roseobacter clade bacteria as synthetic biology chassis for biogeoengineering

    Directory of Open Access Journals (Sweden)

    Yanika Borg

    2016-07-01

    Full Text Available Aim. The nascent field of bio-geoengineering stands to benefit from synthetic biologists' efforts to standardise, and in so doing democratise, biomolecular research methods. Roseobacter clade bacteria comprise 15–20% of oceanic bacterioplankton communities, making them a prime candidate for establishment as synthetic biology chassis for bio-geoengineering activities such as bioremediation of oceanic waste plastic. Developments such as the increasing affordability of DNA synthesis and laboratory automation continue to foster the establishment of a global 'do-it-yourself' research community alongside the more traditional arenas of academe and industry. As a collaborative group of citizen, student and professional scientists we sought to test the following hypotheses: (i) that an incubator capable of cultivating bacterial cells can be constructed entirely from non-laboratory items, (ii) that marine bacteria from the Roseobacter clade can be established as a genetically tractable synthetic biology chassis using plasmids conforming to the BioBrick™ standard and, finally, (iii) that identifying and subcloning genes from a Roseobacter clade species can readily be achieved by citizen scientists using open source cloning and bioinformatic tools. Method. We cultivated three Roseobacter species: Roseobacter denitrificans, Oceanobulbus indolifex and Dinoroseobacter shibae. For each species we measured chloramphenicol sensitivity and viability over 11 weeks of glycerol-based cryopreservation, and tested the effectiveness of a series of electroporation and heat shock protocols for transformation using a variety of plasmid types. We also attempted construction of an incubator-shaker device using only publicly available components. Finally, a subgroup comprising citizen scientists designed and attempted a procedure for isolating the cold resistance anf1 gene from Oceanobulbus indolifex cells and subcloning it into a BioBrick™-formatted plasmid. Results. All

  16. CIS-based registration of quality of life in a single source approach.

    Science.gov (United States)

    Fritz, Fleur; Ständer, Sonja; Breil, Bernhard; Riek, Markus; Dugas, Martin

    2011-04-21

    Documenting quality of life (QoL) in routine medical care and using it both for treatment and for clinical research is not common, although such information is extremely valuable for physicians and patients alike. We therefore aimed at developing an efficient method to integrate quality of life information into the clinical information system (CIS) and thus make it available for clinical care and secondary use. We piloted our method in three different medical departments, using five different QoL questionnaires. In this setting we used structured interviews and on-site observations to perform workflow and form analyses. The forms and pertinent data reports were implemented using the integrated tools of the local CIS. A web-based application for mobile devices was developed based on XML schemata to facilitate data import into the CIS. Data exports from the CIS were analysed with statistical software to assess data quality. The quality of life questionnaires are now regularly completed by patients and physicians. The resulting data are available in the Electronic Health Record (EHR) and can be used for treatment purposes and communication as well as for research. The completion of questionnaires by the patients themselves using a mobile device (iPad) and the import of the respective data into CIS forms were successfully tested in a pilot installation. Data quality is kept high by the use of automatic score calculations as well as the automatic creation of forms for follow-up documentation. The QoL data were exported to research databases for use in scientific analysis. CIS-based QoL documentation is technically feasible, clinically accepted and provides an excellent quality of data for medical treatment and clinical research. Our approach with a commercial CIS and the web-based application is transferable to other sites.

  17. A variational approach to liver segmentation using statistics from multiple sources

    Science.gov (United States)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and in therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first, respectively. A segmentation energy function is proposed by combining a statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the shape of the liver is estimated by minimizing this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, the 3D-IRCADb and the SLIVER07. Among all the tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 +/- 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 +/- 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 +/- 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 +/- 0.5 mm and the best RMSD of 1.5 +/- 1.1 mm on the SLIVER07 dataset, respectively. The results of the quantitative comparison show that the proposed liver segmentation method achieves segmentation performance competitive with state-of-the-art techniques.
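    The data term of the Chan-Vese model fits a piecewise-constant image with two region means c1 and c2. A deliberately simplified sketch, dropping the level-set evolution and the length/curvature regularization used in the paper, alternates mean updates with pixel reassignment on a synthetic slice; the geometry and intensities below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "scan" slice: a bright disc (organ) on a darker background.
yy, xx = np.mgrid[0:64, 0:64]
img = 0.2 + 0.6 * (((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2)
img = img + rng.normal(scale=0.05, size=img.shape)

# Two-phase piecewise-constant fitting (the Chan-Vese data term only):
# alternately update the region means and reassign each pixel to the
# nearer mean.
mask = img > img.mean()
for _ in range(10):
    c1, c2 = img[mask].mean(), img[~mask].mean()
    mask = (img - c1) ** 2 < (img - c2) ** 2

inside_frac = mask.mean()
```

    On this clean example the recovered foreground fraction matches the disc's area; the full model adds a contour-length penalty so that the boundary stays smooth under noise, which this caricature omits.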

  18. The Potential of Open Source Information in Supporting Acquisition Pathway Analysis to Design IAEA State Level Approaches

    International Nuclear Information System (INIS)

    Renda, G.; Kim, L.; Jungwirth, R.; Pabian, F.; Wolfart, E.; Cojazzi, G.G.M.; )

    2015-01-01

    International Atomic Energy Agency (IAEA) safeguards designed to deter nuclear proliferation are constantly evolving to respond to new challenges. Within its State Level Concept, the IAEA envisions an objective-based and information-driven approach to designing and implementing State Level Approaches (SLAs), using all available measures to improve the effectiveness and efficiency of safeguards. The main objectives of an SLA are (a) to detect undeclared nuclear material or activities in the State, (b) to detect undeclared production or processing of nuclear material in declared facilities or locations outside facilities (LOFs), and (c) to detect diversion of declared nuclear material in declared facilities or LOFs. Under the SLA, States will be differentiated based upon objective State-Specific Factors that influence the design, planning, conduct and evaluation of safeguards activities. Proposed categories of factors include both technical and legal aspects, ranging from the deployed fuel cycle and the State's related technical capability to the type of safeguards agreements in force and the IAEA's experience in implementing safeguards in that State. To design an SLA, the IAEA foresees the use of Acquisition Path Analysis (APA) to identify the plausible routes for acquiring weapons-usable material and to assess their safeguards significance. To achieve this goal, APA will have to identify possible acquisition paths, characterize them and eventually prioritize them. This paper provides an overview of how the use of open source information (here loosely defined as any type of non-classified, non-proprietary information, including, but not limited to, media sources, government and non-governmental reports and analyses, commercial data, satellite imagery, scientific/technical literature and trade data) can support the various aspects of a typical APA approach. (author)

  19. Course design via Equivalency Theory supports equivalent student grades and satisfaction in online and face-to-face psychology classes

    Directory of Open Access Journals (Sweden)

    David Garratt-Reed

    2016-05-01

    Full Text Available There has been a recent rapid growth in the number of psychology courses offered online through institutions of higher education. The American Psychological Association (APA) has highlighted the importance of ensuring the effectiveness of online psychology courses. Despite this, there have been inconsistent findings regarding student grades, satisfaction, and retention in online psychology units. Equivalency Theory posits that online and classroom-based learners will attain equivalent learning outcomes when equivalent learning experiences are provided. We present a case study of an online introductory psychology unit designed to provide equivalent learning experiences to the pre-existing face-to-face version of the unit. Academic performance, student feedback, and retention data from 866 Australian undergraduate psychology students were examined to assess whether the online unit produced outcomes comparable to the ‘traditional’ unit delivered face-to-face. Student grades did not differ significantly between modes of delivery, except for a group-work-based assessment on which online students performed more poorly. Student satisfaction was generally high in both modes of the unit, with group work the key source of dissatisfaction in the online unit. The results provide partial support for Equivalency Theory. The group-work-based assessment did not provide an equivalent learning experience for students in the online unit, highlighting the need for further research to determine effective methods of engaging students in online group activities. Consistent with previous research, retention rates were significantly lower in the online unit, indicating the need to develop effective strategies to increase online retention rates. While this study demonstrates successes in presenting online students with an equivalent learning experience, we recommend that future research investigate means of successfully facilitating collaborative group-work assessment.

  20. Source identification of heavy metals in peri-urban agricultural soils of southeast China: An integrated approach.

    Science.gov (United States)

    Hu, Wenyou; Wang, Huifeng; Dong, Lurui; Huang, Biao; Borggaard, Ole K; Bruun Hansen, Hans Christian; He, Yue; Holm, Peter E

    2018-06-01

    Intensive human activities, in particular agricultural and industrial production, have led to heavy metal accumulation in the peri-urban agricultural soils of China, threatening soil environmental quality and agricultural product security. A combination of spatial analysis (SA), Pb isotope ratio analysis (IRA), input flux analysis (IFA), and a positive matrix factorization (PMF) model was successfully used to assess the status and sources of heavy metals in typical peri-urban agricultural soils from a rapidly developing region of China. Mean concentrations of Cd, As, Hg, Pb, Cu, Zn and Cr in surface soils (0-20 cm) were 0.31, 11.2, 0.08, 35.6, 44.8, 119.0 and 97.0 mg kg⁻¹, respectively, exceeding the local background levels except for Hg. The spatial distribution of heavy metals revealed that agricultural activities have a significant influence on heavy metal accumulation in the surface soils. Isotope ratio analysis suggested that fertilization along with atmospheric deposition were the major sources of heavy metal accumulation in the soils. Based on the PMF model, the relative contributions of the heavy metals due to fertilizer application, atmospheric deposition, industrial emission, and soil parent materials were 30.8%, 33.0%, 25.4% and 10.8%, respectively, demonstrating that anthropogenic activities contributed significantly more than natural sources. This study provides a reliable and robust approach for heavy metal source apportionment in this particular peri-urban area, with a clear potential for future application in other regions. Copyright © 2018 Elsevier Ltd. All rights reserved.
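
    Positive matrix factorization decomposes the sample-by-metal concentration matrix X into non-negative source contributions G and source profiles F (X ≈ GF). The sketch below uses plain Lee-Seung multiplicative updates as a simplified stand-in for PMF, which additionally weights residuals by per-measurement uncertainties; the two "source profiles" are made-up numbers for the synthetic check.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf_apportion(X, k, iters=500):
    """Factor X (samples x metals) into non-negative contributions G and
    profiles F so that X ~= G @ F, via Lee-Seung multiplicative updates.
    A simplified stand-in for PMF (no uncertainty weighting)."""
    n, m = X.shape
    G = rng.random((n, k)) + 0.1
    F = rng.random((k, m)) + 0.1
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Synthetic check: two hypothetical source profiles mixed in known proportions
F_true = np.array([[1.0, 0.2, 0.0],    # e.g. a "fertilizer-like" profile
                   [0.0, 0.5, 1.0]])   # e.g. a "deposition-like" profile
G_true = rng.random((40, 2))
X = G_true @ F_true
G, F = nmf_apportion(X, k=2)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```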

  1. Loss distribution approach for operational risk capital modelling under Basel II: Combining different data sources for risk estimation

    Directory of Open Access Journals (Sweden)

    Pavel V. Shevchenko

    2013-07-01

    Full Text Available The management of operational risk in the banking industry has undergone significant changes over the last decade due to substantial changes in the operational risk environment. Globalization, deregulation, the use of complex financial products and changes in information technology have resulted in exposure to new risks very different from market and credit risks. In response, the Basel Committee on Banking Supervision has developed a regulatory framework, referred to as Basel II, that introduced an operational risk category and corresponding capital requirements. Over the past five years, major banks in most parts of the world have received accreditation under the Basel II Advanced Measurement Approach (AMA) by adopting the loss distribution approach (LDA), despite there being a number of unresolved methodological challenges in its implementation. Different approaches and methods are still under debate. In this paper, we review methods proposed in the literature for combining different data sources (internal data, external data and scenario analysis), which is one of the regulatory requirements for the AMA.
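
    At its core, the LDA models annual operational loss as a compound distribution: a frequency distribution for the number of loss events and a severity distribution for their sizes, with capital read off as a high quantile (99.9% under Basel II) of the simulated annual loss. A minimal Monte Carlo sketch with illustrative, uncalibrated Poisson/lognormal parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def lda_capital(lam=20.0, mu=10.0, sigma=2.0, n_years=50_000, q=0.999):
    """Compound Poisson-lognormal annual loss; capital ~ 99.9% VaR.

    lam: expected loss events per year; mu, sigma: lognormal severity
    parameters. All parameter values are illustrative, not calibrated."""
    counts = rng.poisson(lam, n_years)
    annual = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])
    return np.quantile(annual, q), annual.mean()

capital, expected_loss = lda_capital()
```

In practice the internal, external and scenario data the paper discusses would all feed the calibration of the frequency and severity parameters.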

  2. Accounting for the Effects of Surface BRDF on Satellite Cloud and Trace-Gas Retrievals: A New Approach Based on Geometry-Dependent Lambertian-Equivalent Reflectivity Applied to OMI Algorithms

    Science.gov (United States)

    Vasilkov, Alexander; Qin, Wenhan; Krotkov, Nickolay; Lamsal, Lok; Spurr, Robert; Haffner, David; Joiner, Joanna; Yang, Eun-Su; Marchenko, Sergey

    2017-01-01

    Most satellite nadir ultraviolet and visible cloud, aerosol, and trace-gas algorithms make use of climatological surface reflectivity databases. For example, cloud and NO2 retrievals for the Ozone Monitoring Instrument (OMI) use monthly gridded surface reflectivity climatologies that do not depend upon the observation geometry. In reality, reflection of incoming direct and diffuse solar light from land or ocean surfaces is sensitive to the sun-sensor geometry. This dependence is described by the bidirectional reflectance distribution function (BRDF). To account for the BRDF, we propose to use a new concept of geometry-dependent Lambertian equivalent reflectivity (LER). Implementation within the existing OMI cloud and NO2 retrieval infrastructure requires changes only to the input surface reflectivity database. The geometry-dependent LER is calculated using a vector radiative transfer model with high spatial resolution BRDF information from the Moderate Resolution Imaging Spectroradiometer (MODIS) over land and the Cox-Munk slope distribution over ocean with a contribution from water-leaving radiance. We compare the geometry-dependent and climatological LERs for two wavelengths, 354 and 466 nm, that are used in OMI cloud algorithms to derive cloud fractions. A detailed comparison of the cloud fractions and pressures derived with climatological and geometry-dependent LERs is carried out. Geometry-dependent LER and corresponding retrieved cloud products are then used as inputs to our OMI NO2 algorithm. We find that replacing the climatological OMI-based LERs with geometry-dependent LERs can increase NO2 vertical columns by up to 50% in highly polluted areas; the differences include both BRDF effects and biases between the MODIS and OMI-based surface reflectance data sets. Only minor changes to NO2 columns (within 5%) are found over unpolluted and overcast areas.
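
    The LER concept itself is compact: for a Lambertian surface, TOA radiance is path radiance plus a transmitted surface term with a spherical-albedo correction for multiple surface-atmosphere reflections, and the geometry-dependent LER is the reflectivity R that reproduces the BRDF-computed radiance at the given sun-sensor geometry. A toy inversion under that standard independent-pixel form (the coefficients are illustrative, not OMI values):

```python
def toa_radiance(R, I0, T, Sb):
    """Lambertian TOA radiance: path radiance I0 plus surface term T*R,
    with multiple reflections via the atmospheric spherical albedo Sb."""
    return I0 + T * R / (1.0 - Sb * R)

def lambertian_equivalent_reflectivity(I, I0, T, Sb):
    """Invert toa_radiance for R: the reflectivity a Lambertian surface
    would need to produce the observed (here, BRDF-modelled) radiance I."""
    d = I - I0
    return d / (T + Sb * d)

# Round trip with illustrative atmospheric coefficients
I = toa_radiance(0.3, I0=0.05, T=0.6, Sb=0.1)
R = lambertian_equivalent_reflectivity(I, I0=0.05, T=0.6, Sb=0.1)  # -> 0.3
```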

  4. Seeing through Symbols: The Case of Equivalent Expressions.

    Science.gov (United States)

    Kieran, Carolyn; Sfard, Anna

    1999-01-01

    Presents a teaching experiment to turn students from external observers into active participants in a game of algebra learning where students use graphs to build meaning for equivalence of algebraic expressions. Concludes that the graphic-functional approach seems to make the introduction to algebra much more meaningful for the learner. (ASK)

  5. Equivalent Circuit Modeling of a Rotary Piezoelectric Motor

    DEFF Research Database (Denmark)

    El, Ghouti N.; Helbo, Jan

    2000-01-01

    In this paper, an enhanced equivalent circuit model of a rotary traveling wave piezoelectric ultrasonic motor "shinsei type USR60" is derived. The modeling is performed on the basis of an empirical approach combined with the electrical network method and some simplification assumptions about the ...

  6. Simulation Study of Near-Surface Coupling of Nuclear Devices vs. Equivalent High-Explosive Charges

    Energy Technology Data Exchange (ETDEWEB)

    Fournier, Kevin B [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walton, Otis R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benjamin, Russ [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunlop, William H [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-29

    A computational study was performed to examine the differences in near-surface ground waves and air-blast waves generated by high-explosive energy sources and those generated by much higher energy-density, low-yield nuclear sources. The study examined the effect of explosive-source emplacement (i.e., height-of-burst, HOB, or depth-of-burial, DOB) over a range from depths of -35 m to heights of 20 m, for explosions with an explosive yield of 1 kt. The chemical explosive was modeled by a JWL equation-of-state model for a ~14 m diameter sphere of ANFO (~1,200,000 kg, 1 kt equivalent yield), and the high-energy-density source was modeled as a one tonne (1000 kg) plasma of 'iron gas' (utilizing LLNL's tabular equation-of-state database, LEOS) in a 2 m diameter sphere, with a total internal-energy content equivalent to 1 kt. A consistent equivalent-yield coupling-factor approach was developed to compare the behavior of the two sources. The results indicate that the equivalent-yield coupling factor for air blasts from 1 kt ANFO explosions varies monotonically and continuously from a nearly perfect reflected wave off of the ground surface at HOB ≈ 20 m to a coupling factor of nearly zero at DOB ≈ -25 m. The nuclear air-blast coupling curve, on the other hand, remained nearly equal to a perfectly reflected wave all the way down to HOBs very near zero, and then quickly dropped to a value near zero for explosions with a DOB ≈ -10 m. The near-surface ground wave traveling horizontally out from the explosive source region to distances of hundreds of meters exhibited equivalent-yield coupling factors that varied nearly linearly with HOB/DOB for the simulated ANFO explosive source, going from a value near zero at HOB ≈ 5 m to nearly one at DOB ≈ -25 m. The nuclear-source generated near-surface ground-wave coupling factor remained near zero for almost all HOBs greater than zero, and then appeared to vary nearly linearly with depth
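
    The equivalent-yield coupling factor can be read as a multiplier on source yield: the coupled yield that reproduces the observed wave amplitude divided by the actual yield. A sketch of the nearly linear ANFO ground-wave trend quoted above, interpolating between the reported endpoints (only the endpoints come from the abstract; the linear interpolation itself is the abstract's qualitative description, coded as an assumption):

```python
import numpy as np

def anfo_ground_wave_cf(hob_m):
    """Equivalent-yield coupling factor vs. emplacement height in meters
    (negative = depth of burial). Reported endpoints: CF ~ 0 at HOB ~ +5 m,
    CF ~ 1 at DOB ~ -25 m, varying nearly linearly in between."""
    return float(np.interp(hob_m, [-25.0, 5.0], [1.0, 0.0]))

def coupled_yield_kt(actual_yield_kt, hob_m):
    """Effective (coupled) yield driving the near-surface ground wave."""
    return actual_yield_kt * anfo_ground_wave_cf(hob_m)

cf_halfway = anfo_ground_wave_cf(-10.0)  # halfway between endpoints -> 0.5
```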

  7. Assessment of Potential Nitrate Pollution Sources in the Marano Lagoon (Italy) and its Catchment Area Using a Multi Isotope Approach

    Energy Technology Data Exchange (ETDEWEB)

    Saccon, P.; Leis, A. [Joanneum Research Forschungsgesellschaft mbH, Institute for Water, Energy and Sustainability, Graz (Austria); Marca, A.; Kaiser, J.; Campisi, L. [School of Environmental Sciences, University of East Anglia, Norwich (United Kingdom); Savarino, J.; Erbland, J. [UJF-Grenoble 1/CNRS-INSU, Laboratoire de Glaciologie et Geophysique de l' Environnement, St.-Martin-d' Heres (France); Boettcher, M. E. [Leibniz Institute for Baltic Sea Research, Geochemistry and Isotope Geochemistry Group, Marine Geology Section, Warnemuende (Germany); Eisenhauer, A. [IFM-GEOMAR, Kiel (Germany); Sueltenfuss, J. [University of Bremen, Institute of Environmental Physics, Section of Oceanography, Bremen (Germany)

    2013-07-15

    The aims of this study were mainly: (i) the identification and differentiation of the main anthropogenic nitrogen sources in the Marano Lagoon (Italy) and its catchment area; and (ii) the assessment of the intra-lagoonal water circulation, the morphological development of the lagoon and its anthropogenic pressure by applying a combined approach of hydrochemical, isotopic and remote sensing techniques. To achieve the aforementioned targets, analyses of the stable isotope signatures of nitrate, boron, water and sulphate have been used. Moreover, the residence times of groundwater were determined by the tritium-helium dating method. To characterize the chemical composition of the different water types, the concentrations of the major ions and nutrients as well as the physicochemical parameters have been measured. Remote sensing techniques have been applied to assess the spatial distribution of the most superficial algal flora and water temperature, as well as the key environmental and morphological changes of the lagoon since the beginning of the 1970s. (author)

  8. Assessment of Potential Nitrate Pollution Sources in the Marano Lagoon (Italy) and its Catchment Area Using a Multi Isotope Approach

    International Nuclear Information System (INIS)

    Saccon, P.; Leis, A.; Marca, A.; Kaiser, J.; Campisi, L.; Savarino, J.; Erbland, J.; Boettcher, M.E.; Eisenhauer, A.; Sueltenfuss, J.

    2011-01-01

    The aims of this study were mainly: (i) the identification and differentiation of the main anthropogenic nitrogen sources in the Marano Lagoon (Italy) and its catchment area; and (ii) the assessment of the intra-lagoonal water circulation, the morphological development of the lagoon and its anthropogenic pressure by applying a combined approach of hydrochemical, isotopic and remote sensing techniques. To achieve the aforementioned targets, analyses of stable isotope signatures of nitrate, boron, water and sulphate have been used. Moreover, the residence times of groundwater were determined by the tritium-helium dating method. To characterize the chemical composition of the different water types, the concentrations of the major ions and nutrients as well as the physicochemical parameters have been measured. Remote sensing techniques have been applied to assess the spatial distribution of the most superficial algal flora and water temperature, as well as the key environmental and morphological changes of the lagoon since the beginning of the 1970s.

  9. New approaches for the analysis of audience use: patterns and value creation sources for the financial news companies

    Directory of Open Access Journals (Sweden)

    Alfonso Vara Miguel

    2011-12-01

    Full Text Available The new post-neoclassical economic approaches developed by Behavioural Economics and New Institutional Economics opened the door for non-economic disciplines to explain the real economic behavior of agents (individuals, organizations, companies and government). Within these disciplines, economists began to integrate the media and media theory into economic theory. This new research approach for Media Economics and Media Management is significant because it focuses on the social function of the communication industry from an economic perspective. It also represents an opportunity for media scholars to advance the understanding of the functions of the media in a given economic system and its development, the audience's use of economic information and media, and the sources of value creation for these media companies. From this new perspective, four functions and sources of value creation for economic media companies were analyzed: (i) the diffusion of information and meanings that promotes people's participation in the economic and financial systems; (ii) the impact on information asymmetry and transaction costs in the markets; (iii) the enhancement of coordination between economic agents that allows the implementation of policies for economic development; (iv) the creation of ideological platforms of debate that contribute to the diffusion and social validation of economic ideas enabling the coordination mentioned above. The case study of Expansión shows how this business newspaper performed the first two functions to become the leader of its segment for more than 20 years, but could not fulfil the other two. Therefore, it was not able to become a public reference and authority like the general-information media, as happened with other economic newspapers in Europe or the Wall Street Journal in the United States.

  10. A Community Standard: Equivalency of Healthcare in Australian Immigration Detention.

    Science.gov (United States)

    Essex, Ryan

    2017-08-01

    The Australian government has long maintained that the standard of healthcare provided in its immigration detention centres is broadly comparable with health services available within the Australian community. Drawing on the literature from prison healthcare, this article examines (1) whether the principle of equivalency is being applied in Australian immigration detention and (2) whether this standard of care is achievable given Australia's current policies. This article argues that the principle of equivalency is not being applied and that this standard of health and healthcare will remain unachievable in Australian immigration detention without significant reform. Alternate approaches to addressing the well documented issues related to health and healthcare in Australian immigration detention are discussed.

  11. The Public Market Equivalent and Private Equity Performance

    DEFF Research Database (Denmark)

    Sørensen, Morten; Jagannathan, Ravi

    2015-01-01

    The authors show that the public market equivalent approach is equivalent to assessing the performance of private equity (PE) investments using Rubinstein's dynamic version of the CAPM. They developed two insights: (1) one need not compute betas of PE investments, and any changes in PE cash-flow betas due to changes in financial leverage, operating leverage, or the nature of the business are automatically taken into account; (2) the public market index used in evaluations should be the one that best approximates the wealth portfolio of the investor considering the PE investment opportunity.
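
    The public market equivalent itself is simple to compute: discount the fund's distributions and contributions by the realized growth of the public index and take their ratio (the Kaplan-Schoar form). A minimal sketch with made-up cash flows:

```python
def kaplan_schoar_pme(cash_flows, index_levels):
    """cash_flows[t] < 0 are contributions, > 0 are distributions
    (including any final NAV); index_levels[t] is the public index at
    the same dates. PME > 1 means the PE investment beat the index."""
    final = index_levels[-1]
    dist = sum(cf * final / idx
               for cf, idx in zip(cash_flows, index_levels) if cf > 0)
    contrib = sum(-cf * final / idx
                  for cf, idx in zip(cash_flows, index_levels) if cf < 0)
    return dist / contrib

# Invest 100, receive 150 two periods later while the index rose 20%
pme = kaplan_schoar_pme([-100.0, 0.0, 150.0], [100.0, 110.0, 120.0])  # -> 1.25
```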

  12. New equivalent lumped electrical circuit for piezoelectric transformers.

    Science.gov (United States)

    Gonnard, Paul; Schmitt, P M; Brissaud, Michel

    2006-04-01

    A new equivalent circuit is proposed for a contour-vibration-mode piezoelectric transformer (PT). It is shown that the usual lumped equivalent circuit derived from the conventional Mason approach is not accurate. The proposed circuit, built on experimental measurements, makes an explicit difference between the elastic energies stored respectively on the primary and secondary parts. The experimental and theoretical resonance frequencies with the secondary in open or short circuit are in good agreement as well as the output "voltage-current" characteristic and the optimum efficiency working point. This circuit can be extended to various PT configurations and appears to be a useful tool for modeling electronic devices that integrate piezoelectric transformers.

  13. Verification of an effective dose equivalent model for neutrons

    International Nuclear Information System (INIS)

    Tanner, J.E.; Piper, R.K.; Leonowich, J.A.; Faust, L.G.

    1992-01-01

    Since the effective dose equivalent, based on the weighted sum of organ dose equivalents, is not a directly measurable quantity, it must be estimated with the assistance of computer modelling techniques and a knowledge of the incident radiation field. Although extreme accuracy is not necessary for radiation protection purposes, a few well chosen measurements are required to confirm the theoretical models. Neutron doses and dose equivalents were measured in a RANDO phantom at specific locations using thermoluminescence dosemeters, etched track dosemeters, and a 1.27 cm (1/2 in) tissue-equivalent proportional counter. The phantom was exposed to a bare and a D₂O-moderated ²⁵²Cf neutron source at the Pacific Northwest Laboratory's Low Scatter Facility. The Monte Carlo code MCNP with the MIRD-V mathematical phantom was used to model the human body and to calculate the organ doses and dose equivalents. The experimental methods are described and the results of the measurements are compared with the calculations. (author)
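
    The non-measurable quantity in question is just a weighted sum, H_E = Σ_T w_T H_T, over the organ dose equivalents H_T. A sketch using the ICRP 26 tissue weighting factors that define the effective dose equivalent (the five "remainder" organs are lumped here for brevity):

```python
# ICRP 26 tissue weighting factors (remainder organs lumped for brevity)
W_T = {"gonads": 0.25, "breast": 0.15, "red_bone_marrow": 0.12,
       "lung": 0.12, "thyroid": 0.03, "bone_surfaces": 0.03,
       "remainder": 0.30}

def effective_dose_equivalent(organ_h_sv):
    """Weighted sum of organ dose equivalents H_T (in Sv)."""
    return sum(w * organ_h_sv.get(tissue, 0.0) for tissue, w in W_T.items())

# A uniform whole-body dose equivalent of 1 Sv gives H_E = 1 Sv
he = effective_dose_equivalent({t: 1.0 for t in W_T})
```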

  14. Online tuning of impedance matching circuit for long pulse inductively coupled plasma source operation—An alternate approach

    International Nuclear Information System (INIS)

    Sudhir, Dass; Bandyopadhyay, M.; Chakraborty, A.; Kraus, W.; Gahlaut, A.; Bansal, G.

    2014-01-01

    The impedance matching circuit placed between the radio frequency (RF) generator and the plasma load determines the RF power transfer from the generator to the plasma. The impedance of the plasma load depends on the plasma parameters through the skin depth and the plasma conductivity or resistivity. Therefore, for long pulse operation of inductively coupled plasmas, particularly at high power (∼100 kW or more), where the plasma load condition may vary for different reasons (e.g., pressure, power, and thermal effects), online tuning of the impedance matching circuit through feedback is necessary. In fusion-grade ion source operation, such an online feedback methodology is not present; instead, offline remote tuning, by adjusting the matching circuit capacitors and tuning the driving frequency of the RF generator between ion source operation pulses, is envisaged. The present model is an approach to remote impedance tuning for long pulse operation, and the corresponding online impedance matching algorithm, based on RF coil antenna current measurement or coil antenna calorimetric measurement, may be useful in this regard.
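
    For illustration, the matching problem reduces to transforming the small equivalent plasma resistance seen by the coil up to the generator's 50 Ω. Below is a textbook series-L / shunt-C L-network sketch, plus the kind of load-resistance estimate an antenna-current measurement could feed into a feedback loop. This is a simplification of a real ICP match box (which typically uses two variable capacitors), and all numbers are illustrative.

```python
import math

def equivalent_load_resistance(p_forward_w, i_coil_rms_a):
    """Equivalent series load resistance from forward power and measured
    RMS coil current, R = P / I_rms**2 -- the kind of online estimate an
    antenna-current-based feedback scheme could use."""
    return p_forward_w / i_coil_rms_a ** 2

def l_network(r_load, r0, f_hz):
    """Match r_load < r0 up to r0 with a series L (load side) and a shunt C
    (generator side); textbook result Q = sqrt(r0/r_load - 1)."""
    q = math.sqrt(r0 / r_load - 1.0)
    w = 2.0 * math.pi * f_hz
    return q * r_load / w, q / (r0 * w)   # (L_series, C_shunt)

# Verify: a 5 ohm load matched to 50 ohm at 1 MHz
L, C = l_network(5.0, 50.0, 1.0e6)
w = 2.0 * math.pi * 1.0e6
z_in = 1.0 / (1.0 / (5.0 + 1j * w * L) + 1j * w * C)  # input impedance
```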

  15. Determination of nitrate pollution sources in the Marano Lagoon (Italy) by using a combined approach of hydrochemical and isotopic techniques

    Energy Technology Data Exchange (ETDEWEB)

    Saccon, Pierpaolo; Leis, Albrecht [JOANNEUM RESEARCH Forschungsgesellschaft mbH, Institute for Water, Energy and Sustainability, 8010 Graz (Austria); Marca, Alina; Kaiser, Jan; Campisi, Laura [School of Environmental Sciences, University of East Anglia, NR4 7TJ Norwich (United Kingdom); Boettcher, Michael E.; Escher, Peter [Leibniz Institute for Baltic Sea Research (IOW), Geochemistry and Isotope Geochemistry Group, D-18119 Rostock (Germany); Savarino, Joel; Erbland, Joseph [UJF-Grenoble 1/CNRS-INSU, Laboratoire de Glaciologie et Geophysique de l' Environnement (LGGE) UMR 5183 (France); Eisenhauer, Anton [GEOMAR, Helmholtz Zentrum fuer Ozean Forschung Kiel, Wischhofstr. 1-3, 24148 Kiel (Germany)

    2013-07-01

    Due to increased pollution by nitrate from intensive agricultural and other anthropogenic activities, the Marano lagoon (northeast Italy) and part of its catchment area have been investigated by applying a combined approach of hydrochemical and isotopic techniques. To identify and characterize the potential multiple sources of nitrate pollution, the isotopic compositions of nitrate (δ¹⁵N, δ¹⁸O, and Δ¹⁷O), boron (δ¹¹B), water (δ²H and δ¹⁸O), and sulphate (δ³⁴S and δ¹⁸O), as well as the chemical composition of the different water types, have been determined. In the monitoring program, water samples from the lagoon, its tributary rivers, the groundwater upwelling line, groundwater, sewage, and the open sea were collected and analyzed at quarterly intervals from 2009 to 2010. The coupled isotopic and hydrochemical results indicate that the nitrate load in the lagoon was derived not only from agricultural activities but also from other sources such as urban wastewaters, in situ nitrification, and atmospheric deposition. However, none of the samples showed the isotopic characteristics of synthetic fertilizers. (authors)
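
    Underlying such source attribution is a simple two-endmember mass balance: if two candidate sources have distinct δ¹⁵N signatures, the mixing fraction follows directly from the measured value. The endmember values below are illustrative round numbers, not the study's data:

```python
def mixing_fraction(delta_mix, delta_a, delta_b):
    """Fraction of source A in a two-endmember mixture:
    delta_mix = f*delta_a + (1 - f)*delta_b  =>  f = (mix - b) / (a - b)."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# Illustrative delta-15N endmembers: manure/sewage nitrate ~ +12 permil,
# synthetic fertilizer ~ 0 permil; a measured mixture at +9 permil
f_sewage = mixing_fraction(9.0, 12.0, 0.0)  # -> 0.75
```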

  16. Comparative Effectiveness of Usual Source of Care Approaches to Improve End-of-Life Outcomes for Children With Intellectual Disability.

    Science.gov (United States)

    Lindley, Lisa C; Cozad, Melanie J

    2017-09-01

    Children with intellectual disability (ID) are at risk for adverse end-of-life outcomes including high emergency room utilization and hospital readmissions, along with low hospice enrollment. The objective of this study was to compare the effectiveness of usual source of care approaches to improve end-of-life outcomes for children with ID. We used longitudinal California Medicaid claims data. Children were included who were 21 years with fee-for-service Medicaid claims, died between January 1, 2007, and December 31, 2010, and had a moderate-to-profound ID diagnosis. End-of-life outcomes (i.e., hospice enrollment, emergency room utilization, hospital readmissions) were measured via claims data. Our treatments were usual source of care (USC) only vs. usual source of care plus targeted case management (USC plus TCM). Using instrumental variable analysis, we compared the effectiveness of treatments on end-of-life outcomes. Ten percent of children with ID enrolled in hospice, 73% used the emergency room, and 20% had three or more hospital admissions in their last year of life. USC plus TCM relative to USC only had no effect on hospice enrollment; however, it significantly reduced the probability of emergency room utilization (B = -1.29, P life outcomes for children with ID. Further study of the extent of UCS and TCM involvement in reducing emergency room utilization and hospital readmissions at end of life is needed. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  17. MODIFIED APPROACH FOR SITE SELECTION OF UNWANTED RADIOACTIVE SEALED SOURCES DISPOSAL IN ARID COUNTRIES (CASE STUDY - EGYPT)

    International Nuclear Information System (INIS)

    ABDEL AZIZ, M.A.H.; COCHRAN, J.R.

    2008-01-01

    The aim of this study is to present a systematic methodology for siting disposal facilities for radioactive sealed sources in arid countries and to demonstrate its use in Egypt. Drawing on the experience gained from the greater confinement disposal (GCD) boreholes in Nevada, USA, the IAEA's approach to siting near-surface disposal was modified to fit the siting of borehole disposal, which suits unwanted radioactive sealed sources. The modifications divide the assessment of the surveyed area into three phases: an exclusion phase, in which areas that meet exclusion criteria are excluded; a site selection phase, in which potential sites that meet the primary criteria are identified as candidates; and a preference phase, in which the candidate sites are ranked on secondary criteria to select one or, at most, two sites. In Egypt, a considerable amount of unwanted radioactive sealed source waste has accumulated due to the peaceful uses of radioisotopes. Taking into account the regional aspects and combining the proposed methodology with a geographic information system (GIS), the Nile Delta and its valley, the Sinai Peninsula and areas of historical heritage value are excluded as potential areas for radioactive waste disposal. Using the primary search criteria, potential sites south of Kharga, the Great Sand Sea, Gilf El-Kebear and the central part of the eastern desert have been identified as candidate areas meeting the primary criteria of site selection. More detailed studies should be conducted, taking into account the secondary criteria, to choose among the above sites and select one or two sites at most

  18. Financing the HIV response in sub-Saharan Africa from domestic sources: Moving beyond a normative approach.

    Science.gov (United States)

    Remme, Michelle; Siapka, Mariana; Sterck, Olivier; Ncube, Mthuli; Watts, Charlotte; Vassall, Anna

    2016-11-01

    Despite optimism about the end of AIDS, the HIV response requires sustained financing into the future. Given flat-lining international aid, countries' willingness and ability to shoulder this responsibility will be central to access to HIV care. This paper examines the potential to expand public HIV financing, and the extent to which governments have been utilising these options. We develop and compare a normative and empirical approach. First, with data from the 14 most HIV-affected countries in sub-Saharan Africa, we estimate the potential increase in public HIV financing from economic growth, increased general revenue generation, greater health and HIV prioritisation, as well as from more unconventional and innovative sources, including borrowing, health-earmarked resources, efficiency gains, and complementary non-HIV investments. We then adopt a novel empirical approach to explore which options are most likely to translate into tangible public financing, based on cross-sectional econometric analyses of 92 low and middle-income country governments' most recent HIV expenditure between 2008 and 2012. If all fiscal sources were simultaneously leveraged in the next five years, public HIV spending in these 14 countries could increase from US$3.04 to US$10.84 billion per year. This could cover resource requirements in South Africa, Botswana, Namibia, Kenya, Nigeria, Ethiopia, and Swaziland, but not even half the requirements in the remaining countries. Our empirical results suggest that, in reality, even less fiscal space could be created (a reduction by over half) and only from more conventional sources. International financing may also crowd in public financing. Most HIV-affected lower-income countries in sub-Saharan Africa will not be able to generate sufficient public resources for HIV in the medium-term, even if they take very bold measures. Considerable international financing will be required for years to come. HIV funders will need to engage with broader

  19. The Complexity of Identifying Large Equivalence Classes

    DEFF Research Database (Denmark)

    Skyum, Sven; Frandsen, Gudmund Skovbjerg; Miltersen, Peter Bro

    1999-01-01

    We prove that at least ((3k−4)/(k(2k−3)))·C(n,2) − O(k) equivalence tests and no more than (2/k)·C(n,2) + O(n) equivalence tests, where C(n,2) = n(n−1)/2 is the number of pairs, are needed in the worst case to identify the equivalence classes with at least k members in a set of n elements. The upper bound is an improvement by a factor of 2 compared to known results...
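
    The flavor of the counting question can be illustrated with a naive class-identification sweep (my sketch, not the authors' algorithm): it tests each new element against one representative per existing class, so its worst-case test count is far above the paper's bounds.

```python
def large_classes(items, equivalent, k):
    """Identify equivalence classes with at least k members using only
    pairwise equivalence tests; also count the tests performed."""
    classes = []          # each class: list of mutually equivalent items
    tests = 0
    for x in items:
        for cls in classes:
            tests += 1
            if equivalent(x, cls[0]):   # one test per existing class
                cls.append(x)
                break
        else:
            classes.append([x])         # no class matched: start a new one
    return [c for c in classes if len(c) >= k], tests

# Example: equivalence = "same residue mod 3", keep classes of size >= 2.
big, tests = large_classes([1, 4, 7, 2, 5, 8, 3],
                           lambda a, b: a % 3 == b % 3, k=2)
```

    The paper's lower and upper bounds show how much cleverer orderings of the tests can be than this simple sweep.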

  20. Equivalent Simplification Method of Micro-Grid

    OpenAIRE

    Cai Changchun; Cao Xiangqin

    2013-01-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into the distribution network. The method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation and wind generation are each reduced to equivalents and connected in parallel at the point of common coupling. A micro-grid system is built, and three-phase and single-phase grounded faults are per...

  1. Equivalences of real submanifolds in complex space.

    OpenAIRE

    ZAITSEV, DMITRI

    2001-01-01

    We show that for any real-analytic submanifold M in C^N there is a proper real-analytic subvariety V contained in M such that for any p ∈ M \ V, any real-analytic submanifold M′ in C^N, and any p′ ∈ M′, the germs of the submanifolds M and M′ at p and p′ respectively are formally equivalent if and only if they are biholomorphically equivalent. More general results for k-equivalences are also stated and proved.

  2. Relations of equivalence of conditioned radioactive waste

    International Nuclear Information System (INIS)

    Kumer, L.; Szeless, A.; Oszuszky, F.

    1982-01-01

    Compensation for wastes remaining with the operator of a waste management centre, to be paid by the agent that caused the waste, may be assured by establishing a financial valuation (equivalence) of the wastes. Technically and logically, such an equivalence between wastes (or, specifically, between different waste categories) and their financial valuation is reasonable. In this paper, the possibility of establishing such equivalences is developed, and their suitability for waste management concepts is quantitatively expressed

  3. Equivalence in Bilingual Lexicography: Criticism and Suggestions*

    Directory of Open Access Journals (Sweden)

    Herbert Ernst Wiegand

    2011-10-01

    Full Text Available

    Abstract: A reminder of general problems in the formation of terminology, as illustrated by the German Äquivalenz (Eng. equivalence) and äquivalent (Eng. equivalent), is followed by a critical discussion of the concept of equivalence in contrastive lexicology. It is shown that especially the concept of partial equivalence is contradictory in its different manifestations. Consequently attempts are made to give a more precise indication of the concept of equivalence in metalexicography, with regard to the domain of the nominal lexicon. The problems of especially the metalexicographic concept of partial equivalence, as well as that of divergence, are fundamentally expounded. In conclusion the direction is indicated in which more appropriate metalexicographic versions of the concept of equivalence may be found.

    Keywords: EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE, SYNTAGM-EQUIVALENCE, ZERO EQUIVALENCE, CORRESPONDENCE

    Abstract: Equivalence in bilingual lexicography: criticism and suggestions. After general problems of concept formation have been recalled using the example of German Äquivalenz and German äquivalent, concepts of equivalence in contrastive lexicology are first examined critically. It is shown that the concept of partial equivalence in particular is contradictory in its various manifestations. More precise formulations of the concepts of equivalence in metalexicography, relating to the domain of the nominal lexicon, are then attempted. In particular, the metalexicographic concept of partial equivalence, as well as that of divergence, is fundamentally called into question. Finally, an indication is given of the direction in which one can go to find more appropriate metalexicographic versions of the concept of equivalence.

    Keywords: EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE

  4. Water-equivalence of gel dosimeters for radiology medical imaging

    International Nuclear Information System (INIS)

    Valente, M; Vedelago, J.; Perez, P.; Chacon, D.; Mattea, F.; Velasquez, J.

    2017-10-01

    International dosimetry protocols are based on determinations of absorbed dose to water. Ideally, the phantom material should be water equivalent; that is, it should have the same absorption and scatter properties as water. This study presents theoretical, experimental and Monte Carlo modeling of the water-equivalence of Fricke and polymer (NIPAM, PAGAT and itaconic acid ITABIS) gel dosimeters. Mass and electronic densities along with effective atomic number were calculated by means of theoretical approaches. Samples were scanned by standard computed tomography and high-resolution micro computed tomography. Photon mass attenuation coefficients and electron stopping powers were examined by Monte Carlo simulations. Theoretical, Monte Carlo and experimental results confirmed good water-equivalence for all gel dosimeters. Overall variations with respect to water in the low energy radiology range (up to 130 kVp) were found to be less than 3% on average. (Author)
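
    The effective atomic numbers mentioned above can be estimated with a power-law (Mayneord-type) formula; the exponent 2.94 and the water example below are standard textbook values, not numbers taken from this study.

```python
def effective_atomic_number(electron_fractions, m=2.94):
    """Power-law (Mayneord-type) effective atomic number.
    electron_fractions: dict mapping atomic number Z to the fraction of
    all electrons contributed by the element with that Z."""
    return sum(a * z ** m for z, a in electron_fractions.items()) ** (1.0 / m)

# Water (H2O): 10 electrons per molecule, 2 from H (Z=1), 8 from O (Z=8).
z_eff_water = effective_atomic_number({1: 0.2, 8: 0.8})
```

    A gel whose effective atomic number computed this way stays close to water's (about 7.4) supports a water-equivalence claim in the low-energy, photoelectric-dominated radiology range.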

  5. Water-equivalence of gel dosimeters for radiology medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Valente, M; Vedelago, J.; Perez, P. [Instituto de Fisica Enrique Gaviola - CONICET, Av. Medina Allende s/n, Ciudad Universitaria, X5000HUA, Cordoba (Argentina); Chacon, D.; Mattea, F. [Universidad Nacional de Cordoba, FAMAF, Laboratorio de Investigacion e Instrumentacion en Fisica Aplicada a la Medicina e Imagenes por Rayos X, Av. Medina Allende s/n, Ciudad Universitaria, X5000HUA Cordoba (Argentina); Velasquez, J., E-mail: valente@famaf.unc.edu.ar [ICOS Inmunomedica, Lago Puyehue 01745, Temuco (Chile)

    2017-10-15

    International dosimetry protocols are based on determinations of absorbed dose to water. Ideally, the phantom material should be water equivalent; that is, it should have the same absorption and scatter properties as water. This study presents theoretical, experimental and Monte Carlo modeling of the water-equivalence of Fricke and polymer (NIPAM, PAGAT and itaconic acid ITABIS) gel dosimeters. Mass and electronic densities along with effective atomic number were calculated by means of theoretical approaches. Samples were scanned by standard computed tomography and high-resolution micro computed tomography. Photon mass attenuation coefficients and electron stopping powers were examined by Monte Carlo simulations. Theoretical, Monte Carlo and experimental results confirmed good water-equivalence for all gel dosimeters. Overall variations with respect to water in the low energy radiology range (up to 130 kVp) were found to be less than 3% on average. (Author)

  6. A Model for Semantic Equivalence Discovery for Harmonizing Master Data

    Science.gov (United States)

    Piprani, Baba

    IT projects often face the challenge of harmonizing metadata and data so as to have a "single" version of the truth. Determining equivalency of multiple data instances against a given type, or set of types, is mandatory in establishing master data legitimacy in a data set that contains multiple incarnations of instances belonging to the same semantic data record. The results of a real-life application show how measuring criteria and equivalence path determination were established via a set of "probes" in conjunction with a score-card approach. There is a need for a suite of supporting models to help determine master data equivalency towards entity resolution—including mapping models, transform models, selection models, match models, an audit and control model, a scorecard model, and a rating model. An ORM schema defines the set of supporting models along with their incarnation into an attribute-based model as implemented in an RDBMS.
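
    The probe-and-scorecard idea can be sketched as a weighted rule set; the probes, weights and record fields below are hypothetical, standing in for the richer mapping/transform/match models the paper describes.

```python
def match_score(rec_a, rec_b, probes, weights):
    """Score-card style equivalence measure: each probe tests one aspect
    of two candidate master-data records; passing probes contribute
    their weight to the total score."""
    return sum(weights[name] for name, probe in probes.items()
               if probe(rec_a, rec_b))

# Hypothetical probes and weights for illustration only.
probes = {
    "same_name": lambda a, b: a["name"].lower() == b["name"].lower(),
    "same_zip":  lambda a, b: a["zip"] == b["zip"],
}
weights = {"same_name": 0.7, "same_zip": 0.3}
a = {"name": "ACME Corp", "zip": "K1A0B1"}
b = {"name": "Acme Corp", "zip": "K1A0B1"}
score = match_score(a, b, probes, weights)   # both probes pass
```

    Records whose score clears a threshold would then be treated as the same master-data entity, subject to the audit and rating models described above.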

  7. VERSE - Virtual Equivalent Real-time Simulation

    Science.gov (United States)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non real-time simulation environment that mimics the real-time environment. By creating a non real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint together with use of the same API allows users to easily run the same application in both real-time and virtual time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
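
    The core idea of running real-time software in virtual time can be sketched as an event-driven scheduler whose clock jumps to the next scheduled event instead of sleeping; this is a generic illustration of the concept, not RTAI's or VERSE's actual mechanism.

```python
import heapq

class VirtualTimeSim:
    """Minimal event-driven scheduler in virtual time: instead of
    waiting in wall-clock time, the clock jumps straight to the next
    scheduled event, so the run completes as fast as the host allows."""
    def __init__(self):
        self.now = 0.0
        self._queue = []   # heap of (time, sequence, callback)
        self._seq = 0
    def at(self, t, callback):
        heapq.heappush(self._queue, (t, self._seq, callback))
        self._seq += 1     # tiebreaker keeps insertion order at equal times
    def run(self):
        while self._queue:
            t, _, cb = heapq.heappop(self._queue)
            self.now = t   # advance the virtual clock; no sleeping
            cb(self)

log = []
sim = VirtualTimeSim()
sim.at(2.0, lambda s: log.append(("late", s.now)))
sim.at(1.0, lambda s: log.append(("early", s.now)))
sim.run()
```

    Events fire in virtual-time order regardless of the order they were scheduled, which is what lets timing-dependent code behave as it would under a real-time scheduler.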

  8. A novel library-independent approach based on high-throughput cultivation in Bioscreen and fingerprinting by FTIR spectroscopy for microbial source tracking in food industry.

    Science.gov (United States)

    Shapaval, V; Møretrø, T; Wold Åsli, A; Suso, H P; Schmitt, J; Lillehaug, D; Kohler, A

    2017-05-01

    Microbiological source tracking (MST) for the food industry is a rapidly growing area of research and technology development. In this paper, a new library-independent approach for MST is presented. It is based on high-throughput liquid microcultivation and FTIR spectroscopy. In this approach, FTIR spectra obtained from micro-organisms isolated along the production line and a product are compared to each other. We tested and evaluated the new source tracking approach by simulating a source tracking situation. In this simulation study, a selection of 20 spoilage mould strains from a total of six genera (Alternaria, Aspergillus, Mucor, Paecilomyces, Peyronellaea and Phoma) was used. The simulation of the source tracking situation showed that 80-100% of the sources could be correctly identified at the genus/species level. When performing source tracking simulations, the FTIR identification diverged for the Phoma glomerata strain in the reference collection. When reidentifying the strain by sequencing, it turned out that the strain was Peyronellaea arachidicola. The obtained results demonstrated that the proposed approach is a versatile tool for identifying sources of microbial contamination. Thus, it has a high potential for routine control in the food industry due to its low cost and analysis time. The source tracking of fungal contamination in the food industry is an important aspect of food safety. Currently, all available methods are time consuming and require the use of a reference library that may limit the accuracy of the identification. In this study, we report for the first time a library-independent FTIR spectroscopic approach for MST of fungal contamination along the food production line. It combines high-throughput microcultivation and FTIR spectroscopy and is specific at the genus and species level. Therefore, such an approach possesses great importance for food safety control in the food industry. © 2016 The Society for Applied Microbiology.
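
    The library-independent comparison can be pictured as ranking line isolates by spectral similarity to the isolate found in the product; the Pearson-correlation measure and the sample names below are my assumptions for illustration, not the authors' algorithm.

```python
def correlate(a, b):
    """Pearson correlation between two equal-length spectra."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def likely_source(product_spectrum, line_spectra):
    """Rank candidate contamination sources by spectral similarity to the
    product isolate. Library-independent: the comparison is only among
    isolates from this production line, not a reference library."""
    return max(line_spectra,
               key=lambda name: correlate(product_spectrum, line_spectra[name]))

# Hypothetical four-point spectra for two sampling sites on the line.
product = [1.0, 2.0, 3.0, 2.0]
line = {"drain": [1.1, 2.1, 2.9, 2.2], "belt": [3.0, 1.0, 0.5, 2.0]}
best = likely_source(product, line)
```

    Real FTIR spectra have thousands of wavenumber points and would normally be preprocessed (baseline correction, normalization) before comparison.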

  9. Review of Recent Development of Dynamic Wind Farm Equivalent Models Based on Big Data Mining

    Science.gov (United States)

    Wang, Chenggen; Zhou, Qian; Han, Mingzhe; Lv, Zhan’ao; Hou, Xiao; Zhao, Haoran; Bu, Jing

    2018-04-01

    Recently, the big data mining method has been applied in dynamic wind farm equivalent modeling. In this paper, its recent development is reviewed, covering present research both domestic and overseas. Firstly, studies of wind speed prediction, equivalence and its distribution in the wind farm are summarized. Secondly, two typical approaches used in the big data mining method are introduced. For single wind turbine equivalent modeling, the focus is on how to choose and identify equivalent parameters. For multiple wind turbine equivalent modeling, the focus is on the following three aspects: aggregation of different wind turbine clusters, parameters within the same cluster, and equivalencing of the collector system. Thirdly, an outlook on the future development of dynamic wind farm equivalent models is discussed.
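
    A minimal sketch of single-cluster aggregation is given below: capacities add, and per-unit parameters are capacity-weighted. This is a common simplification assumed here for illustration, not the specific method of any work reviewed above.

```python
def aggregate_cluster(turbines):
    """Aggregate a cluster of similar wind turbines into one equivalent
    machine: rated capacities are summed, and a per-unit parameter
    (here the inertia constant, in seconds) is capacity-weighted."""
    total = sum(t["capacity_mw"] for t in turbines)
    inertia = sum(t["capacity_mw"] * t["inertia_s"] for t in turbines) / total
    return {"capacity_mw": total, "inertia_s": inertia}

# Hypothetical cluster of three turbines.
cluster = [{"capacity_mw": 2.0, "inertia_s": 4.0},
           {"capacity_mw": 2.0, "inertia_s": 4.0},
           {"capacity_mw": 1.0, "inertia_s": 6.0}]
eq = aggregate_cluster(cluster)
```

    Clustering turbines by operating point before aggregating (the first of the three aspects above) keeps this weighted averaging from mixing machines in dissimilar dynamic states.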

  10. The radiobiology of boron neutron capture therapy: Are ''photon-equivalent'' doses really photon-equivalent?

    International Nuclear Information System (INIS)

    Coderre, J.A.; Diaz, A.Z.; Ma, R.

    2001-01-01

    Boron neutron capture therapy (BNCT) produces a mixture of radiation dose components. The high-linear energy transfer (LET) particles are more damaging in tissue than equal doses of low-LET radiation. Each of the high-LET components can be multiplied by an experimentally determined factor to adjust for the increased biological effectiveness, and the resulting sum expressed in photon-equivalent units (Gy-Eq). BNCT doses in photon-equivalent units are based on a number of assumptions. It may be possible to test the validity of these assumptions and the accuracy of the calculated BNCT doses by 1) comparing the effects of BNCT in other animal or biological models where the effects of photon radiation are known, or 2) examining whether endpoints reached in the BNCT dose escalation clinical trials can be related to the known response of the tissue in question to photons. The calculated Gy-Eq BNCT doses delivered to dogs and to humans with BPA and the epithermal neutron beam of the Brookhaven Medical Research Reactor were compared to expected responses to photon irradiation. The data indicate that Gy-Eq doses in brain may be underestimated. Doses to skin are consistent with the expected response to photons. Gy-Eq doses to tumor are significantly overestimated. A model system of cells in culture irradiated at various depths in a lucite phantom using the epithermal beam is under development. Preliminary data indicate that this approach can be used to detect differences in the relative biological effectiveness of the beam. The rat 9L gliosarcoma cell survival data were converted to photon-equivalent doses using the same factors assumed in the clinical studies. The results, superimposed on the survival curve derived from irradiation with Cs-137 photons, indicate the potential utility of this model system. (author)
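
    The weighted-sum construction of a photon-equivalent dose can be sketched as follows; the dose values and weighting factors below are illustrative assumptions, not the clinical factors used in the trials discussed above.

```python
def photon_equivalent_dose(components, weights):
    """Photon-equivalent dose (Gy-Eq): each physical dose component (Gy)
    is scaled by its biological effectiveness factor, then summed."""
    return sum(weights[name] * dose for name, dose in components.items())

# Illustrative physical doses (Gy) and weighting factors only.
components = {"boron": 5.0, "nitrogen": 1.0, "fast_neutron": 0.8, "photon": 2.0}
weights    = {"boron": 3.8, "nitrogen": 3.2, "fast_neutron": 3.2, "photon": 1.0}
dose_gy_eq = photon_equivalent_dose(components, weights)
```

    The paper's point is that any error in these weighting factors propagates directly into the Gy-Eq total, which is why the factors need independent biological validation.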

  11. Sound field reproduction as an equivalent acoustical scattering problem.

    Science.gov (United States)

    Fazi, Filippo Maria; Nelson, Philip A

    2013-11-01

    Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
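
    In standard notation (assumed here, not quoted from the paper), the single layer potential that models the continuous secondary source distribution reads:

```latex
p(\mathbf{x}) = \int_{\partial\Omega} G(\mathbf{x}|\mathbf{y})\,\mu(\mathbf{y})\,\mathrm{d}S(\mathbf{y}),
\qquad
G(\mathbf{x}|\mathbf{y}) = \frac{e^{-\mathrm{i}kr}}{4\pi r}, \quad r = |\mathbf{x}-\mathbf{y}|,
```

    where μ is the secondary source strength density on the boundary ∂Ω and G is the free-field Green function. The reproduction problem is to choose μ so that p matches the target field inside ∂Ω, which is formally the same task as finding the surface density induced on an acoustic scatterer.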

  12. Validity of the Aluminum Equivalent Approximation in Space Radiation Shielding

    Science.gov (United States)

    Badavi, Francis F.; Adams, Daniel O.; Wilson, John W.

    2009-01-01

    The origin of the aluminum equivalent shield approximation in space radiation analysis can be traced back to its roots in the early years of the NASA space programs (Mercury, Gemini and Apollo), wherein the primary radiobiological concern was the intense sources of ionizing radiation causing short-term effects that were thought to jeopardize the safety of the crew and hence the mission. Herein, it is shown that the aluminum equivalent shield approximation, although reasonably well suited for that time period and the application for which it was developed, is of questionable usefulness to the radiobiological concerns of routine space operations of the 21st century, which will include long stays onboard the International Space Station (ISS) and perhaps the moon. This is especially true for a risk based protection system, as appears imminent for deep space exploration, where the long-term effects of Galactic Cosmic Ray (GCR) exposure are of primary concern. The present analysis demonstrates that sufficiently large errors in the interior particle environment of a spacecraft result from the use of the aluminum equivalent approximation, and such approximations should be avoided in future astronaut risk estimates. In this study, the aluminum equivalent approximation is evaluated as a means for estimating the particle environment within a spacecraft structure induced by the GCR radiation field. For comparison, the two extremes of the GCR environment, the 1977 solar minimum and the 2001 solar maximum, are considered. These environments are coupled to the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which propagates the GCR spectra for elements with charges (Z) in the range I aluminum equivalent approximation for a good polymeric shield material such as generic polyethylene (PE). The shield thickness is represented by a 25 g/cm2 spherical shell.
Although one could imagine the progression to greater

  13. The short-circuit concept used in field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1990-01-01

    In field equivalence principles, electric and magnetic surface currents are specified and considered as impressed currents. Often the currents are placed on perfect conductors. It is shown that these currents can be treated through two approaches. The first approach is decomposition of the total field into partial fields caused by the individual impressed currents. When this approach is used, it is shown that, on a perfect electric (magnetic) conductor, impressed electric (magnetic) surface currents are short-circuited. The second approach is to note that, since Maxwell's equations and the boundary conditions are satisfied, none of the impressed currents is short-circuited and no currents are induced on the perfect conductors. Since all currents and field quantities are considered at the same time, this approach is referred to as the total-field approach. The partial-field approach leads...

  14. PHRASEOLOGISM IN GERMAN HOROSCOPES AND THEIR TURKISH EQUIVALENTS / PHRASEOLOGISMEN IN DEUTSCHEN HOROSKOPTEXTEN UND IHRE TÜRKISCHEN ENTSPRECHUNGEN

    Directory of Open Access Journals (Sweden)

    Nihan DEMİRYAY

    2016-04-01

    Full Text Available The importance of phraseology in communication is widely recognized, and it plays an essential role in written and spoken language. Consequently it has a great importance for foreign language learning. The most productive way to convey phraseological phrases is to approach them in their original contexts in the target language and identify their possible existing equivalents in the source language. The aim of the article is to identify and describe the differences and similarities between contemporary German and Turkish phraseological phrases from the viewpoint of contrastive interlanguage analysis, as seen in German horoscope texts. The object of this study is therefore to compare the usage of phraseology, to illustrate it with typical examples, and to examine the Turkish equivalence relations within German horoscope texts. The findings, obtained from a small corpus analysis, are expected to help improve the phraseological skills of German learners or the related materials in Turkey.

  15. Paleotempestological chronology developed from gas ion source AMS analysis of carbonates determined through real-time Bayesian statistical approach

    Science.gov (United States)

    Wallace, D. J.; Rosenheim, B. E.; Roberts, M. L.; Burton, J. R.; Donnelly, J. P.; Woodruff, J. D.

    2014-12-01

    Is a small quantity of high-precision ages more robust than a higher quantity of lower-precision ages for sediment core chronologies? AMS radiocarbon ages have been available to researchers for several decades now, and the precision of the technique has continued to improve. Analysis cost and time are high, though, and projects are often limited in the number of dates that can be used to develop a chronology. The Gas Ion Source at the National Ocean Sciences Accelerator Mass Spectrometry Facility (NOSAMS), while providing lower precision (uncertainty of order 100 14C yr for a sample), is significantly less expensive and far less time consuming than conventional age dating and offers the unique opportunity for large numbers of ages. Here we couple two approaches, one analytical and one statistical, to investigate the utility of an age model comprised of these lower-precision ages for paleotempestology. We use a gas ion source interfaced to a gas-bench type device to generate radiocarbon dates approximately every 5 minutes while determining the order of sample analysis using the published Bayesian accumulation histories for deposits (Bacon). During two day-long sessions, several dates were obtained from carbonate shells in living position in a sediment core comprised of sapropel gel from Mangrove Lake, Bermuda. Samples were prepared where large shells were available, and the order of analysis was determined by the depth with the highest uncertainty according to Bacon. We present the results of these analyses as well as a prognosis for a future where such age models can be constructed from many dates that are quickly obtained relative to conventional radiocarbon dates. This technique is currently limited to carbonates, but development of a system for dating organic material is underway. We will demonstrate the extent to which sacrificing some analytical precision in favor of more dates improves age models.
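
    The real-time ordering of samples described above can be sketched as repeatedly dating the depth where the current age model is least certain; this is a simplified stand-in for the Bacon-guided workflow, with hypothetical depths and uncertainties.

```python
def next_depth_to_date(depths, uncertainties, dated):
    """Pick the next core depth to radiocarbon-date: the as-yet-undated
    depth where the current age model's uncertainty is largest.
    (Simplified: real Bacon provides full posterior accumulation
    histories, not a single sigma per depth.)"""
    candidates = [(u, d) for d, u in zip(depths, uncertainties)
                  if d not in dated]
    return max(candidates)[1]

depths = [10, 20, 30, 40]       # core depths (cm), hypothetical
sigma = [50, 120, 80, 200]      # current 1-sigma age uncertainty (yr)
choice = next_depth_to_date(depths, sigma, dated={40})
```

    After each new date, the age model is re-run and the uncertainties updated, so the next sample always targets the weakest part of the chronology.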

  16. A New Approach to Sap Flow Measurement Using 3D Printed Gauges and Open-source Electronics

    Science.gov (United States)

    Ham, J. M.; Miner, G. L.; Kluitenberg, G. J.

    2015-12-01

    A new type of sap flow gauge was developed to measure transpiration from herbaceous plants using a modified heat pulse technique. Gauges were fabricated using 3D-printing technology and low-cost electronics to keep the materials cost under $20 (U.S.) per sensor. Each gauge consisted of small-diameter needle probes fastened to a 3D-printed frame. One needle contained a resistance heater to provide a 6 to 8 second heat pulse while the other probes measured the resultant temperature increase at two distances from the heat source. The data acquisition system for the gauges was built from a low-cost Arduino microcontroller. The system read the gauges every 10 minutes and stored the results on a SD card. Different numerical techniques were evaluated for estimating sap velocity from the heat pulse data - including analytical solutions and parameter estimation approaches. Prototype gauges were tested in the greenhouse on containerized corn and sunflower. Sap velocities measured by the gauges were compared to independent gravimetric measurements of whole plant transpiration. Results showed the system could measure daily transpiration to within 3% of the gravimetric measurements. Excellent agreement was observed when two gauges were attached to the same stem. Accuracy was not affected by rapidly changing transpiration rates observed under partly cloudy conditions. The gauge-based estimates of stem thermal properties suggested the system may also detect the onset of water stress. A field study showed the gauges could run for 1 to 2 weeks on a small battery pack. Sap flow measurements on multiple corn stems were scaled up by population to estimate field-scale transpiration. During full canopy cover, excellent agreement was observed between the scaled-up sap flow measurements and reference crop evapotranspiration calculated from weather data. Data also showed promise as a way to estimate real-time canopy resistance required for model verification and development. Given the low
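
    One widely used analytical solution for heat pulse data is the heat-ratio calculation sketched below. Note that the gauge described above measures temperature rise at two distances on the same side of the heater, so this symmetric-probe formula and its parameter values are illustrative assumptions, not the authors' method.

```python
import math

def heat_pulse_velocity(dT_down, dT_up, k=0.0025, x=0.6):
    """Heat-ratio-method heat pulse velocity in cm/h.
    dT_down, dT_up: temperature rises (deg C) at equal distances x (cm)
    downstream and upstream of the heater; k: thermal diffusivity of
    the stem (cm^2/s). All parameter values here are assumptions."""
    return (k / x) * math.log(dT_down / dT_up) * 3600.0

v = heat_pulse_velocity(1.2, 0.8)   # larger downstream rise => upward flow
```

    Equal rises on both sides give zero velocity, and the dependence on the thermal diffusivity k is what lets the same probes also report stem thermal properties, as noted above for water stress detection.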

  17. Equivalent drawbead performance in deep drawing simulations

    NARCIS (Netherlands)

    Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han

    1999-01-01

    Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the

  18. On uncertainties in definition of dose equivalent

    International Nuclear Information System (INIS)

    Oda, Keiji

    1995-01-01

    The author has always entertained the doubt that, in a neutron field, if the measured value of the absorbed dose with a tissue equivalent ionization chamber is 1.02±0.01 mGy, the dose equivalent may be taken as 10.2±0.1 mSv. Should it be 10.2, or 11? The author considers it to be 10 or 20. Even if effort is exerted on the precision measurement of absorbed dose, if the coefficient multiplying it is not precise, the result is meaningless. [Absorbed dose] x [Radiation quality factor] = [Dose equivalent] seems peculiar. How accurately can dose equivalent be evaluated? The descriptions related to uncertainties in the publications of ICRU and ICRP are introduced, which concern the radiation quality factor, the accuracy of measuring dose equivalent and so on. Dose equivalent shows the criterion for the degree of risk, or it is considered only as a controlling quantity. The description in the 1973 ICRU report related to dose equivalent and its unit is cited. It was concluded that dose equivalent can be considered only as the absorbed dose multiplied by a dimensionless factor. The author presents these questions. (K.I.)
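
    The author's worry can be made quantitative with simple error propagation: if independent relative uncertainties add in quadrature, a precisely measured absorbed dose multiplied by an imprecise quality factor still yields an imprecise dose equivalent. The 50% uncertainty on Q below is a hypothetical figure chosen to echo the author's "10 or 20" remark.

```python
import math

def dose_equivalent(D, sigma_D, Q, sigma_Q):
    """Dose equivalent H = Q * D (mSv if D is in mGy), with relative
    uncertainties combined in quadrature (independent errors assumed)."""
    H = Q * D
    rel = math.sqrt((sigma_D / D) ** 2 + (sigma_Q / Q) ** 2)
    return H, H * rel

# The author's example: D = 1.02 +/- 0.01 mGy; suppose Q = 10 with a
# hypothetical 50% uncertainty -- the quality factor dominates the result.
H, sigma_H = dose_equivalent(1.02, 0.01, 10.0, 5.0)
```

    The 1% dose uncertainty is swamped: the combined relative uncertainty is essentially the 50% assigned to Q, which is exactly the author's point about quoting 10.2±0.1 mSv.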

  19. Orientifold Planar Equivalence: The Chiral Condensate

    DEFF Research Database (Denmark)

    Armoni, Adi; Lucini, Biagio; Patella, Agostino

    2008-01-01

    The recently introduced orientifold planar equivalence is a promising tool for solving non-perturbative problems in QCD. One of the predictions of orientifold planar equivalence is that the chiral condensates of a theory with $N_f$ flavours of Dirac fermions in the symmetric (or antisymmetric...

  20. 7 CFR 1005.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1005.54 Section 1005.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1005.54 Equivalent price. See § 1000.54. Uniform Prices ...

  1. 7 CFR 1126.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1126.54 Section 1126.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1126.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  2. 7 CFR 1001.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1001.54 Section 1001.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1001.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  3. 7 CFR 1032.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1032.54 Section 1032.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1032.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  4. 7 CFR 1124.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1124.54 Section 1124.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Class Prices § 1124.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  5. 7 CFR 1030.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1030.54 Section 1030.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1030.54 Equivalent price. See § 1000.54. ...

  6. 7 CFR 1033.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1033.54 Section 1033.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1033.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  7. 7 CFR 1131.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1131.54 Section 1131.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1131.54 Equivalent price. See § 1000.54. Uniform Prices ...

  8. 7 CFR 1006.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1006.54 Section 1006.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1006.54 Equivalent price. See § 1000.54. Uniform Prices ...

  9. 7 CFR 1007.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1007.54 Section 1007.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1007.54 Equivalent price. See § 1000.54. Uniform Prices ...

  10. 7 CFR 1000.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1000.54 Section 1000.54 Agriculture... Prices § 1000.54 Equivalent price. If for any reason a price or pricing constituent required for computing the prices described in § 1000.50 is not available, the market administrator shall use a price or...

  11. Finding small equivalent decision trees is hard

    NARCIS (Netherlands)

    Zantema, H.; Bodlaender, H.L.

    2000-01-01

    Two decision trees are called decision equivalent if they represent the same function, i.e., they yield the same result for every possible input. We prove that, given a decision tree and a number, deciding whether there is a decision-equivalent decision tree of size at most that number is NP-complete.
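
    The notion of decision equivalence in this record can be illustrated with a brute-force sketch (my own illustration of the definition, not the paper's hardness construction): two trees over a finite set of boolean variables are equivalent iff they agree on every assignment.

    ```python
    from itertools import product

    # A tree is either ("leaf", value) or ("node", var, low, high),
    # where low/high are the subtrees for var = False / var = True.
    def evaluate(tree, assignment):
        """Evaluate a decision tree on a dict mapping variable -> bool."""
        if tree[0] == "leaf":
            return tree[1]
        _, var, low, high = tree
        return evaluate(high if assignment[var] else low, assignment)

    def decision_equivalent(t1, t2, variables):
        """Check equivalence by enumerating all 2^n assignments."""
        for bits in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, bits))
            if evaluate(t1, assignment) != evaluate(t2, assignment):
                return False
        return True

    # "x AND y" expressed two ways: testing x first, or y first.
    a = ("node", "x", ("leaf", False),
         ("node", "y", ("leaf", False), ("leaf", True)))
    b = ("node", "y", ("leaf", False),
         ("node", "x", ("leaf", False), ("leaf", True)))
    print(decision_equivalent(a, b, ["x", "y"]))  # True
    ```

    The exponential enumeration is exactly what the NP-completeness result concerns: checking equivalence against all candidate small trees cannot, in general, be done efficiently.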

  12. What is Metaphysical Equivalence? | Miller | Philosophical Papers

    African Journals Online (AJOL)

    Theories are metaphysically equivalent just if there is no fact of the matter that could render one theory true and the other false. In this paper I argue that if we are judiciously to resolve disputes about whether theories are equivalent or not, we need to develop testable criteria that will give us epistemic access to the obtaining ...

  13. EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  14. Equivalence in Ventilation and Indoor Air Quality

    Energy Technology Data Exchange (ETDEWEB)

    Sherman, Max; Walker, Iain; Logue, Jennifer

    2011-08-01

    We ventilate buildings to provide acceptable indoor air quality (IAQ). Ventilation standards (such as American Society of Heating, Refrigerating, and Air-Conditioning Engineers [ASHRAE] Standard 62) specify minimum ventilation rates without taking into account the impact of those rates on IAQ. Innovative ventilation management is often a desirable element of reducing energy consumption or improving IAQ or comfort. Variable ventilation is one innovative strategy. To use variable ventilation in a way that meets standards, it is necessary to have a method for determining equivalence in terms of either ventilation or indoor air quality. This study develops methods to calculate either equivalent ventilation or equivalent IAQ. We demonstrate that equivalent ventilation can be used as the basis for dynamic ventilation control, reducing peak load and infiltration of outdoor contaminants. We also show that equivalent IAQ could allow some contaminants to exceed current standards if other contaminants are more stringently controlled.
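
    The equivalence idea in this record can be sketched with a deliberately simplified steady-state model (my own simplification, not the authors' transient dose calculation): for a continuously emitted contaminant, concentration scales as 1/Q, so a variable schedule is "equivalent" to a constant reference rate when its time-averaged relative exposure does not exceed 1.

    ```python
    def mean_relative_exposure(rates, reference_rate):
        """Under a steady-state, single-zone mass balance (C = S/Q for a
        continuously emitted contaminant), relative concentration at each
        step is reference_rate / Q.  Equivalence to the constant reference
        rate requires the time average of this ratio to be <= 1.
        Simplified illustration only."""
        return sum(reference_rate / q for q in rates) / len(rates)

    # Constant ventilation at the reference rate is equivalent by definition.
    print(mean_relative_exposure([50.0, 50.0, 50.0], 50.0))  # 1.0

    # Halving the rate half the time is NOT offset by doubling it the rest
    # of the time: exposure scales with 1/Q, which is convex in Q.
    print(mean_relative_exposure([25.0, 100.0], 50.0))  # 1.25
    ```

    The second case shows why equivalence must be computed rather than assumed: averaging the airflow rates themselves (62.5 here, above the reference 50) would wrongly suggest compliance.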

  15. Beyond Language Equivalence on Visibly Pushdown Automata

    DEFF Research Database (Denmark)

    Srba, Jiri

    2009-01-01

    We study (bi)simulation-like preorder/equivalence checking on the class of visibly pushdown automata and its natural subclasses visibly BPA (Basic Process Algebra) and visibly one-counter automata. We describe generic methods for proving complexity upper and lower bounds for a number of studied...... preorders and equivalences like simulation, completed simulation, ready simulation, 2-nested simulation preorders/equivalences and bisimulation equivalence. Our main results are that all the mentioned equivalences and preorders are EXPTIME-complete on visibly pushdown automata, PSPACE-complete on visibly...... one-counter automata and P-complete on visibly BPA. Our PSPACE lower bound for visibly one-counter automata improves also the previously known DP-hardness results for ordinary one-counter automata and one-counter nets. Finally, we study regularity checking problems for visibly pushdown automata...

  16. A Soft Computing Approach to Crack Detection and Impact Source Identification with Field-Programmable Gate Array Implementation

    Directory of Open Access Journals (Sweden)

    Arati M. Dixit

    2013-01-01

    Full Text Available The real-time nondestructive testing (NDT) for crack detection and impact source identification (CDISI) has attracted researchers from diverse areas. This is apparent from the current work in the literature. CDISI has usually been performed by visual assessment of waveforms generated by a standard data acquisition system. In this paper we suggest an automation of CDISI for metal armor plates using a soft computing approach, developing a fuzzy inference system to deal effectively with this problem. It is also advantageous to develop a chip that can contribute towards real-time CDISI. The objective of this paper is to report on efforts to develop an automated CDISI procedure and to formulate a technique such that the proposed method can be easily implemented on a chip. The CDISI fuzzy inference system is developed using MATLAB’s fuzzy logic toolbox. A VLSI circuit for CDISI is developed on the basis of the fuzzy logic model using Verilog, a hardware description language (HDL). The Xilinx ISE WebPACK 9.1i is used for design, synthesis, implementation, and verification. The CDISI field-programmable gate array (FPGA) implementation is done using Xilinx’s Spartan 3 FPGA. SynaptiCAD’s Verilog simulators VeriLogger PRO and ModelSim are used as the software simulation and debug environment.
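
    The fuzzy inference step described in this record can be sketched minimally as follows. The membership shapes, rule set, and input names below are hypothetical (the abstract does not disclose the actual rule base); this shows only the generic min/max rule evaluation that such a system performs.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b,
        falling to zero at c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def crack_severity(amplitude, frequency):
        """Two hypothetical rules, min for AND, weighted-average
        (Sugeno-style) defuzzification:
           R1: IF amplitude is high AND frequency is high THEN severity 0.9
           R2: IF amplitude is low                        THEN severity 0.1
        Inputs are assumed normalised to [0, 1.6]."""
        amp_high = tri(amplitude, 0.4, 1.0, 1.6)
        freq_high = tri(frequency, 0.4, 1.0, 1.6)
        amp_low = tri(amplitude, -0.6, 0.0, 0.6)
        r1 = min(amp_high, freq_high)   # firing strength of rule 1
        r2 = amp_low                    # firing strength of rule 2
        total = r1 + r2
        return (0.9 * r1 + 0.1 * r2) / total if total else 0.0
    ```

    For example, `crack_severity(0.9, 0.8)` fires only rule 1 and returns roughly 0.9, while `crack_severity(0.1, 0.2)` fires only rule 2 and returns roughly 0.1. The paper's contribution is mapping this kind of inference onto an FPGA rather than the inference scheme itself.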

  17. Assessment of Pressure Sources and Water Body Resilience: An Integrated Approach for Action Planning in a Polluted River Basin.

    Science.gov (United States)

    Mirauda, Domenica; Ostoich, Marco

    2018-02-23

    The present study develops an integrated methodology combining the results of the water-quality classification, according to the Water Framework Directive 2000/60/EC-WFD, with those of a mathematical integrity model. It is able to analyse the potential anthropogenic impacts on the receiving water body and to help municipal decision-makers when selecting short/medium/long-term strategic mitigation actions to be performed in a territory. Among the most important causes of water-quality degradation in a river, the focus is placed on pollutants from urban wastewater. In particular, the proposed approach evaluates the efficiency and the accurate localisation of treatment plants in a basin, as well as the capacity of its river to bear the residual pollution loads after the treatment phase. The methodology is applied to a sample catchment area, located in northern Italy, where water quality is strongly affected by high population density and by the presence of agricultural and industrial activities. Nearly 10 years of water-quality data collected through official monitoring are considered for the implementation of the system. The sample basin shows different real and potential pollution conditions, according to the resilience of the river and surroundings, together with the point and diffuse pressure sources acting on the receiving body.

  18. Marine Actinobacteria as a source of compounds for phytopathogen control: An integrative metabolic-profiling / bioactivity and taxonomical approach.

    Directory of Open Access Journals (Sweden)

    Luz A Betancur

    Full Text Available Marine bacteria are considered promising sources for the discovery of novel biologically active compounds. In this study, samples of sediment, invertebrates and algae were collected from the Providencia and Santa Catalina coral reef (Colombian Caribbean Sea) with the aim of isolating Actinobacteria-like strains able to produce antimicrobial and quorum quenching compounds against pathogens. Several approaches were used to select actinobacterial isolates, obtaining 203 strains from all samples. According to their 16S rRNA gene sequencing, a total of 24 strains were classified within Actinobacteria, represented by three genera: Streptomyces, Micromonospora, and Gordonia. In order to assess their metabolic profiles, the actinobacterial strains were grown in liquid cultures, and LC-MS-based analyses of ethyl acetate fractions were performed. Based on taxonomical classification, screening information on activity against phytopathogenic strains and quorum quenching activity, as well as metabolic profiling, six out of the 24 isolates were selected for follow-up with chemical isolation and structure identification analyses of putative metabolites involved in antimicrobial activities.

  19. Point Source contamination approach for hydrological risk assessment of a major hypothetical accident from second research reactor at Inshas site

    International Nuclear Information System (INIS)

    Sadek, M.A.; Tawfik, F.S.

    2002-01-01

    The point source contamination mechanism and the deterministic conservative approach have been implemented to demonstrate the hazards of hydrological pollution due to a major hypothetical accident in the second research reactor at Inshas. The radioactive inventory is assumed to be dissolved in 75% of the cooling water (25% is lost) and to come directly into contact with groundwater, moving down gradient. Five radioisotopes of the entire inventory (I-129, Sr-90, Ru-106, Cs-134 and Cs-137) are found to be highly durable and to represent an environmental vulnerability. Their downstream spread indices, namely C_max (maximum concentration at the focus of the moving ellipse), delta (pollution duration at different distances), A (polluted area at different distances) and X_min (safety distance from the reactor), were calculated based on analytical solutions of the convection-dispersion partial differential equation for absorbable and decaying species. The largest downstream contamination range was found for Sr-90 and Ru-106, but still with no potential hazard. The geochemical and hydrological parameters of the water-bearing formations play a great role in buffering and limiting the radiation effects; they reduce the retention time of the radioisotopes by several orders of magnitude over the polluted distances. Sensitivity analysis of the computed pollution ranges shows low sensitivity to possible variations in the activity of the nuclide inventory, dispersivity and saturated thickness, and high sensitivity to possible variations in groundwater velocity and retention factors.
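
    The abstract does not write out the governing equation; a standard one-dimensional form of the convection-dispersion equation for a sorbing, decaying species (symbols are the conventional ones and are my assumption, not quoted from the paper) is:

    ```latex
    R \frac{\partial C}{\partial t}
      = D \frac{\partial^2 C}{\partial x^2}
      - v \frac{\partial C}{\partial x}
      - \lambda R C
    ```

    where $C$ is concentration, $v$ the groundwater pore velocity, $D$ the dispersion coefficient, $R$ the retardation factor accounting for sorption, and $\lambda$ the radioactive decay constant. The sensitivity pattern reported above is consistent with this form: $v$ and $R$ enter the advective travel time directly, while the source activity only scales $C$ linearly.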

  20. Assessment of Pressure Sources and Water Body Resilience: An Integrated Approach for Action Planning in a Polluted River Basin

    Directory of Open Access Journals (Sweden)

    Domenica Mirauda

    2018-02-01

    Full Text Available The present study develops an integrated methodology combining the results of the water-quality classification, according to the Water Framework Directive 2000/60/EC—WFD, with those of a mathematical integrity model. It is able to analyse the potential anthropogenic impacts on the receiving water body and to help municipal decision-makers when selecting short/medium/long-term strategic mitigation actions to be performed in a territory. Among the most important causes of water-quality degradation in a river, the focus is placed on pollutants from urban wastewater. In particular, the proposed approach evaluates the efficiency and the accurate localisation of treatment plants in a basin, as well as the capacity of its river to bear the residual pollution loads after the treatment phase. The methodology is applied to a sample catchment area, located in northern Italy, where water quality is strongly affected by high population density and by the presence of agricultural and industrial activities. Nearly 10 years of water-quality data collected through official monitoring are considered for the implementation of the system. The sample basin shows different real and potential pollution conditions, according to the resilience of the river and surroundings, together with the point and diffuse pressure sources acting on the receiving body.